From: Byungchul Park <byungchul@sk.com>
To: almasrymina@google.com
Cc: davem@davemloft.net, edumazet@google.com, horms@kernel.org,
	hawk@kernel.org, ilias.apalodimas@linaro.org, sdf@fomichev.me,
	dw@davidwei.uk, ap420073@gmail.com, dtatulea@nvidia.com,
	toke@redhat.com, io-uring@vger.kernel.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	kernel_team@skhynix.com, max.byungchul.park@gmail.com,
	ziy@nvidia.com, willy@infradead.org, david@redhat.com,
	axboe@kernel.dk, kuba@kernel.org, pabeni@redhat.com,
	asml.silence@gmail.com
Subject: Re: [PATCH net-next] page_pool: check if nmdesc->pp is !NULL to confirm its usage as pp for net_iov
Date: Fri, 17 Oct 2025 15:26:14 +0900
Message-ID: <20251017062614.GA57077@system.software.com>
In-Reply-To: <20251016233139.GA37304@system.software.com>
On Fri, Oct 17, 2025 at 08:31:39AM +0900, Byungchul Park wrote:
> On Thu, Oct 16, 2025 at 03:36:57PM +0900, Byungchul Park wrote:
> > The ->pp_magic field in struct page is currently used to identify
> > whether a page belongs to a page pool.  However, ->pp_magic will be
> > removed, and a page type bit in struct page, e.g. PGTY_netpp, should
> > be used for that purpose instead.
> > 
> > As a preparation, the check for net_iov, which is not page-backed,
> > should avoid using ->pp_magic, since net_iov has nothing to do with
> > page types.  Instead, nmdesc->pp can be used to tell whether a
> > net_iov (or its nmdesc) belongs to a page pool, by making sure that
> > nmdesc->pp is NULL otherwise.
> > 
> > For page-backed netmem, leave the check unchanged; for net_iov, make
> > sure nmdesc->pp is initialized to NULL and use it for the check.
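In short, the check this patch ends up with boils down to the sketch
below.  This is only an illustration simplified from the hunks quoted
further down; the _before/_after suffixes are mine, not in the patch:

  static inline bool netmem_is_pp_before(netmem_ref netmem)
  {
  	/* pages and net_iovs both relied on the pp_magic signature */
  	return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
  }

  static inline bool netmem_is_pp_after(netmem_ref netmem)
  {
  	/* a net_iov is identified by a non-NULL nmdesc->pp instead,
  	 * which is why non-pp net_iovs must keep ->pp NULL'd
  	 */
  	if (netmem_is_net_iov(netmem))
  		return !!netmem_to_nmdesc(netmem)->pp;

  	return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
  }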
Hi Mina,
This patch extracts the network part from the following work:
  https://lore.kernel.org/all/20250729110210.48313-1-byungchul@sk.com/
Can I keep your Reviewed-by tag on this patch?
  Reviewed-by: Mina Almasry <almasrymina@google.com>
	Byungchul
> +cc David Hildenbrand <david@redhat.com>
> +cc Zi Yan <ziy@nvidia.com>
> +cc willy@infradead.org
> 
> 	Byungchul
> > 
> > Signed-off-by: Byungchul Park <byungchul@sk.com>
> > ---
> >  io_uring/zcrx.c        |  4 ++++
> >  net/core/devmem.c      |  1 +
> >  net/core/netmem_priv.h |  6 ++++++
> >  net/core/page_pool.c   | 16 ++++++++++++++--
> >  4 files changed, 25 insertions(+), 2 deletions(-)
> > 
> > diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> > index 723e4266b91f..cf78227c0ca6 100644
> > --- a/io_uring/zcrx.c
> > +++ b/io_uring/zcrx.c
> > @@ -450,6 +450,10 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
> >  		area->freelist[i] = i;
> >  		atomic_set(&area->user_refs[i], 0);
> >  		niov->type = NET_IOV_IOURING;
> > +
> > +		/* niov->desc.pp is already initialized to NULL by
> > +		 * kvmalloc_array(__GFP_ZERO).
> > +		 */
> >  	}
> >  
> >  	area->free_count = nr_iovs;
> > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > index d9de31a6cc7f..f81b700f1fd1 100644
> > --- a/net/core/devmem.c
> > +++ b/net/core/devmem.c
> > @@ -291,6 +291,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
> >  			niov = &owner->area.niovs[i];
> >  			niov->type = NET_IOV_DMABUF;
> >  			niov->owner = &owner->area;
> > +			niov->desc.pp = NULL;
> >  			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
> >  						      net_devmem_get_dma_addr(niov));
> >  			if (direction == DMA_TO_DEVICE)
> > diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
> > index 23175cb2bd86..fb21cc19176b 100644
> > --- a/net/core/netmem_priv.h
> > +++ b/net/core/netmem_priv.h
> > @@ -22,6 +22,12 @@ static inline void netmem_clear_pp_magic(netmem_ref netmem)
> >  
> >  static inline bool netmem_is_pp(netmem_ref netmem)
> >  {
> > +	/* Use ->pp to identify whether a net_iov belongs to a page
> > +	 * pool, which requires that non-pp net_iovs have ->pp NULL'd.
> > +	 */
> > +	if (netmem_is_net_iov(netmem))
> > +		return !!netmem_to_nmdesc(netmem)->pp;
> > +
> >  	return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
> >  }
> >  
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 1a5edec485f1..2756b78754b0 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -699,7 +699,13 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict)
> >  void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)
> >  {
> >  	netmem_set_pp(netmem, pool);
> > -	netmem_or_pp_magic(netmem, PP_SIGNATURE);
> > +
> > +	/* For page-backed netmem, pp_magic identifies whether it's pp.
> > +	 * For net_iov, nmdesc->pp is guaranteed to be non-NULL if it's
> > +	 * pp and NULL if it's not.
> > +	 */
> > +	if (!netmem_is_net_iov(netmem))
> > +		netmem_or_pp_magic(netmem, PP_SIGNATURE);
> >  
> >  	/* Ensuring all pages have been split into one fragment initially:
> >  	 * page_pool_set_pp_info() is only called once for every page when it
> > @@ -714,7 +720,13 @@ void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)
> >  
> >  void page_pool_clear_pp_info(netmem_ref netmem)
> >  {
> > -	netmem_clear_pp_magic(netmem);
> > +	/* For page-backed netmem, pp_magic identifies whether it's pp.
> > +	 * For net_iov, nmdesc->pp is guaranteed to be non-NULL if it's
> > +	 * pp and NULL if it's not.
> > +	 */
> > +	if (!netmem_is_net_iov(netmem))
> > +		netmem_clear_pp_magic(netmem);
> > +
> >  	netmem_set_pp(netmem, NULL);
> >  }
> >  
> > 
> > base-commit: e1f5bb196f0b0eee197e06d361f8ac5f091c2963
> > -- 
> > 2.17.1
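To spell out the invariant this patch maintains over a net_iov's
lifetime (a rough sketch in terms of the helpers above, not additional
patch content):

  /* 1. creation (devmem.c / zcrx.c): a net_iov starts out non-pp */
  niov->desc.pp = NULL;		/* devmem sets it explicitly; zcrx relies
  				 * on kvmalloc_array(__GFP_ZERO) having
  				 * zeroed it */

  /* 2. page_pool_set_pp_info(): the pool takes ownership */
  netmem_set_pp(netmem, pool);	/* nmdesc->pp becomes non-NULL */

  /* 3. page_pool_clear_pp_info(): the pool releases it */
  netmem_set_pp(netmem, NULL);	/* back to non-pp */

so netmem_is_pp() only has to test nmdesc->pp for net_iovs.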