Subject: Re: [PATCH net v1 2/2] gve: use max allowed ring size for ZC page_pools
From: Jakub Kicinski
Date: 2025-11-08 02:04 UTC
  To: Dragos Tatulea
  Cc: Mina Almasry, netdev, linux-kernel, Joshua Washington,
	Harshitha Ramamurthy, Andrew Lunn, David S. Miller, Eric Dumazet,
	Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas,
	Simon Horman, Willem de Bruijn, ziweixiao, Vedant Mathur,
	io-uring, David Wei

On Fri, 7 Nov 2025 13:35:44 +0000 Dragos Tatulea wrote:
> On Thu, Nov 06, 2025 at 05:18:33PM -0800, Jakub Kicinski wrote:
> > On Thu, 6 Nov 2025 17:25:43 +0000 Dragos Tatulea wrote:  
> > > I see a similar issue with io_uring as well: for a 9K MTU with a 4K
> > > ring size there are ~1% allocation errors during a simple zcrx test.
> > > 
> > > mlx5 calculates 16K pages and the io_uring zcrx buffer matches that
> > > size exactly (16K * 4K). Increasing the buffer doesn't help because
> > > the pool size is still what the driver asked for (and the internal
> > > pool limit applies as well). Even worse: eventually ENOSPC is
> > > returned to the application. But maybe that error has a different fix.  
> > 
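FWIW, the arithmetic as I understand it (a quick sketch; the
power-of-two rounding of the per-packet buffer is my guess at the
mlx5 layout, not taken from the driver):

#include <stdio.h>

int main(void)
{
	unsigned int ring_entries = 4096;	/* 4K RX ring */
	unsigned int mtu = 9000;		/* 9K jumbo MTU */
	unsigned int page_size = 4096;
	unsigned int stride = page_size;

	/* round the per-packet buffer up to a power of two: 9K -> 16K */
	while (stride < mtu)
		stride <<= 1;

	/* 4 pages/entry * 4096 entries = 16384 pages = 64 MiB */
	printf("%u pages (%u MiB)\n",
	       ring_entries * (stride / page_size),
	       (ring_entries * stride) >> 20);
	return 0;
}
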
> > Hm, yes, did you trace it all the way to where it comes from?
> > page pool itself does not have any ENOSPC AFAICT. If the cache
> > is full we free the page back to the provider via .release_netmem
> >  
> Yes, I did. It happens in io_cqe_cache_refill() when there are no more
> CQEs:
> https://elixir.bootlin.com/linux/v6.17.7/source/io_uring/io_uring.c#L775
> 
> Looking at the zcrx code, I see that the number of RQ entries and CQ
> entries is 4K, which matches the device ring size but not the number
> of pages available in the buffer:
> https://github.com/isilence/liburing/blob/zcrx/rx-buf-len/examples/zcrx.c#L410
> https://github.com/isilence/liburing/blob/zcrx/rx-buf-len/examples/zcrx.c#L176
> 
> Doubling the CQ size (or both the RQ and CQ sizes) makes the ENOSPC
> go away.
> 
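FWIW, sizing the CQ ring independently of the SQ ring is easy to do
with liburing (a sketch; the 2x factor just mirrors the doubling that
worked for you, it is not a derived bound):

#include <liburing.h>

/* Ask for a CQ twice as deep as the SQ so multishot zcrx completions
 * are less likely to outrun the application.
 */
static int setup_ring(struct io_uring *ring, unsigned entries)
{
	struct io_uring_params p = {
		.flags      = IORING_SETUP_CQSIZE,
		.cq_entries = entries * 2,
	};

	return io_uring_queue_init_params(entries, ring, &p);
}
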
> > > Adapting the pool size to the io_uring buffer size works very well. The
> > > allocation errors are gone and performance is improved.
> > > 
> > > AFAIU, a page_pool with underlying pre-allocated memory is not
> > > really a cache, so it is useful for it to be able to adapt to the
> > > capacity reserved by the application.
> > > 
> > > Maybe one could argue that the zcrx example from liburing could
> > > also be improved. But one thing is certain: aligning the buffer
> > > size to the page_pool size the driver calculates from ring size
> > > and MTU is a hassle. If the application provides a large enough
> > > buffer, things should "just work".  
> > 
> > Yes, there should be no ENOSPC. I think io_uring is more thorough
> > in handling the corner cases, so what you're describing is more of
> > a concern.
> 
> Is this error something that io_uring should fix, or is it similar to
> EAGAIN, where the application has to retry?

Not sure.. let me CC them.

> > Keep in mind that we expect multiple page pools from one provider.
> > We want the pages to flow back to the MP level so other PPs can grab
> > them.
> >  
> Oh, right, I forgot... And currently this can only happen for devmem,
> right?

Right, though I think David is also working on some queue sharing?

> Still, this is an additional reason to give more control to the MP
> over the page_pool config, right?

I'm really not sure this one needs to be exposed via the MP rather
than just netdev-nl. But yes, I'd imagine the driver default may be
sub-optimal in either direction, so giving the user control over the
sizing would be good.
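
To make that concrete: the knob already exists as pool_size in
struct page_pool_params, and IIRC page_pool_init() caps it at 32K
entries (-E2BIG above that). A provider override could be as small as
the sketch below; mp_capacity_pages is a made-up stand-in for however
the MP would report the size of the registered area:

#include <linux/minmax.h>
#include <net/page_pool/types.h>

/* Sketch only: bump the driver's ring-based estimate up to what the
 * memory provider can actually back, within the internal cap.
 */
static void pp_adjust_pool_size(struct page_pool_params *pp,
				unsigned long mp_capacity_pages)
{
	if (mp_capacity_pages > pp->pool_size)
		pp->pool_size = min(mp_capacity_pages, 32768UL);
}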
