public inbox for io-uring@vger.kernel.org
 help / color / mirror / Atom feed
From: Stanislav Fomichev <stfomichev@gmail.com>
To: Pavel Begunkov <asml.silence@gmail.com>
Cc: Jakub Kicinski <kuba@kernel.org>,
	netdev@vger.kernel.org, io-uring@vger.kernel.org,
	Eric Dumazet <edumazet@google.com>,
	Willem de Bruijn <willemb@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	andrew+netdev@lunn.ch, horms@kernel.org, davem@davemloft.net,
	sdf@fomichev.me, almasrymina@google.com, dw@davidwei.uk,
	michael.chan@broadcom.com, dtatulea@nvidia.com,
	ap420073@gmail.com
Subject: Re: [RFC v1 00/22] Large rx buffer support for zcrx
Date: Mon, 28 Jul 2025 15:06:37 -0700	[thread overview]
Message-ID: <aIf0bXkt4bvA-0lC@mini-arch> (raw)
In-Reply-To: <0dbb74c0-fcd6-498f-8e1e-3a222985d443@gmail.com>

On 07/28, Pavel Begunkov wrote:
> On 7/28/25 21:21, Stanislav Fomichev wrote:
> > On 07/28, Pavel Begunkov wrote:
> > > On 7/28/25 18:13, Stanislav Fomichev wrote:
> ...>>> Supporting big buffers is the right direction, but I have the same
> > > > feedback:
> > > 
> > > Let me actually check the feedback for the queue config RFC...
> > > 
> > > it would be nice to fit a cohesive story for the devmem as well.
> > > 
> > > Only the last patch is zcrx specific, the rest is agnostic,
> > > devmem can absolutely reuse that. I don't think there are any
> > > issues wiring up devmem?
> > 
> > Right, but the patch number 2 exposes per-queue rx-buf-len which
> > I'm not sure is the right fit for devmem, see below. If all you
> 
> I guess you're talking about uapi setting it, because as an
> internal per queue parameter IMHO it does make sense for devmem.
> 
> > care is exposing it via io_uring, maybe don't expose it from netlink for
> 
> Sure, I can remove the set operation.
> 
> > now? Although I'm not sure I understand why you're also passing
> > this per-queue value via io_uring. Can you not inherit it from the
> > queue config?
> 
> It's not a great option. It complicates user space with netlink.
> And there are convenience configuration features in the future
> that requires io_uring to parse memory first. E.g. instead of
> user specifying a particular size, it can say "choose the largest
> length under 32K that the backing memory allows".

Don't you already need a bunch of netlink to setup rss and flow
steering? And if we end up adding queue api, you'll have to call that
one over netlink also.

> > > > We should also aim for another use-case where we allocate page pool
> > > > chunks from the huge page(s),
> > > 
> > > Separate huge page pool is a bit beyond the scope of this series.
> > > 
> > > this should push the perf even more.
> > > 
> > > And not sure about "even more" is from, you can already
> > > register a huge page with zcrx, and this will allow to chunk
> > > them to 32K or so for hardware. Is it in terms of applicability
> > > or you have some perf optimisation ideas?
> > 
> > What I'm looking for is a generic system-wide solution where we can
> > set up the host to use huge pages to back all (even non-zc) networking queues.
> > Not necessary needed, but might be an option to try.
> 
> Probably like what Jakub was once suggesting with the initial memory
> provider patch, got it.
> 
> > > > We need some way to express these things from the UAPI point of view.
> > > 
> > > Can you elaborate?
> > > 
> > > > Flipping the rx-buf-len value seems too fragile - there needs to be
> > > > something to request 32K chunks only for devmem case, not for the (default)
> > > > CPU memory. And the queues should go back to default 4K pages when the dmabuf
> > > > is detached from the queue.
> > > 
> > > That's what the per-queue config is solving. It's not default, zcrx
> > > configures it only for the specific queue it allocated, and the value
> > > is cleared on restart in netdev_rx_queue_restart(), if not even too
> > > aggressively. Maybe I should just stash it into mp_params to make
> > > sure it's not cleared if a provider is still attached on a spurious
> > > restart.
> > 
> > If we assume that at some point niov can be backed up by chunks larger
> > than PAGE_SIZE, the assumed workflow for devemem is:
> > 1. change rx-buf-len to 32K
> >    - this is needed only for devmem, but not for CPU RAM, but we'll have
> >      to refill the queues from the main memory anyway
> 
> Urgh, that's another reason why I prefer to just pass it through
> zcrx and not netlink. So maybe you can just pass the len to devmem
> on creation, and internally it sets up its queues with it.

But you still need to solve MAX_PAGE_ORDER/PAGE_ALLOC_COSTLY_ORDER I
think? We don't want the drivers to do PAGE_ALLOC_COSTLY_ORDER costly
allocation presumably?

  reply	other threads:[~2025-07-28 22:06 UTC|newest]

Thread overview: 66+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-07-28 11:04 [RFC v1 00/22] Large rx buffer support for zcrx Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 01/22] docs: ethtool: document that rx_buf_len must control payload lengths Pavel Begunkov
2025-07-28 18:11   ` Mina Almasry
2025-07-28 21:36   ` Mina Almasry
2025-08-01 23:13     ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 02/22] net: ethtool: report max value for rx-buf-len Pavel Begunkov
2025-07-29  5:00   ` Subbaraya Sundeep
2025-07-28 11:04 ` [RFC v1 03/22] net: use zero value to restore rx_buf_len to default Pavel Begunkov
2025-07-29  5:03   ` Subbaraya Sundeep
2025-07-28 11:04 ` [RFC v1 04/22] net: clarify the meaning of netdev_config members Pavel Begunkov
2025-07-28 21:44   ` Mina Almasry
2025-08-01 23:14     ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 05/22] net: add rx_buf_len to netdev config Pavel Begunkov
2025-07-28 21:50   ` Mina Almasry
2025-08-01 23:18     ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 06/22] eth: bnxt: read the page size from the adapter struct Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 07/22] eth: bnxt: set page pool page order based on rx_page_size Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 08/22] eth: bnxt: support setting size of agg buffers via ethtool Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 09/22] net: move netdev_config manipulation to dedicated helpers Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 10/22] net: reduce indent of struct netdev_queue_mgmt_ops members Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 11/22] net: allocate per-queue config structs and pass them thru the queue API Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 12/22] net: pass extack to netdev_rx_queue_restart() Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 13/22] net: add queue config validation callback Pavel Begunkov
2025-07-28 22:26   ` Mina Almasry
2025-07-28 11:04 ` [RFC v1 14/22] eth: bnxt: always set the queue mgmt ops Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 15/22] eth: bnxt: store the rx buf size per queue Pavel Begunkov
2025-07-28 22:33   ` Mina Almasry
2025-08-01 23:20     ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 16/22] eth: bnxt: adjust the fill level of agg queues with larger buffers Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 17/22] netdev: add support for setting rx-buf-len per queue Pavel Begunkov
2025-07-28 23:10   ` Mina Almasry
2025-08-01 23:37     ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 18/22] net: wipe the setting of deactived queues Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 19/22] eth: bnxt: use queue op config validate Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 20/22] eth: bnxt: support per queue configuration of rx-buf-len Pavel Begunkov
2025-07-28 11:04 ` [RFC v1 21/22] net: parametrise mp open with a queue config Pavel Begunkov
2025-08-02  0:10   ` Jakub Kicinski
2025-08-04 12:50     ` Pavel Begunkov
2025-08-05 22:43       ` Jakub Kicinski
2025-08-06  0:05       ` Jakub Kicinski
2025-08-06 16:48       ` Mina Almasry
2025-08-06 18:11         ` Jakub Kicinski
2025-08-06 18:30           ` Mina Almasry
2025-08-06 22:05             ` Jakub Kicinski
2025-07-28 11:04 ` [RFC v1 22/22] io_uring/zcrx: implement large rx buffer support Pavel Begunkov
2025-07-28 17:13 ` [RFC v1 00/22] Large rx buffer support for zcrx Stanislav Fomichev
2025-07-28 18:18   ` Pavel Begunkov
2025-07-28 20:21     ` Stanislav Fomichev
2025-07-28 21:28       ` Pavel Begunkov
2025-07-28 22:06         ` Stanislav Fomichev [this message]
2025-07-28 22:44           ` Pavel Begunkov
2025-07-29 16:33             ` Stanislav Fomichev
2025-07-30 14:16               ` Pavel Begunkov
2025-07-30 15:50                 ` Stanislav Fomichev
2025-07-31 19:34                   ` Mina Almasry
2025-07-31 19:57                     ` Pavel Begunkov
2025-07-31 20:05                       ` Mina Almasry
2025-08-01  9:48                         ` Pavel Begunkov
2025-08-01  9:58                     ` Pavel Begunkov
2025-07-28 23:22           ` Mina Almasry
2025-07-29 16:41             ` Stanislav Fomichev
2025-07-29 17:01               ` Mina Almasry
2025-07-28 18:54 ` Mina Almasry
2025-07-28 19:42   ` Pavel Begunkov
2025-07-28 20:23     ` Mina Almasry
2025-07-28 20:57       ` Pavel Begunkov

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=aIf0bXkt4bvA-0lC@mini-arch \
    --to=stfomichev@gmail.com \
    --cc=almasrymina@google.com \
    --cc=andrew+netdev@lunn.ch \
    --cc=ap420073@gmail.com \
    --cc=asml.silence@gmail.com \
    --cc=davem@davemloft.net \
    --cc=dtatulea@nvidia.com \
    --cc=dw@davidwei.uk \
    --cc=edumazet@google.com \
    --cc=horms@kernel.org \
    --cc=io-uring@vger.kernel.org \
    --cc=kuba@kernel.org \
    --cc=michael.chan@broadcom.com \
    --cc=netdev@vger.kernel.org \
    --cc=pabeni@redhat.com \
    --cc=sdf@fomichev.me \
    --cc=willemb@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox