From: Willem de Bruijn <[email protected]>
To: Pavel Begunkov <[email protected]>,
	 Willem de Bruijn <[email protected]>,
	 David Wei <[email protected]>,
	 [email protected],  [email protected]
Cc: Jens Axboe <[email protected]>,  Jakub Kicinski <[email protected]>,
	 Paolo Abeni <[email protected]>,
	 "David S. Miller" <[email protected]>,
	 Eric Dumazet <[email protected]>,
	 Jesper Dangaard Brouer <[email protected]>,
	 David Ahern <[email protected]>,
	 Mina Almasry <[email protected]>,
	 [email protected],  [email protected]
Subject: Re: [RFC PATCH v3 07/20] io_uring: add interface queue
Date: Sun, 31 Dec 2023 17:25:13 -0500	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

Pavel Begunkov wrote:
> On 12/21/23 17:57, Willem de Bruijn wrote:
> > David Wei wrote:
> >> From: David Wei <[email protected]>
> >>
> >> This patch introduces a new object in io_uring called an interface queue
> >> (ifq) which contains:
> >>
> >> * A pool region allocated by userspace and registered w/ io_uring where
> >>    Rx data is written to.
> >> * A net device and one specific Rx queue in it that will be configured
> >>    for ZC Rx.
> >> * A pair of shared ringbuffers w/ userspace, dubbed registered buf
> >>    (rbuf) rings. Each entry contains a pool region id and an offset + len
> >>    within that region. The kernel writes entries into the completion ring
> >>    to tell userspace where RX data is relative to the start of a region.
> >>    Userspace writes entries into the refill ring to tell the kernel when
> >>    it is done with the data.
> >>
> >> For now, each io_uring instance has a single ifq, and each ifq has a
> >> single pool region associated with one Rx queue.
> >>
> >> Add a new opcode to io_uring_register that sets up an ifq. Size and
> >> offsets of shared ringbuffers are returned to userspace for it to mmap.
> >> The implementation will be added in a later patch.
> >>
> >> Signed-off-by: David Wei <[email protected]>
> > 
> > This is quite similar to AF_XDP, of course. Is it at all possible to
> > reuse all or some of that? If not, why not?
> 
> Let me rather ask what you have in mind for reuse? I'm not too
> intimately familiar with XDP, but I don't see what we can take.

At a high level all points in this commit message:

	* A pool region allocated by userspace and registered w/ io_uring where
	  Rx data is written to.
	* A net device and one specific Rx queue in it that will be configured
	  for ZC Rx.
	* A pair of shared ringbuffers w/ userspace, dubbed registered buf
	  (rbuf) rings. Each entry contains a pool region id and an offset + len
	  within that region. The kernel writes entries into the completion ring
	  to tell userspace where RX data is relative to the start of a region.
	  Userspace writes entries into the refill ring to tell the kernel when
	  it is done with the data.

	For now, each io_uring instance has a single ifq, and each ifq has a
	single pool region associated with one Rx queue.

AF_XDP allows shared pools, but otherwise this sounds like the same
feature set.
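
For concreteness, I picture the rbuf ring entry described above looking
roughly like the sketch below. This is only my reading of the commit
message, not the struct from the patch, and the field names are made up:

	/* Hypothetical sketch of a registered-buf (rbuf) ring entry: a pool
	 * region id plus an offset and length within that region, as the
	 * commit message describes. Needs <linux/types.h>. Names are mine.
	 */
	struct io_uring_rbuf_entry {
		__u64	off;		/* offset from the start of the region */
		__u32	len;		/* length of the Rx data */
		__u32	region_id;	/* which registered pool region */
	};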

> Queue formats will be different

I'd like to make sure that this is for a reason, not just divergence
because we did not consider reusing existing user/kernel queue formats.

> there won't be a separate CQ
> for zc; all completions will land in the main io_uring CQ in the
> next revisions.

Okay, that's different.

> io_uring also supports multiple sockets per zc ifq and other quirks
> reflected in the uapi.
> 
> Receive has to work with generic sockets and skbs if we want
> to be able to reuse the protocol stack. Queue allocation and
> mapping is similar, but that is one thing that should be bound to
> the API (i.e. io_uring vs AF_XDP) together with locking and
> synchronisation. Wakeups are different as well.
> 
> And IIUC AF_XDP still operates on raw packets quite early
> in the stack, while io_uring completes from a syscall; that
> would definitely make the synchronisation diverge a lot.

The difference is in frame payload, not in the queue structure:
a fixed frame buffer pool plus sets of post + completion queues that
store a relative offset and length into that pool.
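
For reference, the AF_XDP Rx/Tx descriptor in include/uapi/linux/if_xdp.h
is exactly that, an offset into the umem plus a length (quoting from
memory, so please double check against the header):

	/* AF_XDP Rx/Tx ring entry: addr is an offset into the umem.
	 * The fill and completion rings carry bare __u64 addresses.
	 */
	struct xdp_desc {
		__u64 addr;
		__u32 len;
		__u32 options;
	};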

To be extra clear, I don't intend to ask for the impossible: if there
are reasons the structures need to be different, so be it, and I have
no intention of complicating development. Anything that is not ABI can
be refactored later, too, if overlap becomes clear. But for the ABI it
is worth asking now whether these queue formats really are different
for a concrete reason.

> I don't see many opportunities here.
> 
> > As a side effect, unification would also show a path of moving AF_XDP
> > from its custom allocator to the page_pool infra.
> 
> I assume it's about xsk_buff_alloc() and the likes of it. I'm lacking
> knowledge here; it's much better to ask the XDP folks what they think
> about moving to pp, whether it's needed, etc. And if so, it'd likely
> be easier to base it on the raw page pool provider API than the
> io_uring provider implementation, probably having some common helpers
> if things come to that.

Fair enough, after giving it some more thought and reviewing a recent
use case of the AF_XDP allocation APIs, including xsk_buff_alloc.

> 
> > Related: what is the story wrt the process crashing while user memory
> > is posted to the NIC or present in the kernel stack.
> 
> Buffers are pinned by io_uring. If the process crashes closing the
> ring, io_uring will release the pp provider and wait for all buffers
> to come back before unpinning pages and freeing the rest. I.e.
> it's not going to unpin before the pp's ->destroy is called.

Great. That's how all page pools work, iirc. There is some potential
concern with unbounded delay until all buffers are recycled, but that
is not unique to the io_uring provider.
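
To spell out that concern: page pool shutdown already just retries and
warns while pages are in flight, roughly like below (paraphrasing
net/core/page_pool.c from memory, not quoting it verbatim):

	/* Rough paraphrase of the existing shutdown path: the pool is not
	 * freed while pages are in flight; the release work re-arms itself
	 * and warns periodically until everything has been returned.
	 */
	static void page_pool_release_retry(struct work_struct *wq)
	{
		struct delayed_work *dwq = to_delayed_work(wq);
		struct page_pool *pool = container_of(dwq, typeof(*pool),
						      release_dw);
		int inflight;

		inflight = page_pool_release(pool);
		if (!inflight)
			return;		/* all pages came back, pool was freed */

		/* still in flight: warn and retry later */
		pr_warn("%s() stalled pool shutdown %d inflight\n",
			__func__, inflight);
		schedule_delayed_work(&pool->release_dw, DEFER_TIME);
	}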

> > SO_DEVMEM already demonstrates zerocopy into user buffers using usdma.
> > To a certain extent, that and asynchronous I/O with io_uring are two
> > independent goals. SO_DEVMEM imposes limitations on the stack because
> > it might hold opaque device mem. That is too strong for this case.
> 
> Basing it on ppiov simplifies refcounting a lot; with that we
> don't need any dirty hacks nor any extra changes in the stack,
> and I think it's aligned with the net stack goals.

Great to hear.

> What I think
> we can do on top is allow ppiovs to optionally have pages
> (via a callback ->get_page), and use it in those rare cases
> when someone has to peek at the payload.
> 
> > But for this io_uring provider, is there anything io_uring specific
> > about it beyond being user memory? If not, maybe just call it a umem
> > provider, and anticipate it being usable for AF_XDP in the future too?
> 
> Queue formats with a set of features, synchronisation: mostly
> answered above. But I also think it should be as easy to just have
> a separate provider and reuse some code later if there is anything
> to reuse.
> 
> > Besides delivery up to the intended socket, packets may also end up
> > in other code paths, such as packet sockets or forwarding. All of
> > this is simpler with userspace-backed buffers than with device mem.
> > But it would be good to call out explicitly how this is handled.
> > MSG_ZEROCOPY makes a deep packet copy in unexpected code paths, for
> > instance, to avoid indefinite latency to buffer reclaim.
> 
> Yeah, that's concerning. I intend to add something for the sockets
> we use, but there is nothing for truly unexpected paths. How does
> devmem handle it?

MSG_ZEROCOPY handles this by copying to regular kernel memory, using
skb_orphan_frags_rx, whenever a tx packet could get looped onto an rx
queue and thus held indefinitely. This is not allowed for MSG_ZEROCOPY
as it causes a potentially unbounded latency before data can be reused
by the application. It is called from __netif_receive_skb_core,
dev_queue_xmit_nit and a few others.
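
For reference, skb_orphan_frags_rx is a thin wrapper that forces a copy
of any zerocopy (user-backed) frags, roughly:

	/* include/linux/skbuff.h, roughly: if the skb carries zerocopy
	 * user frags, copy them into kernel memory so the user pages can
	 * be released; otherwise it is a no-op.
	 */
	static inline int skb_orphan_frags_rx(struct sk_buff *skb, gfp_t gfp_mask)
	{
		if (likely(!skb_zcopy(skb)))
			return 0;
		return skb_copy_ubufs(skb, gfp_mask);
	}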

SO_DEVMEM does allow data to enter packet sockets, but it instruments
each point that might reference the payload so that it does not. For
instance:

	@@ -2156,7 +2156,7 @@  static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
			}
		}
	 
	-	snaplen = skb->len;
	+	snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb);

https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/

Either approach could be extended to cover io_uring packets.

Multicast is perhaps another interesting receive case. I have not
given that much thought.

> It's probably not a huge worry for now; I expect killing the
> task/sockets should resolve dependencies, but it would be great to
> find such scenarios. I'd appreciate any pointers if you have some in
> mind.
> 
> -- 
> Pavel Begunkov


