public inbox for io-uring@vger.kernel.org
From: Joanne Koong <joannelkoong@gmail.com>
To: Pavel Begunkov <asml.silence@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>,
	axboe@kernel.dk, io-uring@vger.kernel.org,
	 csander@purestorage.com, krisman@suse.de, bernd@bsbernd.com,
	 linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v1 03/11] io_uring/kbuf: add support for kernel-managed buffer rings
Date: Tue, 24 Feb 2026 14:19:56 -0800	[thread overview]
Message-ID: <CAJnrk1a1FAARebZ0Aqw18zxtOy8WTMb2UfcAK6jQaigXiZbTfQ@mail.gmail.com> (raw)
In-Reply-To: <94ae832e-209a-4427-925c-d4e2f8217f5a@gmail.com>

On Mon, Feb 23, 2026 at 12:00 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
>
> On 2/21/26 02:14, Joanne Koong wrote:
> > On Fri, Feb 20, 2026 at 4:53 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
> ...
> >> So I'm asking whether you expect that a server or other user space
> >> program should be able to issue a READ_OP_RECV, READ_OP_READ or any
> >> other similar request, which would consume buffers/entries from the
> >> km ring without any fuse kernel code involved? Do you have some
> >> use case for that in mind?
> >
> > Thanks for clarifying your question. Yes, this would be a useful
> > optimization in the future for fuse servers with certain workload
> > characteristics (eg network-backed servers with high concurrency and
> > unpredictable latencies). I don't think the concept of kmbufrings is
> > exclusively fuse-specific though (for example, Christoph's use case
> > being a recent instance);
>
> Sorry, I don't see relevance b/w km rings and what Christoph wants.
> I explained why in some sub-thread, but maybe someone can tell
> what I'm missing.
>
> > I think other subsystems/users that'll use
> > kmbuf rings would also generically find it useful to have the option
> > of READ_OP_RECV/READ_OP_READ operating directly on the ring.
>
> Yep, potentially it could be; it's just that the patchset doesn't
> plumb it into other requests and only uses it within fuse. It's cases like

This patchset just represents the most basic foundation. The
optimization patches (e.g. incremental buffer consumption, plumbing it
through to other io-uring requests, etc.) were intended as follow-up
patchsets on top of this.

> that always make me wonder; here, it was why what is basically an
> internal kernel fuse API is exposed as an io_uring uapi. Maybe there

It's not really an internal kernel fuse API. There's nothing
fuse-specific about it - the infrastructure that's added is for a
generic buffer ring.

The memory that backs the buffers for the buf ring needs to be
io-uring specific, and io-uring already has all the infrastructure for
buffer rings. So I don't fully understand why it's better in this case
to have the fuse kernel code re-implement all the buffer ring logic
and go through these layers of indirection to use registered buffers,
instead of just leveraging what's already in io-uring.

> was a discussion about it I missed?
>
> >> So you already can do all that using the mmap()'ed region user
> >> pointer, and you just want it to be more efficient, right?
> >> For that let's just reuse registered buffers, we don't need a
> >> new mechanism that needs to be propagated to all request types.
> >> And registered buffer are already optimised for I/O in a bunch
> >> of ways. And as a bonus, it'll be similar to the zero-copy
> >> internally registered buffers if you still plan to add them.
> >>
> >> The simplest way to do that is to create a registered buffer out
> >> of the mmap'ed region pointer. Pseudo code:
> >>
> >> // mmap'ed if it's kernel allocated.
> >> {region_ptr, region_size} = create_region();
> >>
> >> struct iovec iov;
> >> iov.iov_base = region_ptr;
> >> iov.iov_len = region_size;
> >> io_uring_register_buffers(ring, &iov, 1);
> >>
> >> // later instead of this:
> >> ptr = region_ptr + off;
> >> io_uring_prep_read(sqe, fd, ptr, ...);
> >>
> >> // you use registered buffers as usual:
> >> io_uring_prep_read_fixed(sqe, fd, off, regbuf_idx, ...);
> >>
> >
> > I feel like this design makes the interface more convoluted and now
> > muddies different concepts together by adding new complexity /
> > relationships between them whereas they were otherwise cleanly
> > isolated. Maybe I'm just not seeing/understanding the overarching
> > vision for why conceptually it makes sense for them to be tied
> > together besides as a mechanism to tell io-uring requests where to
> > copy from by reusing what exists for fixed buffer ids. There's more
> > complexity now on the kernel side (eg having to detect if the buffer
> > passed in is kernel-allocated to know whether to pin the pages /
> > charge it against the user's RLIMIT_MEMLOCK limit) but I'm not
> > understanding what we gain from it.
>
> That would avoid doing a large revamp of the uapi and plumbing it
> into each and every request type when there is already a uapi that does
> what you want, does it well, and has lots of things figured out.
> Keeping the I/O path sane is important; io_uring already has 3
> different ways of passing buffers, let's not add a 4th one
> unless it achieves something meaningful.
>
> > I got the sense from your previous
> > comments that memory regions are the de facto way to go and should be
>
> Sorry, maybe I wasn't clear. With what I see you're trying to do,
> i.e. copying client's data into user space (server), I think
> registered buffers would be a better abstraction. However, I just
> went with your design on top of regions, since it's not the first
> iteration of the series and I wasn't following previous ones, and
> IIRC you were already using registered buffers in previous revisions
> but moved away from that for some reason. IOW, I was taking your main I/O
> path and trying to make the setup path a bit more flexible and
> reusable.
>
> > decoupled from other structures, so if that's the case, why doesn't it
> > make sense for io-uring to add native support for using memory regions
> > for io-uring requests? I feel like from the userspace side it makes
> > things more confusing with this extra layer of indirection that now
> > has to go through a fixed buffer.
>
> There is a high bar for adding a new interface for passing buffers
> that needs to be propagated to a good number of request handlers,
> and there is already one that gives you all you need to write
> efficient user space.
>
> >> IIRC the registration would fail because it doesn't allow file
> >> backed pages, but it should be fine if we know it's io_uring
> >> region memory, so that would need to be patched.
> >>
> >> There might be a bunch of other ways you can do that like
> >> create a kernel allocated registered buffer like what Christoph
> >> wants, and then register it as a region. Or allow creating
> >> registered buffers out of a region. etc.
> >>
> >> I wanted to unify registered buffers and regions internally
> >> at some point, but then drifted away from active io_uring core
> >> infrastructure development, so I guess that could've been useful.
> >>
> >>> Right now there's only a uapi to register a memory region and none to
> >>> unregister one. Is it guaranteed that io-uring will never add
> >>> something in the future that will let userspace unregister the memory
> >>> region or at least unregister it while it's being used (eg if we add
> >>> future refcounting to it to track active uses of it)?
> >>
> >> Let's talk about it when it's needed or something changes, but if
> >> you do registered buffers instead as per above, they'll be holding
> >> page references and/or have to pin the region in some other way.
> >
> > I don't think we can guarantee that the caller will register the
> > memory region as a fixed buffer (eg if it doesn't need/want to use the
> > buffer for normal io-uring requests). On the kernel side, the internal
>
> It's up to the user (i.e. fuse server) to either use OP_READ/etc. using
> user addresses that you have in your design from mmap()ing regions, or
> registering it and using OP_READ_FIXED.

Yes, but I don't think this solves the concern of userspace being able
to unregister the memory region at any time (e.g. while no io-uring
requests are in flight) while the kernel still points to those
addresses for the bufring's backing buffers. Since there's no callback
triggered in the subsystem when a memory region is unregistered, there
would need to be extra per-I/O overhead to ensure the memory region is
still valid. Though since there's currently no uapi for unregistering
a memory region, this isn't a concern unless one is planned to be
added in the future.

Thanks,
Joanne

>
> > buffer entry uses the kaddr of the registered memory region buffer for
> > any memcpys. If it's not guaranteed that registered memory regions
> > persist for the lifetime of the ring, there'll have to be extra
> > overhead for every I/O (eg grab the io-uring lock, checking if the mem
> > region is still registered, grab a refcount to that mem region, unlock
> > the ring, do the memcpy to the kaddr, then grab the io-uring lock
> > again, decrement the refcount, and unlock). Or I guess we could add
> > pinning to a registered memory region.
>
>
>
> --
> Pavel Begunkov
>

Thread overview: 53+ messages
2026-02-10  0:28 [PATCH v1 00/11] io_uring: add kernel-managed buffer rings Joanne Koong
2026-02-10  0:28 ` [PATCH v1 01/11] io_uring/kbuf: refactor io_register_pbuf_ring() logic into generic helpers Joanne Koong
2026-02-10  0:28 ` [PATCH v1 02/11] io_uring/kbuf: rename io_unregister_pbuf_ring() to io_unregister_buf_ring() Joanne Koong
2026-02-10  0:28 ` [PATCH v1 03/11] io_uring/kbuf: add support for kernel-managed buffer rings Joanne Koong
2026-02-10 16:34   ` Pavel Begunkov
2026-02-10 19:39     ` Joanne Koong
2026-02-11 12:01       ` Pavel Begunkov
2026-02-11 22:06         ` Joanne Koong
2026-02-12 10:07           ` Christoph Hellwig
2026-02-12 10:52             ` Pavel Begunkov
2026-02-12 17:29               ` Joanne Koong
2026-02-13  7:27                 ` Christoph Hellwig
2026-02-13 15:31                   ` Pavel Begunkov
2026-02-13 15:48                     ` Pavel Begunkov
2026-02-13 19:09                     ` Joanne Koong
2026-02-13 19:30                       ` Bernd Schubert
2026-02-13 19:38                         ` Joanne Koong
2026-02-17  5:36                       ` Christoph Hellwig
2026-02-13 19:14                   ` Joanne Koong
2026-02-17  5:38                     ` Christoph Hellwig
2026-02-18  9:51                       ` Pavel Begunkov
2026-02-13 16:27                 ` Pavel Begunkov
2026-02-13  7:21               ` Christoph Hellwig
2026-02-13 13:18                 ` Pavel Begunkov
2026-02-13 15:26           ` Pavel Begunkov
2026-02-11 15:45     ` Christoph Hellwig
2026-02-12 10:44       ` Pavel Begunkov
2026-02-13  7:18         ` Christoph Hellwig
2026-02-13 12:41           ` Pavel Begunkov
2026-02-13 22:04             ` Joanne Koong
2026-02-18 12:36               ` Pavel Begunkov
2026-02-18 21:43                 ` Joanne Koong
2026-02-20 12:53                   ` Pavel Begunkov
2026-02-21  2:14                     ` Joanne Koong
2026-02-23 20:00                       ` Pavel Begunkov
2026-02-24 22:19                         ` Joanne Koong [this message]
2026-02-10  0:28 ` [PATCH v1 04/11] io_uring/kbuf: add mmap " Joanne Koong
2026-02-10  1:02   ` Jens Axboe
2026-02-10  0:28 ` [PATCH v1 05/11] io_uring/kbuf: support kernel-managed buffer rings in buffer selection Joanne Koong
2026-02-10  0:28 ` [PATCH v1 06/11] io_uring/kbuf: add buffer ring pinning/unpinning Joanne Koong
2026-02-10  1:07   ` Jens Axboe
2026-02-10 17:57     ` Caleb Sander Mateos
2026-02-10 18:00       ` Jens Axboe
2026-02-10  0:28 ` [PATCH v1 07/11] io_uring/kbuf: add recycling for kernel managed buffer rings Joanne Koong
2026-02-10  0:52   ` Jens Axboe
2026-02-10  0:28 ` [PATCH v1 08/11] io_uring/kbuf: add io_uring_is_kmbuf_ring() Joanne Koong
2026-02-10  0:28 ` [PATCH v1 09/11] io_uring/kbuf: export io_ring_buffer_select() Joanne Koong
2026-02-10  0:28 ` [PATCH v1 10/11] io_uring/kbuf: return buffer id in buffer selection Joanne Koong
2026-02-10  0:53   ` Jens Axboe
2026-02-10 22:36     ` Joanne Koong
2026-02-10  0:28 ` [PATCH v1 11/11] io_uring/cmd: set selected buffer index in __io_uring_cmd_done() Joanne Koong
2026-02-10  0:55 ` [PATCH v1 00/11] io_uring: add kernel-managed buffer rings Jens Axboe
2026-02-10 22:45   ` Joanne Koong
