From: Caleb Sander Mateos <csander@purestorage.com>
To: Joanne Koong <joannelkoong@gmail.com>
Cc: miklos@szeredi.hu, axboe@kernel.dk, bschubert@ddn.com,
asml.silence@gmail.com, io-uring@vger.kernel.org,
xiaobing.li@samsung.com, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v3 21/25] io_uring/rsrc: split io_buffer_register_request() logic
Date: Thu, 8 Jan 2026 13:04:01 -0800
Message-ID: <CADUfDZo56Bgv4PnKnE-nBbZ8WF1N-42RoBZ6DOXVRyqwksg2Xg@mail.gmail.com>
In-Reply-To: <20251223003522.3055912-22-joannelkoong@gmail.com>
On Mon, Dec 22, 2025 at 4:36 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> Split the main initialization logic in io_buffer_register_request() into
> a helper function.
>
> This is a preparatory patch for supporting kernel-populated buffers in
> fuse io-uring, which will be reusing this logic.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> io_uring/rsrc.c | 89 ++++++++++++++++++++++++++++++-------------------
> 1 file changed, 54 insertions(+), 35 deletions(-)
>
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index b25b418e5c11..5fe2695dafb6 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -936,67 +936,86 @@ int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
> return ret;
> }
>
> -int io_buffer_register_request(struct io_uring_cmd *cmd, struct request *rq,
> - void (*release)(void *), unsigned int index,
> - unsigned int issue_flags)
> +static struct io_mapped_ubuf *io_kernel_buffer_init(struct io_ring_ctx *ctx,
> + unsigned int nr_bvecs,
> + unsigned int total_bytes,
> + u8 dir,
> + void (*release)(void *),
> + void *priv,
> + unsigned int index)
> {
> - struct io_ring_ctx *ctx = cmd_to_io_kiocb(cmd)->ctx;
> struct io_rsrc_data *data = &ctx->buf_table;
> - struct req_iterator rq_iter;
> struct io_mapped_ubuf *imu;
> struct io_rsrc_node *node;
> - struct bio_vec bv;
> - unsigned int nr_bvecs = 0;
> - int ret = 0;
>
> - io_ring_submit_lock(ctx, issue_flags);
> - if (index >= data->nr) {
> - ret = -EINVAL;
> - goto unlock;
> - }
> + if (index >= data->nr)
> + return ERR_PTR(-EINVAL);
> index = array_index_nospec(index, data->nr);
>
> - if (data->nodes[index]) {
> - ret = -EBUSY;
> - goto unlock;
> - }
> + if (data->nodes[index])
> + return ERR_PTR(-EBUSY);
>
> node = io_rsrc_node_alloc(ctx, IORING_RSRC_BUFFER);
> - if (!node) {
> - ret = -ENOMEM;
> - goto unlock;
> - }
> + if (!node)
> + return ERR_PTR(-ENOMEM);
>
> - /*
> - * blk_rq_nr_phys_segments() may overestimate the number of bvecs
> - * but avoids needing to iterate over the bvecs
> - */
> - imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));
> + imu = io_alloc_imu(ctx, nr_bvecs);
> if (!imu) {
> kfree(node);
> - ret = -ENOMEM;
> - goto unlock;
> + return ERR_PTR(-ENOMEM);
> }
>
> imu->ubuf = 0;
> - imu->len = blk_rq_bytes(rq);
> + imu->len = total_bytes;
> imu->acct_pages = 0;
> imu->folio_shift = PAGE_SHIFT;
> + imu->nr_bvecs = nr_bvecs;
> refcount_set(&imu->refs, 1);
> imu->release = release;
> - imu->priv = rq;
> + imu->priv = priv;
> imu->is_kbuf = true;
> - imu->dir = 1 << rq_data_dir(rq);
> + imu->dir = 1 << dir;
>
> + node->buf = imu;
> + data->nodes[index] = node;
> +
> + return imu;
> +}
> +
> +int io_buffer_register_request(struct io_uring_cmd *cmd, struct request *rq,
> + void (*release)(void *), unsigned int index,
> + unsigned int issue_flags)
> +{
> + struct io_ring_ctx *ctx = cmd_to_io_kiocb(cmd)->ctx;
> + struct req_iterator rq_iter;
> + struct io_mapped_ubuf *imu;
> + struct bio_vec bv;
> + unsigned int nr_bvecs;
> + unsigned int total_bytes;
> +
> + /*
> + * blk_rq_nr_phys_segments() may overestimate the number of bvecs
> + * but avoids needing to iterate over the bvecs
> + */
> + nr_bvecs = blk_rq_nr_phys_segments(rq);
> + total_bytes = blk_rq_bytes(rq);
These initializations could be combined with the variable declarations.
> +
> + io_ring_submit_lock(ctx, issue_flags);
> +
> + imu = io_kernel_buffer_init(ctx, nr_bvecs, total_bytes, rq_data_dir(rq),
> + release, rq, index);
> + if (IS_ERR(imu)) {
> + io_ring_submit_unlock(ctx, issue_flags);
I would prefer to keep the existing goto unlock; pattern here. The goto
pattern is easier to extend in the future with additional resource
acquisitions, and keeping it would make the diff slightly smaller. For
new functions with a single early-return path I don't feel all that
strongly, but this seems like unnecessary refactoring of existing code.
Other than that,
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
> + return PTR_ERR(imu);
> + }
> +
> + nr_bvecs = 0;
> rq_for_each_bvec(bv, rq, rq_iter)
> imu->bvec[nr_bvecs++] = bv;
> imu->nr_bvecs = nr_bvecs;
>
> - node->buf = imu;
> - data->nodes[index] = node;
> -unlock:
> io_ring_submit_unlock(ctx, issue_flags);
> - return ret;
> + return 0;
> }
> EXPORT_SYMBOL_GPL(io_buffer_register_request);
>
> --
> 2.47.3
>