From: Keith Busch <[email protected]>
To: Pavel Begunkov <[email protected]>
Cc: Keith Busch <[email protected]>,
[email protected], [email protected],
[email protected], [email protected]
Subject: Re: [PATCH 6/6] io_uring: cache nodes and mapped buffers
Date: Fri, 7 Feb 2025 08:33:40 -0700
Message-ID: <Z6Yn1GOPlMfpZqsf@kbusch-mbp>
In-Reply-To: <[email protected]>
On Fri, Feb 07, 2025 at 12:41:17PM +0000, Pavel Begunkov wrote:
> On 2/3/25 15:45, Keith Busch wrote:
> > From: Keith Busch <[email protected]>
> >
> > Frequent alloc/free cycles on these are pretty costly. Use an io cache to
> > more efficiently reuse these buffers.
> >
> > Signed-off-by: Keith Busch <[email protected]>
> > ---
> >  include/linux/io_uring_types.h |  16 ++---
> >  io_uring/filetable.c           |   2 +-
> >  io_uring/rsrc.c                | 108 ++++++++++++++++++++++++---------
> >  io_uring/rsrc.h                |   2 +-
> >  4 files changed, 92 insertions(+), 36 deletions(-)
> >
> > diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> > index aa661ebfd6568..c0e0c1f92e5b1 100644
> > --- a/include/linux/io_uring_types.h
> > +++ b/include/linux/io_uring_types.h
> > @@ -67,8 +67,17 @@ struct io_file_table {
> >  	unsigned int alloc_hint;
> >  };
> > +struct io_alloc_cache {
> > +	void **entries;
> > +	unsigned int nr_cached;
> > +	unsigned int max_cached;
> > +	size_t elem_size;
> > +};
> > +
> >  struct io_buf_table {
> >  	struct io_rsrc_data data;
> > +	struct io_alloc_cache node_cache;
> > +	struct io_alloc_cache imu_cache;
>
> We can avoid all churn if you kill patch 5/6 and put the
> caches directly into struct io_ring_ctx. It's a bit better for
> future cache improvements and we can even reuse the node cache
> for files.
>
> ...
> > diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> > index 864c2eabf8efd..5434b0d992d62 100644
> > --- a/io_uring/rsrc.c
> > +++ b/io_uring/rsrc.c
> > @@ -117,23 +117,39 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
> > unpin_user_page(imu->bvec[i].bv_page);
> > if (imu->acct_pages)
> > io_unaccount_mem(ctx, imu->acct_pages);
> > - kvfree(imu);
> > + if (struct_size(imu, bvec, imu->nr_bvecs) >
> > + ctx->buf_table.imu_cache.elem_size ||
>
> It could be quite a large allocation, let's not cache it if
> it hasn't come from the cache for now. We can always improve
> on top.
Eh? This already skips inserting into the cache if it wasn't allocated
out of the cache.
I picked an arbitrary size, 512 bytes, as the threshold for caching. If
you need more bvecs than fit in that, it falls back to kvmalloc/kvfree.
The allocation overhead is pretty insignificant when you're transferring
large payloads like that, and 14 vectors was chosen as the tipping point
because that's what fits within that nice round size.
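To spell out the free side I'm describing (just a sketch, not the patch
verbatim; IO_CACHED_IMU_SIZE and the function name here are stand-ins
for the 512-byte cut-off, i.e. the cached element size):

#define IO_CACHED_IMU_SIZE	512

static void io_imu_free(struct io_ring_ctx *ctx, struct io_mapped_ubuf *imu)
{
	/* only imus small enough to have been sized for the cache go back */
	if (struct_size(imu, bvec, imu->nr_bvecs) <= IO_CACHED_IMU_SIZE &&
	    io_alloc_cache_put(&ctx->buf_table.imu_cache, imu))
		return;
	kvfree(imu);
}

The allocation side makes the mirror-image comparison to decide between
the cache and kvmalloc(), so anything that can come back into the cache
was sized for it in the first place.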
> And can we invert how it's calculated? See below. You'll have
> fewer calculations in the fast path, and I don't really like
> users looking at ->elem_size when it's not necessary.
>
>
> #define IO_CACHED_BVEC_SEGS N
Yah, that's fine.
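Roughly like this on the alloc side, I take it (sketch only; the value
14 and the function name are placeholders):

#define IO_CACHED_BVEC_SEGS	14

static struct io_mapped_ubuf *io_imu_alloc(struct io_ring_ctx *ctx,
					   unsigned int nr_bvecs)
{
	struct io_mapped_ubuf *imu;

	/* compare segment counts directly, no need to read ->elem_size */
	if (nr_bvecs <= IO_CACHED_BVEC_SEGS) {
		imu = io_alloc_cache_get(&ctx->buf_table.imu_cache);
		if (imu)
			return imu;
		/* keep cache-eligible imus at the cached element size */
		nr_bvecs = IO_CACHED_BVEC_SEGS;
	}
	return kvmalloc(struct_size_t(struct io_mapped_ubuf, bvec, nr_bvecs),
			GFP_KERNEL);
}

The free path then mirrors it with the same nr_bvecs comparison instead
of the struct_size() check, and the cache element gets sized for
IO_CACHED_BVEC_SEGS bvecs at init time.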
Thread overview: 27+ messages
2025-02-03 15:45 [PATCH 0/6] ublk zero-copy support Keith Busch
2025-02-03 15:45 ` [PATCH 1/6] block: const blk_rq_nr_phys_segments request Keith Busch
2025-02-03 15:45 ` [PATCH 2/6] io_uring: use node for import Keith Busch
2025-02-03 15:45 ` [PATCH 3/6] io_uring: add support for kernel registered bvecs Keith Busch
2025-02-07 14:08 ` Pavel Begunkov
2025-02-07 15:17 ` Keith Busch
2025-02-08 15:49 ` Pavel Begunkov
2025-02-10 14:12 ` Ming Lei
2025-02-10 15:05 ` Keith Busch
2025-02-03 15:45 ` [PATCH 4/6] ublk: zc register/unregister bvec Keith Busch
2025-02-08 5:50 ` Ming Lei
2025-02-03 15:45 ` [PATCH 5/6] io_uring: add abstraction for buf_table rsrc data Keith Busch
2025-02-03 15:45 ` [PATCH 6/6] io_uring: cache nodes and mapped buffers Keith Busch
2025-02-07 12:41 ` Pavel Begunkov
2025-02-07 15:33 ` Keith Busch [this message]
2025-02-08 14:00 ` Pavel Begunkov
2025-02-07 15:59 ` Keith Busch
2025-02-08 14:24 ` Pavel Begunkov
2025-02-06 15:28 ` [PATCH 0/6] ublk zero-copy support Keith Busch
2025-02-07 3:51 ` Ming Lei
2025-02-07 14:06 ` Keith Busch
2025-02-08 5:44 ` Ming Lei
2025-02-08 14:16 ` Pavel Begunkov
2025-02-08 20:13 ` Keith Busch
2025-02-08 21:40 ` Pavel Begunkov
2025-02-08 7:52 ` Ming Lei
2025-02-08 0:51 ` Bernd Schubert