From: Keith Busch <[email protected]>
To: Pavel Begunkov <[email protected]>
Cc: Keith Busch <[email protected]>,
[email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
Subject: Re: [PATCHv4 5/5] io_uring: cache nodes and mapped buffers
Date: Mon, 24 Feb 2025 14:04:58 -0700
Message-ID: <Z7ze-kzDuoP_XPBx@kbusch-mbp>
In-Reply-To: <[email protected]>
On Thu, Feb 20, 2025 at 04:06:21PM +0000, Pavel Begunkov wrote:
> On 2/20/25 15:24, Keith Busch wrote:
> > > > + node = io_cache_alloc(&ctx->buf_table.node_cache, GFP_KERNEL);
> > >
> > > That's why node allocators shouldn't be a part of the buffer table.
> >
> > Are you saying you want file nodes to also subscribe to the cache? The
>
> Yes, but it might be easier for you to focus on finalising the essential
> parts, and then we can improve later.
>
> > two tables can be resized independently of each other, so we don't
> > know how many elements the cache needs to hold.
>
> I wouldn't try to correlate table sizes with desired cache sizes;
> users can have quite different patterns, like allocating a huge table
> that is barely used. And what you care about is the speed of node
> changes, which at the extremes is limited by CPU performance rather
> than by the size of the table. You can also reallocate the cache as
> needed.
Having the cache's size and lifetime match the table it backs is as
simple as I can make this. At the end of the day this is still an
optimization, so it's not strictly necessary to take the last two
patches from this series to make zero-copy work if you don't want to
include them from the beginning.
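
To make the shape of that trade-off concrete, here is a minimal
userspace sketch of the per-table approach being discussed. It is
illustrative only, not the actual io_uring implementation; all
identifiers (struct buf_table, node_alloc, node_free, and so on) are
hypothetical stand-ins. The key property is that the free-node cache
is embedded in the table and drained when the table is torn down, so
the cache and the table share size and lifetime.

#include <stdlib.h>
#include <string.h>

struct node {
	struct node *next;	/* links free nodes in the cache */
	void *payload;		/* mapped buffer, file, etc. */
};

struct buf_table {
	struct node **entries;	/* the registered-resource slots */
	unsigned int nr_entries;
	struct node *free_list;	/* per-table node cache */
};

static int buf_table_init(struct buf_table *t, unsigned int nr)
{
	t->entries = calloc(nr, sizeof(*t->entries));
	if (!t->entries)
		return -1;
	t->nr_entries = nr;
	t->free_list = NULL;
	return 0;
}

/* Allocate a node, preferring the table-local cache over the heap. */
static struct node *node_alloc(struct buf_table *t)
{
	struct node *n = t->free_list;

	if (n) {
		t->free_list = n->next;
		memset(n, 0, sizeof(*n));
		return n;
	}
	return calloc(1, sizeof(*n));
}

/* Return a node to the table's cache instead of freeing it. */
static void node_free(struct buf_table *t, struct node *n)
{
	n->next = t->free_list;
	t->free_list = n;
}

/* Tearing down the table drains the cache: same lifetime as the table. */
static void buf_table_destroy(struct buf_table *t)
{
	struct node *n;

	while ((n = t->free_list)) {
		t->free_list = n->next;
		free(n);
	}
	free(t->entries);
	t->entries = NULL;
	t->nr_entries = 0;
}

Pavel's counter-proposal would instead size the cache independently of
the table and share it across resource types, at the cost of the
simple one-to-one ownership shown above.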
Thread overview: 28+ messages
2025-02-18 22:42 [PATCHv4 0/5] ublk zero-copy support Keith Busch
2025-02-18 22:42 ` [PATCHv4 1/5] io_uring: move fixed buffer import to issue path Keith Busch
2025-02-19 1:27 ` Caleb Sander Mateos
2025-02-19 4:23 ` Ming Lei
2025-02-19 16:48 ` Pavel Begunkov
2025-02-19 17:15 ` Pavel Begunkov
2025-02-20 1:25 ` Keith Busch
2025-02-20 10:12 ` Pavel Begunkov
2025-02-18 22:42 ` [PATCHv4 2/5] io_uring: add support for kernel registered bvecs Keith Busch
2025-02-19 1:54 ` Caleb Sander Mateos
2025-02-19 17:23 ` Pavel Begunkov
2025-02-20 10:31 ` Pavel Begunkov
2025-02-20 10:38 ` Pavel Begunkov
2025-02-18 22:42 ` [PATCHv4 3/5] ublk: zc register/unregister bvec Keith Busch
2025-02-19 2:36 ` Caleb Sander Mateos
2025-02-20 11:11 ` Pavel Begunkov
2025-02-24 21:02 ` Keith Busch
2025-02-18 22:42 ` [PATCHv4 4/5] io_uring: add abstraction for buf_table rsrc data Keith Busch
2025-02-19 3:04 ` Caleb Sander Mateos
2025-02-18 22:42 ` [PATCHv4 5/5] io_uring: cache nodes and mapped buffers Keith Busch
2025-02-19 4:22 ` Caleb Sander Mateos
2025-02-24 21:01 ` Keith Busch
2025-02-24 21:39 ` Caleb Sander Mateos
2025-02-20 11:08 ` Pavel Begunkov
2025-02-20 15:24 ` Keith Busch
2025-02-20 16:06 ` Pavel Begunkov
2025-02-24 21:04 ` Keith Busch [this message]
2025-02-25 13:06 ` Pavel Begunkov