From: Pavel Begunkov <[email protected]>
To: Gabriel Krisman Bertazi <[email protected]>
Cc: [email protected], Jens Axboe <[email protected]>,
[email protected]
Subject: Re: [PATCH 10/11] io_uring/rsrc: cache struct io_rsrc_node
Date: Tue, 4 Apr 2023 14:21:41 +0100
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 4/1/23 01:04, Gabriel Krisman Bertazi wrote:
> Pavel Begunkov <[email protected]> writes:
>
>> On 3/31/23 15:09, Gabriel Krisman Bertazi wrote:
>>> Pavel Begunkov <[email protected]> writes:
>>>
>>>> Add an allocation cache for struct io_rsrc_node; it's always allocated
>>>> and put under ->uring_lock, so it doesn't need any extra synchronisation
>>>> around caches.
>>> Hi Pavel,
>>> I'm curious whether you considered using kmem_cache instead of the custom
>>> cache for this case. I'm wondering if it makes a visible difference in
>>> performance in your benchmark.
>>
>> I didn't try it, but kmem_cache vs kmalloc, IIRC, doesn't bring us
>> much. It definitely doesn't spare us from locking, and the overhead
>> wasn't satisfactory for requests before.
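For reference, the kmem_cache route being suggested would look roughly
like the sketch below. The cache name and the init hook are made up for
illustration, this is not code from the series; the point is that alloc
and free still go through slub's per-cpu fastpath rather than a
ring-private cache.

/* Sketch only: a dedicated slab cache for io_rsrc_node (names are
 * hypothetical). Allocation and free still go through slub's
 * per-cpu fastpath, with its barriers and cmpxchg.
 */
static struct kmem_cache *io_rsrc_node_cachep;

static int io_rsrc_node_cache_init(void)
{
        io_rsrc_node_cachep = kmem_cache_create("io_rsrc_node",
                                        sizeof(struct io_rsrc_node), 0,
                                        SLAB_HWCACHE_ALIGN, NULL);
        return io_rsrc_node_cachep ? 0 : -ENOMEM;
}

static struct io_rsrc_node *io_rsrc_node_alloc_slab(void)
{
        return kmem_cache_alloc(io_rsrc_node_cachep, GFP_KERNEL);
}

static void io_rsrc_node_free_slab(struct io_rsrc_node *node)
{
        kmem_cache_free(io_rsrc_node_cachep, node);
}
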
>
> There are no locks in the fast path of slub, as far as I know. It has a
> per-cpu cache that is refilled once empty, quite similar to the fast path
> of this cache. I imagine the performance hit in slub comes from the
> barriers and atomic operations?
Yeah, I mean all kinds of synchronisation. And I don't think
that's the main offender here; the test is single-threaded without
contention and the system was mostly idle.
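For comparison, the cache added in this patch boils down to something
like the sketch below. It's heavily simplified and not the actual
io_uring code (the ->cache_list member and IO_RSRC_CACHE_MAX are
hypothetical), but it shows the shape: every caller already holds
->uring_lock, so the fast path is a plain list pop/push with no atomics
or barriers at all.

/* Heavily simplified sketch, not the actual io_uring implementation.
 * All calls happen under ->uring_lock held by the caller.
 */
#define IO_RSRC_CACHE_MAX	64

struct io_rsrc_node_cache {
        struct list_head	free_list;	/* INIT_LIST_HEAD() at ring setup */
        unsigned int		nr_cached;
};

static struct io_rsrc_node *rsrc_node_alloc(struct io_rsrc_node_cache *cache)
{
        struct io_rsrc_node *node;

        if (!list_empty(&cache->free_list)) {
                /* fast path: pop a cached node, no allocator involved */
                node = list_first_entry(&cache->free_list,
                                        struct io_rsrc_node, cache_list);
                list_del(&node->cache_list);
                cache->nr_cached--;
                return node;
        }
        return kzalloc(sizeof(*node), GFP_KERNEL);
}

static void rsrc_node_put_cached(struct io_rsrc_node_cache *cache,
                                 struct io_rsrc_node *node)
{
        if (cache->nr_cached < IO_RSRC_CACHE_MAX) {
                /* fast path: push back onto the per-ring free list */
                list_add(&node->cache_list, &cache->free_list);
                cache->nr_cached++;
        } else {
                kfree(node);
        }
}
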
> kmem_cache works fine for most hot paths of the kernel. I think this
It doesn't for io_uring. There are caches for the net side and now
in the block layer as well. I wouldn't say it necessarily halves
performance, but it definitely takes a share of CPU.
> custom cache makes sense for the request cache, where objects are
> allocated at an incredibly high rate. But is this level of update
> frequency a valid use case here?
I can think of some. For example, there was interest before in
installing a file for just 2-3 IO operations while also fully bypassing
the normal file table. I don't really see why we wouldn't use it.
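To make the use case concrete, the pattern I have in mind looks roughly
like the liburing sketch below (error handling dropped, a single slot,
path and sizes just for illustration; it assumes a one-slot fixed-file
table was registered earlier with io_uring_register_files()). The fully
bypassing variant would open straight into the fixed table with
io_uring_prep_openat_direct() instead of open() + update, but the update
churn is the same idea: each io_uring_register_files_update() call goes
through the rsrc node path that this patch caches.

#include <fcntl.h>
#include <unistd.h>
#include <liburing.h>

/* Point fixed-file slot 0 at a short-lived fd, do one fixed-file read,
 * then clear the slot again. Repeating this per file is the kind of
 * update frequency in question.
 */
static int read_once_via_fixed_slot(struct io_uring *ring, const char *path,
                                    char *buf, unsigned len)
{
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int fd, update;

        fd = open(path, O_RDONLY);
        if (fd < 0)
                return -1;
        /* install into slot 0; the table holds its own file reference */
        io_uring_register_files_update(ring, 0, &fd, 1);
        close(fd);

        sqe = io_uring_get_sqe(ring);
        /* with IOSQE_FIXED_FILE the fd argument is the table index */
        io_uring_prep_read(sqe, 0, buf, len, 0);
        sqe->flags |= IOSQE_FIXED_FILE;
        io_uring_submit(ring);
        io_uring_wait_cqe(ring, &cqe);
        io_uring_cqe_seen(ring, cqe);

        /* remove the file from the slot again */
        update = -1;
        io_uring_register_files_update(ring, 0, &update, 1);
        return 0;
}
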
> If it is indeed a significant performance improvement, I guess it is
> fine to have another user of the cache. But I'd be curious to know how
> much of the performance improvement you mentioned in the cover letter is
> due to this patch!
It was definitely sticking out in profiles: 5-10% of cycles, maybe more.
--
Pavel Begunkov
Thread overview: 23+ messages
2023-03-30 14:53 [RFC 00/11] optimise registered buffer/file updates Pavel Begunkov
2023-03-30 14:53 ` [PATCH 01/11] io_uring/rsrc: use non-pcpu refcounts for nodes Pavel Begunkov
2023-03-30 14:53 ` [PATCH 02/11] io_uring/rsrc: keep cached refs per node Pavel Begunkov
2023-03-30 14:53 ` [PATCH 03/11] io_uring: don't put nodes under spinlocks Pavel Begunkov
2023-03-30 14:53 ` [PATCH 04/11] io_uring: io_free_req() via tw Pavel Begunkov
2023-03-30 14:53 ` [PATCH 05/11] io_uring/rsrc: protect node refs with uring_lock Pavel Begunkov
2023-03-30 14:53 ` [PATCH 06/11] io_uring/rsrc: kill rsrc_ref_lock Pavel Begunkov
2023-03-30 14:53 ` [PATCH 07/11] io_uring/rsrc: rename rsrc_list Pavel Begunkov
2023-03-30 14:53 ` [PATCH 08/11] io_uring/rsrc: optimise io_rsrc_put allocation Pavel Begunkov
2023-03-30 14:53 ` [PATCH 09/11] io_uring/rsrc: don't offload node free Pavel Begunkov
2023-03-30 14:53 ` [PATCH 10/11] io_uring/rsrc: cache struct io_rsrc_node Pavel Begunkov
2023-03-31 14:09 ` Gabriel Krisman Bertazi
2023-03-31 16:27 ` Pavel Begunkov
2023-04-01 0:04 ` Gabriel Krisman Bertazi
2023-04-04 13:21 ` Pavel Begunkov [this message]
2023-04-04 15:48 ` Gabriel Krisman Bertazi
2023-04-04 15:52 ` Jens Axboe
2023-04-04 16:53 ` Gabriel Krisman Bertazi
2023-04-04 18:26 ` Pavel Begunkov
2023-03-30 14:53 ` [PATCH 11/11] io_uring/rsrc: add lockdep sanity checks Pavel Begunkov
2023-03-31 13:35 ` [RFC 00/11] optimise registered buffer/file updates Gabriel Krisman Bertazi
2023-03-31 16:21 ` Pavel Begunkov
2023-03-31 15:18 ` Jens Axboe