From: Gabriel Krisman Bertazi <[email protected]>
To: Jens Axboe <[email protected]>
Cc: Pavel Begunkov <[email protected]>,
	[email protected], [email protected]
Subject: Re: [PATCH 10/11] io_uring/rsrc: cache struct io_rsrc_node
Date: Tue, 04 Apr 2023 13:53:01 -0300
Message-ID: <[email protected]>
In-Reply-To: <[email protected]> (Jens Axboe's message of "Tue, 4 Apr 2023 09:52:55 -0600")

Jens Axboe <[email protected]> writes:

> On 4/4/23 9:48 AM, Gabriel Krisman Bertazi wrote:
>> Pavel Begunkov <[email protected]> writes:
>> 
>>> On 4/1/23 01:04, Gabriel Krisman Bertazi wrote:
>>>> Pavel Begunkov <[email protected]> writes:
>> 
>>>>> I didn't try it, but kmem_cache vs kmalloc, IIRC, doesn't bring us
>>>>> much, definitely doesn't spare us from locking, and the overhead
>>>>> definitely wasn't satisfactory for requests before.
>>>> There are no locks in the fast path of slub, as far as I know.  It has
>>>> a per-cpu cache that is refilled once empty, quite similar to the fast
>>>> path of this cache.  I imagine the performance hit in slub comes from
>>>> the barrier and atomic operations?
>>>
>>> Yeah, I mean all kinds of synchronisation. And I don't think
>>> that's the main offender here; the test is single-threaded without
>>> contention and the system was mostly idle.
>>>
>>>> kmem_cache works fine for most hot paths of the kernel.  I think this
>>>
>>> It doesn't for io_uring. There are caches for the net side and now
>>> in the block layer as well. I wouldn't say it necessarily halves
>>> performance, but it definitely takes a share of CPU.
>> 
>> Right.  My point is that all these caches (block, io_uring) duplicate
>> what the slab cache is meant to do.  Since slab became a bottleneck, I'm
>> looking at how to improve the situation on the slab side, to see if we
>> can drop the caching here and in block/.
>
> That would certainly be a worthy goal, and I do agree that these caches
> are (largely) working around deficiencies. One important point you may
> be missing is that most of this caching gets its performance both from
> avoiding atomics in slub and from the guarantee that both alloc and
> free happen from process context. The block IRQ bits are a bit
> different, but apart from that, it's true elsewhere. Caching that needs
> to even disable IRQs locally generally doesn't beat out slub by much;
> the big wins are the cases where we know free+alloc is done in process
> context.

Yes, I noticed that.  I was thinking of exposing a flag at kmem_cache
creation time to tell slab the user promises not to use it from IRQ
context, so it doesn't need to worry about nested invocation in the
allocation/free path.  Then, for those caches, have a
kmem_cache_alloc_locked variant, where the synchronization is maintained
by the caller (i.e. by ->uring_lock here), so it can manipulate the
cache without atomics.
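
To make that concrete, something along these lines (hypothetical sketch
only: neither SLAB_NO_IRQ nor the *_locked variants exist in slab today,
the names are made up):

	/* Created with a promise of no IRQ-context use, so slub
	 * could skip the irq-safe handling on its per-cpu freelist. */
	cache = kmem_cache_create("io_rsrc_node",
				  sizeof(struct io_rsrc_node),
				  0, SLAB_NO_IRQ, NULL);

	/* The caller provides the serialization, here ->uring_lock: */
	lockdep_assert_held(&ctx->uring_lock);
	node = kmem_cache_alloc_locked(cache, GFP_KERNEL);
	/* ... use node ... */
	kmem_cache_free_locked(cache, node);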

I was looking at your implementation of the block cache for inspiration
and saw how you keep a second list for IRQ frees.  I'm thinking about
how to fit a similar change inside slub.  But for now, I want to get the
simpler case working, which is all io_uring needs.
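
Roughly the shape of that two-list pattern, paraphrased (the field and
function names below are illustrative, not the actual block code):

	struct obj_cache {
		struct list_head free_list;	/* process context only */
		struct list_head free_list_irq;	/* filled from IRQ context */
	};

	static void obj_cache_put_irq(struct obj_cache *c,
				      struct list_head *entry)
	{
		unsigned long flags;

		local_irq_save(flags);
		list_add(entry, &c->free_list_irq);
		local_irq_restore(flags);
	}

	static void *obj_cache_get(struct obj_cache *c)
	{
		struct list_head *entry;

		if (list_empty(&c->free_list)) {
			unsigned long flags;

			/* Only the refill path pays for disabling IRQs;
			 * the common case stays atomic-free. */
			local_irq_save(flags);
			list_splice_init(&c->free_list_irq, &c->free_list);
			local_irq_restore(flags);
		}

		if (list_empty(&c->free_list))
			return NULL;	/* fall back to the allocator */

		entry = c->free_list.next;
		list_del(entry);
		return entry;
	}

The process-context fast path then never takes a lock or disables
interrupts, which is where the wins you mention come from.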

I'll try to get a prototype ready before LSF/MM and see if I can get the
MM folks' input there.

-- 
Gabriel Krisman Bertazi


Thread overview: 23+ messages
2023-03-30 14:53 [RFC 00/11] optimise registered buffer/file updates Pavel Begunkov
2023-03-30 14:53 ` [PATCH 01/11] io_uring/rsrc: use non-pcpu refcounts for nodes Pavel Begunkov
2023-03-30 14:53 ` [PATCH 02/11] io_uring/rsrc: keep cached refs per node Pavel Begunkov
2023-03-30 14:53 ` [PATCH 03/11] io_uring: don't put nodes under spinlocks Pavel Begunkov
2023-03-30 14:53 ` [PATCH 04/11] io_uring: io_free_req() via tw Pavel Begunkov
2023-03-30 14:53 ` [PATCH 05/11] io_uring/rsrc: protect node refs with uring_lock Pavel Begunkov
2023-03-30 14:53 ` [PATCH 06/11] io_uring/rsrc: kill rsrc_ref_lock Pavel Begunkov
2023-03-30 14:53 ` [PATCH 07/11] io_uring/rsrc: rename rsrc_list Pavel Begunkov
2023-03-30 14:53 ` [PATCH 08/11] io_uring/rsrc: optimise io_rsrc_put allocation Pavel Begunkov
2023-03-30 14:53 ` [PATCH 09/11] io_uring/rsrc: don't offload node free Pavel Begunkov
2023-03-30 14:53 ` [PATCH 10/11] io_uring/rsrc: cache struct io_rsrc_node Pavel Begunkov
2023-03-31 14:09   ` Gabriel Krisman Bertazi
2023-03-31 16:27     ` Pavel Begunkov
2023-04-01  0:04       ` Gabriel Krisman Bertazi
2023-04-04 13:21         ` Pavel Begunkov
2023-04-04 15:48           ` Gabriel Krisman Bertazi
2023-04-04 15:52             ` Jens Axboe
2023-04-04 16:53               ` Gabriel Krisman Bertazi [this message]
2023-04-04 18:26                 ` Pavel Begunkov
2023-03-30 14:53 ` [PATCH 11/11] io_uring/rsrc: add lockdep sanity checks Pavel Begunkov
2023-03-31 13:35 ` [RFC 00/11] optimise registered buffer/file updates Gabriel Krisman Bertazi
2023-03-31 16:21   ` Pavel Begunkov
2023-03-31 15:18 ` Jens Axboe
