From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Subject: Re: [PATCH RFC 00/17] playing around req alloc
Date: Wed, 10 Feb 2021 07:27:24 -0700 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 2/10/21 4:53 AM, Pavel Begunkov wrote:
> On 10/02/2021 03:23, Jens Axboe wrote:
>> On 2/9/21 8:14 PM, Pavel Begunkov wrote:
>>> On 10/02/2021 02:08, Jens Axboe wrote:
>>>> On 2/9/21 5:03 PM, Pavel Begunkov wrote:
>>>>> Unfolding previous ideas on persistent req caches. Patches 4-7
>>>>> inclusive slashed ~20% of overhead for the nops benchmark; I haven't
>>>>> benchmarked the rest personally yet, but according to perf it should
>>>>> be ~30-40% in total. That's for IOPOLL + inline completion cases,
>>>>> obviously w/o async/IRQ completions.
>>>>
>>>> And task_work, which is sort-of inline.
>>>>
>>>>> Jens,
>>>>> 1. 11/17 removes deallocations at the end of submit_sqes. Looks like
>>>>> you forgot, or just didn't do that.
>>>
>>> And without the patches I added, it wasn't even necessary, so
>>> never mind.
>>
>> OK good, I was a bit confused about that one...
>>
>>>>> 2. Lists are slow and not great cache-wise, that's why I want at
>>>>> least the combined approach from 12/17.
>>>>
>>>> This is only true if you're browsing the full list. If you add to the
>>>> front of a cache, and remove from the front, then the cache footprint
>>>> of lists is good.
>>>
>>> OK, good point, but I still don't think it's great. E.g. 7/17 did
>>> improve performance a bit for me, as I mentioned in the related RFC.
>>> And that was for inline-completed nops, going over the list/array and
>>> always touching all reqs.
>>
>> Agree, array is always a bit better. Just saying that it's not a huge
>> deal unless you're traversing the list, in which case lists are indeed
>> horrible. But for popping off the first entry (or adding one), it's not
>> bad at all.
>
> btw, looks like it can be replaced with a singly-linked list (a stack).
It could, yes.
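For illustration, a minimal userspace sketch of that idea: the request
free list only ever pushes and pops at the head, so a singly-linked list
(a stack) suffices, with no prev pointers to maintain. The struct and
function names below are made up for the example, not the actual
io_uring ones.

```c
/* Intrusive singly-linked free-list stack, userspace sketch. */
struct req {
	struct req *next;	/* intrusive link, like a list node in io_kiocb */
	int data;
};

struct req_stack {
	struct req *top;
};

static void req_push(struct req_stack *s, struct req *r)
{
	/* Add to front: one pointer store, no prev link needed */
	r->next = s->top;
	s->top = r;
}

static struct req *req_pop(struct req_stack *s)
{
	/* Remove from front; returns NULL when the stack is empty */
	struct req *r = s->top;

	if (r)
		s->top = r->next;
	return r;
}
```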
>>>>> 3. Instead of lists in "use persistent request cache" I had in mind a
>>>>> slightly different approach: grow the req alloc cache to 32-128 (or a
>>>>> hint from userspace), batch-alloc by 8 as before, and recycle _all_
>>>>> reqs right into it. If it overflows, do kfree().
>>>>> It should give a probabilistically high hit rate, amortising out most
>>>>> allocations. Pros: it doesn't grow ~infinitely as lists can. Cons:
>>>>> there are always counterexamples. But as I don't have numbers to back
>>>>> it up, I took your implementation. Maybe we'll reconsider later.
>>>>
>>>> It shouldn't grow bigger than what was used, but the downside is that
>>>> it will grow as big as the biggest usage ever. We could prune, if need
>>>> be, of course.
>>>
>>> Yeah, that was the point. But not a deal-breaker in either case.
>>
>> Agree
>>
>>>> As far as I'm concerned, the hint from user space is the submit count.
>>>
>>> I mean a hint at setup time, like max QD, so we can size the req
>>> cache accordingly. Not that it matters much
>>
>> I'd rather grow it dynamically, only the first few iterations will hit
>> the alloc. Which is fine, and better than pre-populating. Assuming I
>> understood you correctly here...
>
> I guess not. It's not about the number of requests per se, but the
> space in the alloc cache. Like this:
>
> struct io_submit_state {
> ...
> void *reqs[userspace_hint];
> };
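A runnable userspace approximation of that snippet: since C can't size a
struct array from a runtime hint, the cache can be sized once at setup
via a flexible array member. Field and function names are illustrative.

```c
#include <stdlib.h>

/* Submit state whose request cache is sized from a setup-time hint
 * (e.g. the queue depth requested by userspace). */
struct io_submit_state {
	unsigned int nr;	/* cached requests currently held */
	unsigned int capacity;	/* fixed at setup from the hint */
	void *reqs[];		/* flexible array member, capacity slots */
};

static struct io_submit_state *submit_state_create(unsigned int hint)
{
	struct io_submit_state *state;

	/* One allocation covers the header plus the request array */
	state = calloc(1, sizeof(*state) + hint * sizeof(void *));
	if (state)
		state->capacity = hint;
	return state;
}
```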
>
>>
>>>>
>>>>> I'll revise tomorrow on a fresh head + do some performance testing,
>>>>> and am leaving it RFC until then.
>>>>
>>>> I'll look too and test this, thanks!
>>
>> Tests out well for me with the suggested edits I made. Nops are
>> massively improved, as suspected. But realistic workloads benefit
>> nicely too.
>>
>> I'll send out a few patches I have on top tomorrow. Not fixes, but just
>> further improvements/features/accounting.
>
> Sounds great!
> btw, "lock_free_list", which is not a "lock-free" list, can be
> confusing; I'd suggest renaming it to free_list_lock.
Or maybe just locked_free_list would make it more understandable. The
current name is misleading.
--
Jens Axboe