From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Cc: [email protected]
Subject: Re: [RFC 0/2] optimise local-tw task rescheduling
Date: Sun, 12 Mar 2023 09:31:28 -0600
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 3/11/23 1:53 PM, Pavel Begunkov wrote:
> On 3/11/23 20:45, Pavel Begunkov wrote:
>> On 3/11/23 17:24, Jens Axboe wrote:
>>> On 3/10/23 12:04 PM, Pavel Begunkov wrote:
>>>> io_uring extensively uses task_work, but when a task is waiting
>>>> for multiple CQEs it causes lots of rescheduling. This series
>>>> is an attempt to optimise it and be a base for future improvements.
>>>>
>>>> For some zc network tests, which eventually wait for a portion of
>>>> buffers, I got a 10x decrease in the number of context switches,
>>>> which reduced CPU consumption by more than half (17% -> 8%).
>>>> It also helps storage cases: running fio/t/io_uring against
>>>> a low-performance drive, it got a 2x decrease in the number of
>>>> context switches for QD8 and ~4x for QD32.
>>>>
>>>> Not for inclusion yet, I want to add an optimisation for when
>>>> waiting for 1 CQE.
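(Side note for anyone following along: the pattern in question is a
single task batch-waiting on completions, roughly like the minimal
liburing sketch below. The batch size of 32 and the completion
handling are illustrative, not taken from the series itself.)

	#include <liburing.h>

	static int wait_batch(struct io_uring *ring)
	{
		struct io_uring_cqe *cqe;
		unsigned head, seen = 0;
		int ret;

		/* Sleep until at least 32 CQEs are available; every
		 * premature wakeup of this task is one of the
		 * reschedules the series tries to avoid. */
		ret = io_uring_submit_and_wait(ring, 32);
		if (ret < 0)
			return ret;

		io_uring_for_each_cqe(ring, head, cqe) {
			/* handle cqe->res here */
			seen++;
		}
		io_uring_cq_advance(ring, seen);
		return 0;
	}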
>>>
>>> Ran this on the usual peak benchmark, using IRQ. IOPS is ~70M for
>>> that, and I see context switch rates of around 8.1-8.3M/sec with the
>>> current kernel.
>>>
>>> Applied the two patches, but didn't see much of a change? Performance
>>> is about the same, and the context switch rate likewise. Confused...
>>> As you probably know, this test waits for 32 IOs at a time.
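(For reference, the invocation is along these lines; the exact flags
for the peak run are illustrative here, where -p0 selects IRQ mode
rather than polling and -c32 reaps/waits for 32 completions at a
time:)

	t/io_uring -d128 -s32 -c32 -p0 -B1 -F1 /dev/nvme0n1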
>>
>> If I had to guess, it already has perfect batching, in which case
>> the patch does nothing. Maybe it's due to SSD coalescing +
>> small read I/O + the consistent, small latencies of Optanes,
>> or it might be that the scheduler / kernel side is slow
>> to react.
>
> And if that is the case, I have to note that it's quite a sterile
> test; the last time I asked, the usual batching we currently
> get for networking cases is 1-2.
I can definitely see this being very useful for the more
non-deterministic cases where "completions" come in more sporadically.
But for the networking case, if this is e.g. receives, you'd trigger
the wakeup anyway to do the actual receive? And then the CQE posting
doesn't trigger another wakeup.
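(For anyone not following the series: "local-tw" refers to the
deferred/local task_work used by IORING_SETUP_DEFER_TASKRUN rings.
A rough sketch of that setup, with the ring size illustrative:)

	#include <liburing.h>

	/* With IORING_SETUP_DEFER_TASKRUN, completion task_work is
	 * queued locally to the ring and run by the submitter task
	 * itself when it enters the kernel (e.g. to wait for CQEs),
	 * instead of via a remote task_work wakeup. */
	static int setup_local_tw_ring(struct io_uring *ring)
	{
		struct io_uring_params p = { };

		/* DEFER_TASKRUN requires a single submitter task */
		p.flags = IORING_SETUP_DEFER_TASKRUN |
			  IORING_SETUP_SINGLE_ISSUER;
		return io_uring_queue_init_params(256, ring, &p);
	}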
--
Jens Axboe
Thread overview: 17+ messages
2023-03-10 19:04 [RFC 0/2] optimise local-tw task rescheduling Pavel Begunkov
2023-03-10 19:04 ` [RFC 1/2] io_uring: add tw add flags Pavel Begunkov
2023-03-10 19:04 ` [RFC 2/2] io_uring: reduce scheduling due to tw Pavel Begunkov
2023-03-11 17:24 ` [RFC 0/2] optimise local-tw task rescheduling Jens Axboe
2023-03-11 20:45 ` Pavel Begunkov
2023-03-11 20:53 ` Pavel Begunkov
2023-03-12 15:31 ` Jens Axboe [this message]
2023-03-13 3:52 ` Pavel Begunkov
2023-03-12 15:30 ` Jens Axboe
2023-03-13 3:45 ` Pavel Begunkov
2023-03-13 14:16 ` Jens Axboe
2023-03-13 17:50 ` Pavel Begunkov
2023-03-13 22:01 ` Jens Axboe
2023-03-16 12:25 ` Pavel Begunkov
2023-03-15 2:35 ` Ming Lei
2023-03-15 16:53 ` Pavel Begunkov
2023-03-16 1:25 ` Ming Lei