public inbox for [email protected]
From: Hao Xu <[email protected]>
To: Pavel Begunkov <[email protected]>, Jens Axboe <[email protected]>
Cc: [email protected], Joseph Qi <[email protected]>
Subject: Re: [RFC 0/2] io_task_work optimization
Date: Thu, 26 Aug 2021 01:26:54 +0800	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 2021/8/26 12:46 AM, Pavel Begunkov wrote:
> On 8/25/21 5:39 PM, Hao Xu wrote:
>> On 2021/8/25 11:58 PM, Jens Axboe wrote:
>>> On 8/23/21 12:36 PM, Hao Xu wrote:
>>>> Running task_work may not be a big bottleneck now, but it never
>>>> hurts to move it forward a little.
>>>> I'm trying to construct tests to prove it is better in the cases
>>>> where it theoretically should be.
>>>> So far I can only show it is no worse by running fio tests (it is
>>>> sometimes a little better), so I'm putting it here for comments
>>>> and suggestions.
>>>
>>> I think this is interesting, particularly for areas where we have a mix
>>> of task_work uses because obviously it won't really matter if the
>>> task_work being run is homogeneous.
>>>
>>> That said, would be nice to have some numbers associated with it. We
>>> have a few classes of types of task_work:
>>>
>>> 1) Work completes really fast, we want to just do those first
>>> 2) Work is pretty fast, like async buffered read copy
>>> 3) Work is more expensive, might require a full retry of the operation
>>>
>>> Might make sense to make this classification explicit. Problem is, with
>>> any kind of scheduling like that, you risk introducing latency bubbles
>>> because the prio1 list grows really fast, for example.
>> Yes, this may introduce latency if an overwhelming amount of 1)
>> arrives in a short time. I'll run more tests to see whether the
>> problem exists and whether there is a better way, such as putting
>> only a limited number of 1) at the front. Anyway, I'll update this
>> thread when I get some data.
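The classification above can be sketched in C. This is a hypothetical illustration, not io_uring's actual implementation: the `tw_class` enum, the `promote_limit` field, and the function names are all invented for the example. It shows class 1) work being promoted to the head of the list, but only a bounded number of entries per drain cycle, which is one way to limit the latency bubble Jens describes when fast work arrives in bursts.

```c
/* Hypothetical sketch (not actual io_uring code): a task_work list
 * where "fast" (e.g. IRQ completion) entries are promoted to the
 * head, but only a bounded number per drain cycle, so bursts of fast
 * work cannot starve the slower classes queued behind them. */
#include <assert.h>
#include <stddef.h>

enum tw_class { TW_FAST, TW_NORMAL };   /* class 1) vs classes 2)/3) */

struct tw_node {
	enum tw_class cls;
	void (*func)(struct tw_node *);
	struct tw_node *next;
};

struct tw_list {
	struct tw_node *head;
	struct tw_node *tail;
	int promoted;        /* fast entries promoted since last drain */
	int promote_limit;   /* cap on head insertions per drain cycle */
};

static void tw_add(struct tw_list *l, struct tw_node *n)
{
	n->next = NULL;
	if (n->cls == TW_FAST && l->promoted < l->promote_limit) {
		/* bounded promotion: put fast work at the head */
		n->next = l->head;
		l->head = n;
		if (!l->tail)
			l->tail = n;
		l->promoted++;
		return;
	}
	/* everything else keeps FIFO order at the tail */
	if (l->tail)
		l->tail->next = n;
	else
		l->head = n;
	l->tail = n;
}

static void tw_run(struct tw_list *l)
{
	struct tw_node *n = l->head;

	l->head = l->tail = NULL;
	l->promoted = 0;       /* new drain cycle, promotion budget resets */
	while (n) {
		struct tw_node *next = n->next;

		n->func(n);
		n = next;
	}
}
```

Once `promoted` hits `promote_limit`, further fast entries fall back to the tail, so a flood of class 1) work degrades gracefully to FIFO instead of indefinitely delaying classes 2) and 3).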
> 
> Not sure, but it looks like IRQ completion batching is coming in
> 5.15. With that you may also want to flush completions after the
> IRQ sublist is exhausted.
> 
> May be worth considering having 2 lists in the future
I'll think about that, and there may be a way to reduce lock cost if
there are multiple lists.
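Pavel's two-list idea can be sketched as follows. Again this is a hypothetical illustration under stated assumptions, not the actual io_uring code: the `tw_queues` struct, `tw_push`, and the `flush_completions` callback are invented names. It shows the ordering he suggests (drain the IRQ sublist, flush the batched completions, then run the slower work), and with separate list heads, producers adding to different lists never contend on the same head, which is one way the locking cost could shrink.

```c
/* Hypothetical sketch of the two-list idea (not actual io_uring
 * code): IRQ-completion work lives on its own list and is drained
 * first, the batched completions are flushed, and only then does the
 * remaining task_work run. */
#include <assert.h>
#include <stddef.h>

struct tw_node {
	void (*func)(struct tw_node *);
	struct tw_node *next;
};

struct tw_queues {
	struct tw_node *irq_list;    /* class 1): IRQ completion work */
	struct tw_node *normal_list; /* classes 2)/3): everything else */
};

static void tw_push(struct tw_node **list, struct tw_node *n)
{
	/* LIFO push; a real implementation would use cmpxchg here */
	n->next = *list;
	*list = n;
}

static struct tw_node *tw_reverse(struct tw_node *n)
{
	struct tw_node *prev = NULL;

	while (n) {                 /* restore FIFO order before running */
		struct tw_node *next = n->next;

		n->next = prev;
		prev = n;
		n = next;
	}
	return prev;
}

static void tw_run_list(struct tw_node *n)
{
	while (n) {
		struct tw_node *next = n->next;

		n->func(n);
		n = next;
	}
}

static void tw_run(struct tw_queues *q, void (*flush_completions)(void))
{
	/* 1) drain the IRQ sublist, 2) flush the batched completions,
	 * 3) only then run the slower work */
	tw_run_list(tw_reverse(q->irq_list));
	q->irq_list = NULL;
	flush_completions();
	tw_run_list(tw_reverse(q->normal_list));
	q->normal_list = NULL;
}
```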
> 



Thread overview: 11+ messages
2021-08-23 18:36 [RFC 0/2] io_task_work optimization Hao Xu
2021-08-23 18:36 ` [PATCH 1/2] io_uring: run task_work when sqthread is waken up Hao Xu
2021-08-23 18:36 ` [PATCH 2/2] io_uring: add irq completion work to the head of task_list Hao Xu
2021-08-23 18:41   ` Hao Xu
2021-08-24 12:57   ` Pavel Begunkov
2021-08-25  3:19     ` Hao Xu
2021-08-25 11:18       ` Pavel Begunkov
2021-08-25 15:58 ` [RFC 0/2] io_task_work optimization Jens Axboe
2021-08-25 16:39   ` Hao Xu
2021-08-25 16:46     ` Pavel Begunkov
2021-08-25 17:26       ` Hao Xu [this message]
