From: Jens Axboe <[email protected]>
To: Hao Xu <[email protected]>, io-uring <[email protected]>
Cc: Pavel Begunkov <[email protected]>, [email protected]
Subject: Re: [RFC] a new way to achieve asynchronous IO
Date: Thu, 23 Jun 2022 08:08:08 -0600	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 6/23/22 7:31 AM, Hao Xu wrote:
> On 6/20/22 21:41, Jens Axboe wrote:
>> On 6/20/22 6:01 AM, Hao Xu wrote:
>>> Hi,
>>> I've been thinking about how we do async IO. The current model is:
>>> (given we are using SQPOLL mode)
>>>
>>> the sqthread does:
>>> (a) Issue a request with the nowait/nonblock flag set.
>>> (b) If it would block, return -EAGAIN.
>>> (c) The io_uring layer catches this -EAGAIN and wakes up/creates
>>> an io-worker to execute the request synchronously.
>>> (d) Move on and issue the other requests through the same steps.
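
A minimal userspace sketch of that flow (plain C; a nonblocking pipe
read stands in for the nowait issue and a pthread stands in for the
io-worker, so everything below is illustrative rather than actual
kernel code):

    /*
     * Userspace analogy of steps (a)-(d), not kernel code: a
     * nonblocking pipe read stands in for the nowait issue, and a
     * pthread stands in for the io-worker that re-executes the
     * request synchronously.
     */
    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    struct req { int fd; char buf[16]; };

    static void *io_worker(void *arg)
    {
        struct req *r = arg;

        /* (c) the worker drops the nonblock flag and redoes the
         * request synchronously, from the very beginning */
        fcntl(r->fd, F_SETFL, 0);
        ssize_t n = read(r->fd, r->buf, sizeof(r->buf));
        printf("io-worker: read %zd bytes\n", n);
        return NULL;
    }

    int main(void)
    {
        static struct req r;
        int pfd[2];
        pthread_t t;

        pipe(pfd);
        fcntl(pfd[0], F_SETFL, O_NONBLOCK);
        r.fd = pfd[0];

        /* (a) issue the request with nonblock semantics */
        if (read(r.fd, r.buf, sizeof(r.buf)) < 0 && errno == EAGAIN) {
            /* (b) it would block: (c) punt it to an io-worker */
            pthread_create(&t, NULL, io_worker, &r);
            /* (d) the submitter would go on to other requests here */
            write(pfd[1], "hello", 5);    /* satisfy the worker's read */
            pthread_join(t, NULL);
        }
        return 0;
    }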
>>>
>>> This implementation has two downsides:
>>> (1) we have to manually find all the blocking points in the IO stack
>>> and make them "nowait/nonblock friendly".
>>> (2) when we raise an io-worker to do the request, it re-issues the
>>> request from the very beginning. This is more than a little
>>> inefficient.
>>>
>>>
>>> While I think we can actually do it in a reverse way:
>>> (given we are using SQPOLL mode)
>>>
>>> the sqthread1 does:
>>> (a) Issue a request the synchronous way.
>>> (b) If it blocks and is scheduled out, raise another sqthread2.
>>> (c) sqthread2 tries to issue other requests in the same way.
>>>
>>> This solves problem (1), and may solve (2).
>>> For (1), we just wake up the next sqthread at the beginning of
>>> schedule(), just like the io-worker and system-worker do. No need to
>>> find all the blocking points.
>>> For (2), the blocked request simply continues from where it blocked
>>> once the resource becomes available.
>>>
>>> What we need to take care of is making sure there is only one task
>>> submitting the requests at a time.
>>>
>>> To achieve this, we can maintain a pool of sqthreads, just like the iowq.
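
A toy model of such a pool (plain C with pthreads; the baton mutex
stands in for the "one submitter" rule, and the hand-off is done by
hand here where the kernel version would do it from the schedule()
hook; all names are made up for the sketch):

    /*
     * Toy model of the sqthread pool, not kernel code: a "baton" mutex
     * enforces a single active submitter. sqthread1 hands the baton
     * over right before it blocks, so sqthread2 can take over
     * submission.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t baton = PTHREAD_MUTEX_INITIALIZER;
    static int pfd[2];

    static void *sqthread1(void *arg)
    {
        char buf[8];

        pthread_mutex_lock(&baton);         /* become the submitter */
        printf("sqthread1: issuing a request that will block\n");
        pthread_mutex_unlock(&baton);       /* pass the baton first */
        read(pfd[0], buf, sizeof(buf));     /* blocks: pipe is empty */
        printf("sqthread1: resumed where it blocked\n");
        return NULL;
    }

    static void *sqthread2(void *arg)
    {
        pthread_mutex_lock(&baton);         /* take over submission */
        printf("sqthread2: issuing the remaining requests\n");
        pthread_mutex_unlock(&baton);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pipe(pfd);
        pthread_create(&t1, NULL, sqthread1, NULL);
        usleep(100000);                     /* let sqthread1 block first */
        pthread_create(&t2, NULL, sqthread2, NULL);
        usleep(100000);
        write(pfd[1], "data", 4);           /* resource ready: wake sqthread1 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }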
>>>
>>> I've done a very simple/ugly POC to demonstrate this:
>>>
>>> https://github.com/HowHsu/linux/commit/183be142493b5a816b58bd95ae4f0926227b587b
>>>
>>> I also wrote a simple test for it, which submits two sqes: one
>>> read(pipe) and one nop request. The first one blocks since there is
>>> no data in the pipe, so a new sqthread is created/woken up to submit
>>> the second one. Then some data is written to the pipe (by an
>>> unrelated user thread), and soon the first sqthread is woken up and
>>> continues the request.
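
The userspace side of such a test could look roughly like this with
liburing (a plain ring rather than the POC's sqthread pool, so on a
mainline kernel the blocked read is punted to io-wq instead, but the
visible completions are the same):

    #include <liburing.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int pfd[2];

    static void *writer(void *arg)
    {
        (void)arg;
        usleep(100000);                 /* let the read get submitted first */
        write(pfd[1], "ping", 4);       /* the unrelated user thread */
        return NULL;
    }

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        static char buf[16];
        pthread_t t;

        pipe(pfd);
        io_uring_queue_init(8, &ring, 0);

        sqe = io_uring_get_sqe(&ring);          /* sqe 1: read(pipe) */
        io_uring_prep_read(sqe, pfd[0], buf, sizeof(buf), 0);
        sqe->user_data = 1;

        sqe = io_uring_get_sqe(&ring);          /* sqe 2: nop */
        io_uring_prep_nop(sqe);
        sqe->user_data = 2;

        io_uring_submit(&ring);
        pthread_create(&t, NULL, writer, NULL);

        for (int i = 0; i < 2; i++) {           /* expect the nop, then the read */
            io_uring_wait_cqe(&ring, &cqe);
            printf("user_data=%llu res=%d\n",
                   (unsigned long long)cqe->user_data, cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        pthread_join(t, NULL);
        io_uring_queue_exit(&ring);
        return 0;
    }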
>>>
>>> If the idea has no fatal issues, I'll turn the POC into real patches.
>>> Any comments are welcome!
>>
>> One thing I've always wanted to try out is kind of similar to this, but
>> a superset of it. Basically io-wq isn't an explicit offload mechanism;
>> the offload just happens automatically if the issue blocks. This
>> applies to both SQPOLL and non-SQPOLL.
>>
>> This takes a page out of the old syslet/threadlet work that Ingo Molnar
>> did way back in the day [1], though it never really went anywhere. But
>> the pass-on-block primitive would apply very nicely to io_uring.
> 
> I've read part of the syslet/threadlet patchset, and it seems to have
> something I need. My first idea about the new iowq offload was just
> like syslet: if blocked, trigger a new worker, deliver the context to
> it, and then update the current context so that we return to the place
> of sqe submission. But I just didn't know how to do it.

Exactly, what you mentioned was very close to what I had considered in
the past, and to what syslet/threadlet attempted to do. Except it flips
things upside down a bit, which I do think is probably the saner way to
do it, rather than having the original task block and forking a new one.

> By the way, may I ask why syslet/threadlet was never merged into
> mainline? The mail thread is very long, and I haven't gotten a chance
> to read all of it.

Not quite sure, it's been a long time. IMHO it was a good idea looking
for the right interface, which we now have. So the time may finally be
ripe to do something like this.
> 
> For the approach I posted, I found it is actually unrelated to SQPOLL.
> The original context just wakes up a worker in the pool to do the
> submission, and if one blocks, another one wakes up to continue the
> submission. It is definitely easier to implement than something like
> syslet (context delivery), since the new worker naturally goes to the
> place of submission and thus no context delivery is needed. But a
> downside is that every time we call io_uring_enter to submit a batch
> of sqes, there is a wakeup at the beginning.
> 
> I'll see if I can implement a context-delivery version.

Sounds good, thanks.

-- 
Jens Axboe


Thread overview: 9+ messages
2022-06-20 12:01 [RFC] a new way to achieve asynchronous IO Hao Xu
2022-06-20 12:03 ` Hao Xu
2022-06-20 13:41 ` Jens Axboe
2022-06-21  3:38   ` Hao Xu
2022-06-23 13:31   ` Hao Xu
2022-06-23 14:08     ` Jens Axboe [this message]
2022-06-27  7:11       ` Hao Xu
2022-06-28 13:33         ` Hao Xu
2022-07-12  7:11       ` Hao Xu
