From: Hao Xu <[email protected]>
To: Pavel Begunkov <[email protected]>, Jens Axboe <[email protected]>
Cc: [email protected], Joseph Qi <[email protected]>
Subject: Re: [PATCH 2/2] io_uring: implementation of IOSQE_ASYNC_HYBRID logic
Date: Mon, 11 Oct 2021 16:58:08 +0800 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 2021/10/11 4:55 PM, Hao Xu wrote:
> On 2021/10/9 8:46 PM, Pavel Begunkov wrote:
>> On 10/8/21 13:36, Hao Xu wrote:
>>> The process of this kind of requests is:
>>>
>>> step1: original context:
>>> queue it to io-worker
>>> step2: io-worker context:
>>> nonblock try (the old logic is a synchronous try here)
>>> |
>>> |--fail--> arm poll
>>> |
>>> |--(fail/ready)-->synchronous issue
>>> |
>>> |--(succeed)-->worker finishes its job; task work (tw)
>>> takes over the req
>>>
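(For readers following the flow above, here is a rough C sketch of the
intended io-worker path; the helper and flag names below are only
illustrative and may not match the actual patch:)

	unsigned int issue_flags = IO_URING_F_NONBLOCK;

	do {
		/* step 2: nonblocking attempt first */
		ret = io_issue_sqe(req, issue_flags);
		if (ret != -EAGAIN)
			break;			/* done; tw takes over the req */
		/* got -EAGAIN: try to arm poll instead of blocking the worker */
		if (io_arm_poll_handler(req) == IO_APOLL_OK)
			return;			/* poll armed, the worker is free */
		/* arming failed or the file is already ready: synchronous issue */
		issue_flags = 0;
	} while (1);
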
>>> This works much better than IOSQE_ASYNC in cases where cpu resources
>>> are scarce or unbound max_worker is small. In these cases, the number
>>> of io-workers easily reaches max_worker, new workers cannot be created,
>>> and the running workers are stuck handling old work in IOSQE_ASYNC mode.
>>>
>>> On my machine, with unbound max_worker set to 20, running an echo-server
>>> gives: (arguments: register_file, connection number is 1000, message size
>>> is 12 bytes)
>>> IOSQE_ASYNC: 76664.151 tps
>>> IOSQE_ASYNC_HYBRID: 166934.985 tps
>>>
>>> Suggested-by: Jens Axboe <[email protected]>
>>> Signed-off-by: Hao Xu <[email protected]>
>>> ---
>>> fs/io_uring.c | 42 ++++++++++++++++++++++++++++++++++++++----
>>> 1 file changed, 38 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index a99f7f46e6d4..024cef09bc12 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -1409,7 +1409,7 @@ static void io_prep_async_work(struct io_kiocb *req)
>>> req->work.list.next = NULL;
>>> req->work.flags = 0;
>>> - if (req->flags & REQ_F_FORCE_ASYNC)
>>> + if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_ASYNC_HYBRID))
>>> req->work.flags |= IO_WQ_WORK_CONCURRENT;
>>> if (req->flags & REQ_F_ISREG) {
>>> @@ -5575,7 +5575,13 @@ static int io_arm_poll_handler(struct io_kiocb *req)
>>> req->apoll = apoll;
>>> req->flags |= REQ_F_POLLED;
>>> ipt.pt._qproc = io_async_queue_proc;
>>> - io_req_set_refcount(req);
>>> + /*
>>> + * REQ_F_REFCOUNT set indicates we are in io-worker context, where we
>>
>> Nope, it indicates that it needs more complex refcounting. It includes
>> linked timeouts but also poll because of req_ref_get for double poll.
>> fwiw, with some work it can be removed for polls, harder (and IMHO not
>> necessary) to do for timeouts.
(^ my reply text below got merged into the quote here in the previous
mail, it is messed up..)
> Agree, I now realize that the explanation I put here is not good at all,
> I actually want to say that the io-worker already sets refs = 2 (it is
> also possible that prep_link_out sets 1 and the io-worker adds the other
> 1; previously I missed this situation). One will be put at completion
> time, the other one will be put in io_wq_free_work(). So there is no need
> to set the refcount here again. I looked into io_req_set_refcount():
> since it does nothing if the refcount is already non-zero, it should be
> ok to keep this one as it was.
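(For context, the helper in question looks roughly like the following;
paraphrased from the 5.15-era fs/io_uring.c, so possibly not verbatim:)

	static inline void __io_req_set_refcount(struct io_kiocb *req, int nr)
	{
		/* only initialise the refcount once; later calls are no-ops */
		if (!(req->flags & REQ_F_REFCOUNT)) {
			req->flags |= REQ_F_REFCOUNT;
			atomic_set(&req->refs, nr);
		}
	}

	static inline void io_req_set_refcount(struct io_kiocb *req)
	{
		__io_req_set_refcount(req, 1);
	}

So calling it again when REQ_F_REFCOUNT is already set leaves the existing
refs untouched, which is why keeping the plain io_req_set_refcount() call
should be safe.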
>>
>>> + * already explicitly set the submission and completion ref. So no
>>
>> I'd say there is no notion of submission vs completion refs anymore.
>>
>>> + * need to set refcount here if that is the case.
>>> + */
>>> + if (!(req->flags & REQ_F_REFCOUNT))
>>
>> Compare it with io_req_set_refcount(), that "if" is a no-op
>>
>>> + io_req_set_refcount(req);
>>> ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
>>> io_async_wake);
>>> @@ -6704,8 +6710,11 @@ static void io_wq_submit_work(struct io_wq_work *work)
>>> ret = -ECANCELED;
>>> if (!ret) {
>>> + bool need_poll = req->flags & REQ_F_ASYNC_HYBRID;
>>> +
>>> do {
>>> - ret = io_issue_sqe(req, 0);
>>> +issue_sqe:
>>> + ret = io_issue_sqe(req, need_poll ? IO_URING_F_NONBLOCK : 0);
>>
>> It's buggy, you will get all kinds of kernel crashes and leaks.
>> Currently IO_URING_F_NONBLOCK has a dual meaning: the obvious nonblock,
>> but also whether we hold uring_lock or not. You'd need to split the flag
>> into two, i.e. add IO_URING_F_LOCKED.
> I'll look into it. I was thinking about doing the first nowait try in
> the original context, but then I thought it doesn't make sense to bring
> up a worker just for arming the poll infra, since thread creation and
> scheduling have their overhead.
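(To illustrate the flag split suggested above, a hypothetical sketch;
these names are only an example of the idea, not a settled API:)

	enum {
		IO_URING_F_NONBLOCK	= 1,	/* the issue attempt must not sleep */
		IO_URING_F_LOCKED	= 2,	/* the caller holds ctx->uring_lock */
	};

	/* io-wq never holds uring_lock, so it would pass only the NONBLOCK bit */
	ret = io_issue_sqe(req, need_poll ? IO_URING_F_NONBLOCK : 0);

That way a nonblocking first try from io-wq no longer implies that the
lock is held.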
>>
Thread overview: 13+ messages
2021-10-08 12:36 [PATCH for-5.16 0/2] async hybrid, a new way for pollable requests Hao Xu
2021-10-08 12:36 ` [PATCH 1/2] io_uring: add IOSQE_ASYNC_HYBRID flag " Hao Xu
2021-10-08 12:36 ` [PATCH 2/2] io_uring: implementation of IOSQE_ASYNC_HYBRID logic Hao Xu
2021-10-09 12:46 ` Pavel Begunkov
2021-10-11 8:55 ` Hao Xu
2021-10-11 8:58 ` Hao Xu [this message]
2021-10-09 12:51 ` [PATCH for-5.16 0/2] async hybrid, a new way for pollable requests Pavel Begunkov
2021-10-11 3:08 ` Hao Xu
2021-10-12 11:39 ` Pavel Begunkov
2021-10-14 8:53 ` Hao Xu
2021-10-14 9:20 ` Hao Xu
2021-10-14 13:53 ` Hao Xu
2021-10-14 14:17 ` Pavel Begunkov