public inbox for [email protected]
From: Pavel Begunkov <[email protected]>
To: Hao Xu <[email protected]>, Jens Axboe <[email protected]>
Cc: [email protected], Joseph Qi <[email protected]>
Subject: Re: [PATCH for-5.13] io_uring: maintain drain requests' logic
Date: Thu, 1 Apr 2021 11:25:47 +0100	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 01/04/2021 07:53, Hao Xu wrote:
> On 2021/4/1 6:06 AM, Pavel Begunkov wrote:
>>
>>
>> On 31/03/2021 10:01, Hao Xu wrote:
>>> Now that we have multishot poll requests, one sqe can emit multiple
>>> cqes. Take the example below:
>>>      sqe0(multishot poll)-->sqe1-->sqe2(drain req)
>>> sqe2 is designed to issue after sqe0 and sqe1 have completed, but since
>>> sqe0 is a multishot poll request, sqe2 may be issued after sqe0's event
>>> has triggered twice while sqe1 is still in flight. This isn't what users
>>> leverage drain requests for.
>>> A simple solution here is to ignore all multishot poll cqes, which means
>>> drain requests won't wait for those requests to be done.
>>>
>>> Signed-off-by: Hao Xu <[email protected]>
>>> ---
>>>   fs/io_uring.c | 9 +++++++--
>>>   1 file changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 513096759445..cd6d44cf5940 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -455,6 +455,7 @@ struct io_ring_ctx {
>>>       struct callback_head        *exit_task_work;
>>>
>>>       struct wait_queue_head        hash_wait;
>>> +    unsigned                        multishot_cqes;
>>>
>>>       /* Keep this last, we don't need it for the fast path */
>>>       struct work_struct        exit_work;
>>> @@ -1181,8 +1182,8 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
>>>       if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
>>>           struct io_ring_ctx *ctx = req->ctx;
>>>
>>> -        return seq != ctx->cached_cq_tail
>>> -                + READ_ONCE(ctx->cached_cq_overflow);
>>> +        return seq + ctx->multishot_cqes != ctx->cached_cq_tail
>>> +            + READ_ONCE(ctx->cached_cq_overflow);
>>>       }
>>>
>>>       return false;
>>> @@ -4897,6 +4898,7 @@ static bool io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
>>>   {
>>>       struct io_ring_ctx *ctx = req->ctx;
>>>       unsigned flags = IORING_CQE_F_MORE;
>>> +    bool multishot_poll = !(req->poll.events & EPOLLONESHOT);
>>>
>>>       if (!error && req->poll.canceled) {
>>>           error = -ECANCELED;
>>> @@ -4911,6 +4913,9 @@ static bool io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
>>>           req->poll.done = true;
>>>           flags = 0;
>>>       }
>>> +    if (multishot_poll)
>>> +        ctx->multishot_cqes++;
>>> +
>>
>> We need to make sure we do that only for a non-final completion, i.e.
>> not when killing the request, otherwise it'll double-account the last one.
> Hi Pavel, I saw that killing requests like io_poll_remove or async_cancel call io_cqring_fill_event() to create an ECANCELED cqe for the original poll request. So there could be cases like these (even for a single-shot poll request):
>   (1) add poll --> cancel poll, one ECANCELED cqe
>       1 sqe : 1 cqe    all good
>   (2) add poll --> trigger event (queued to task_work) --> cancel poll, one ECANCELED cqe --> task_work runs, another ECANCELED cqe
>       1 sqe : 2 cqes

Those should emit a CQE on behalf of the request they're cancelling
only when it's definitely cancelled and not going to fill one in
itself, e.g. if io_poll_cancel() found it and removed it from
all the lists and the core poll infrastructure.

At least before multi-cqe it should have been working fine.

> I suggest we should only emit one ECANCELED cqe.
> Currently I only account cqes through io_poll_complete(), so ECANCELED cqes from io_poll_remove or async_cancel etc. are not counted.
>> E.g. is a failed __io_cqring_fill_event() in io_poll_complete() fine?
>> Other places?
> A failed __io_cqring_fill_event() doesn't produce a cqe but increments ctx->cached_cq_overflow; as long as either a cqe is produced or cached_cq_overflow is incremented, it is ok.

Not claiming that the case is broken, but cached_cq_overflow is
considered in req_need_defer() as well, so from its perspective there
is not much difference between a fill_event() that succeeds and one
that doesn't.
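
For illustration, the check boils down to simple counter arithmetic. Below is a
userspace toy model only (it ignores how seq is actually derived and just mirrors
the kernel field names); it shows why a posted cqe and an overflowed one look the
same to req_need_defer(), and what the proposed multishot_cqes counter changes:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the counters req_need_defer() looks at; not kernel code. */
struct ctx_model {
        unsigned cached_cq_tail;     /* cqes posted to the ring */
        unsigned cached_cq_overflow; /* cqes that went to the overflow list */
        unsigned multishot_cqes;     /* extra cqes from multishot poll (the patch) */
};

/* Mirrors: seq + ctx->multishot_cqes != cached_cq_tail + cached_cq_overflow */
static bool need_defer(const struct ctx_model *ctx, unsigned seq)
{
        return seq + ctx->multishot_cqes !=
               ctx->cached_cq_tail + ctx->cached_cq_overflow;
}

int main(void)
{
        /* drain request was the 3rd sqe: two requests (sqe0, sqe1) before it */
        unsigned seq = 2;
        struct ctx_model ctx = { 0 };

        /* sqe0 (multishot poll) fired twice, sqe1 has not completed yet */
        ctx.cached_cq_tail = 2;

        /* without the patch: 2 == 2, the drain request would wrongly run */
        ctx.multishot_cqes = 0;
        printf("unpatched: defer=%d\n", need_defer(&ctx, seq));

        /* with the patch the two multishot cqes are discounted: 2+2 != 2 */
        ctx.multishot_cqes = 2;
        printf("patched:   defer=%d\n", need_defer(&ctx, seq));
        return 0;
}

Both a posted cqe and an overflowed one bump the right-hand side by one, which is
why a failed fill_event() doesn't change the verdict; multishot_cqes just cancels
out the extra completions from multishot poll.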

>>
>> Btw, we can use some tests :)
> I'll do more tests.

Perfect!
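
In case it helps, the ordering scenario from the commit message can be poked at
from userspace with something roughly like the sketch below. Untested, assumes a
liburing new enough to have io_uring_prep_poll_multishot(); the fds and user_data
values are purely illustrative and error handling is mostly omitted:

#include <liburing.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        struct __kernel_timespec ts = { .tv_sec = 1 };
        int poll_fds[2], slow_fds[2], seen = 0;
        char buf[8];

        if (pipe(poll_fds) || pipe(slow_fds))
                return 1;
        io_uring_queue_init(8, &ring, 0);

        /* sqe0: multishot poll, may emit one cqe per trigger */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_poll_multishot(sqe, poll_fds[0], POLLIN);
        sqe->user_data = 0;

        /* sqe1: a read that stays pending, nothing is ever written to it */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, slow_fds[0], buf, sizeof(buf), 0);
        sqe->user_data = 1;

        /* sqe2: drained nop, must not run before sqe0 and sqe1 are done */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_nop(sqe);
        sqe->flags |= IOSQE_IO_DRAIN;
        sqe->user_data = 2;

        io_uring_submit(&ring);

        /* trigger the poll; the pipe is never drained, so it may fire
         * more than twice, which only makes the point stronger */
        write(poll_fds[1], "x", 1);
        write(poll_fds[1], "x", 1);

        /* if multishot cqes are counted against the drain sequence,
         * user_data=2 shows up here even though sqe1 never completed */
        while (seen++ < 16 && io_uring_wait_cqe_timeout(&ring, &cqe, &ts) == 0) {
                printf("cqe: user_data=%llu res=%d\n",
                       (unsigned long long)cqe->user_data, cqe->res);
                io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        return 0;
}

With the unpatched accounting, the drained nop (user_data=2) can complete after the
poll has fired twice even though the read (user_data=1) is still pending; with the
fix it should stay deferred.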

>>
>>
>>>       io_commit_cqring(ctx);
>>>       return !(flags & IORING_CQE_F_MORE);
>>>   }
>>>
>>
> 

-- 
Pavel Begunkov

Thread overview: 10+ messages
2021-03-31  9:01 [PATCH for-5.13] io_uring: maintain drain requests' logic Hao Xu
2021-03-31 15:36 ` Jens Axboe
2021-04-01  6:58   ` Hao Xu
2021-03-31 22:06 ` Pavel Begunkov
2021-04-01  6:53   ` Hao Xu
2021-04-01 10:25     ` Pavel Begunkov [this message]
     [not found]       ` <[email protected]>
2021-04-01 22:29         ` Pavel Begunkov
2021-04-03  6:58           ` Hao Xu
2021-04-04 23:07             ` Jens Axboe
2021-04-05 16:11               ` Hao Xu
