From: Xiaoguang Wang <[email protected]>
To: Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected]
Cc: [email protected]
Subject: Re: [PATCH] io_uring: hold uring_lock to complete failed polled io in io_wq_submit_work()
Date: Wed, 23 Dec 2020 10:12:05 +0800 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
hi,
> On 12/20/20 12:36 PM, Pavel Begunkov wrote:
>> On 20/12/2020 19:34, Pavel Begunkov wrote:
>>> On 14/12/2020 15:49, Xiaoguang Wang wrote:
>>>> io_iopoll_complete() does not hold completion_lock to complete polled
>>>> io, so in io_wq_submit_work() we cannot call io_req_complete() directly
>>>> to complete polled io; otherwise there may be concurrent access to the
>>>> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
>>>> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
>>>> this issue, but Pavel reported that, apart from rw, IOPOLL can also
>>>> handle buffer registration/unregistration requests
>>>> (IORING_OP_PROVIDE_BUFFERS or IORING_OP_REMOVE_BUFFERS), so that fix
>>>> is incomplete.
>>>>
>>>> Given that io_iopoll_complete() is always called under uring_lock, for
>>>> polled io we can also take uring_lock here to fix this issue.
>>>
>>> This returns it to the state it was in before the fix, plus mutex
>>> locking for IOPOLL, and that's much better than leaving it half-broken
>>> as it is now.
>>
>> btw, the comments are over 80 columns, but that's minor.
>
> I fixed that up, but I don't particularly like how 'req' is used after
> calling complete. How about the below variant - same as before, just
> using the ctx instead to determine if we need to lock it or not.
It looks better, thanks.
Regards,
Xiaoguang Wang
>
>
> commit 253b60e7d8adcb980be91f77e64968a58d836b5e
> Author: Xiaoguang Wang <[email protected]>
> Date: Mon Dec 14 23:49:41 2020 +0800
>
> io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
>
> io_iopoll_complete() does not hold completion_lock to complete polled io,
> so in io_wq_submit_work() we cannot call io_req_complete() directly to
> complete polled io; otherwise there may be concurrent access to the
> cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
> ("io_uring: always let io_iopoll_complete() complete polled io") fixed
> this issue, but Pavel reported that, apart from rw, IOPOLL can also
> handle buffer registration/unregistration requests
> (IORING_OP_PROVIDE_BUFFERS or IORING_OP_REMOVE_BUFFERS), so that fix is
> incomplete.
>
> Given that io_iopoll_complete() is always called under uring_lock, for
> polled io we can also take uring_lock here to fix this issue.
>
> Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
> Cc: <[email protected]> # 5.5+
> Signed-off-by: Xiaoguang Wang <[email protected]>
> Reviewed-by: Pavel Begunkov <[email protected]>
> [axboe: don't deref 'req' after completing it]
> Signed-off-by: Jens Axboe <[email protected]>
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index b27f61e3e0d6..0a8cf3fad955 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6332,19 +6332,28 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
> }
>
> if (ret) {
> + struct io_ring_ctx *lock_ctx = NULL;
> +
> + if (req->ctx->flags & IORING_SETUP_IOPOLL)
> + lock_ctx = req->ctx;
> +
> /*
> - * io_iopoll_complete() does not hold completion_lock to complete
> - * polled io, so here for polled io, just mark it done and still let
> - * io_iopoll_complete() complete it.
> + * io_iopoll_complete() does not hold completion_lock to
> + * complete polled io, so here for polled io we cannot call
> + * io_req_complete() directly; otherwise there may be concurrent
> + * access to cqring, defer_list, etc., which is not safe. Given
> + * that io_iopoll_complete() is always called under uring_lock,
> + * for polled io we also take uring_lock here to complete
> + * it.
> */
> - if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> - struct kiocb *kiocb = &req->rw.kiocb;
> + if (lock_ctx)
> + mutex_lock(&lock_ctx->uring_lock);
>
> - kiocb_done(kiocb, ret, NULL);
> - } else {
> - req_set_fail_links(req);
> - io_req_complete(req, ret);
> - }
> + req_set_fail_links(req);
> + io_req_complete(req, ret);
> +
> + if (lock_ctx)
> + mutex_unlock(&lock_ctx->uring_lock);
> }
>
> return io_steal_work(req);
>
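For anyone following the discussion outside the kernel tree, here is a
minimal standalone sketch (userspace C with pthreads) of the
conditional-locking idiom the patch uses. The names fake_ctx,
FAKE_SETUP_IOPOLL, complete_request and submit_work_error_path are
made-up illustrative names, not kernel APIs:

/* Illustrative sketch of the conditional-locking idiom in the patch
 * above; all identifiers here are invented for this example, not
 * kernel APIs. Compile with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

#define FAKE_SETUP_IOPOLL 0x1

struct fake_ctx {
	unsigned flags;
	pthread_mutex_t uring_lock;
};

static void complete_request(struct fake_ctx *ctx, int ret)
{
	/* Stand-in for io_req_complete(): in the real code this must
	 * run under uring_lock when the ring was set up for IOPOLL,
	 * because the polling side completes requests under that
	 * same lock. */
	printf("completed with ret=%d\n", ret);
}

static void submit_work_error_path(struct fake_ctx *ctx, int ret)
{
	struct fake_ctx *lock_ctx = NULL;

	/* Capture the lock decision once, up front, so the lock and
	 * unlock sites can never disagree. */
	if (ctx->flags & FAKE_SETUP_IOPOLL)
		lock_ctx = ctx;

	if (lock_ctx)
		pthread_mutex_lock(&lock_ctx->uring_lock);

	complete_request(ctx, ret);
	/* Nothing dereferences the completed request past this point;
	 * only the separately remembered lock_ctx is used. */

	if (lock_ctx)
		pthread_mutex_unlock(&lock_ctx->uring_lock);
}

int main(void)
{
	struct fake_ctx ctx = { .flags = FAKE_SETUP_IOPOLL,
				.uring_lock = PTHREAD_MUTEX_INITIALIZER };

	submit_work_error_path(&ctx, -5); /* e.g. -EIO */
	return 0;
}

The point of the idiom, as in Jens's variant of the patch, is that the
lock decision is taken from the ctx before completion, so the request
itself never has to be touched after it has been completed.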