From: Ming Lei <[email protected]>
To: Jens Axboe <[email protected]>
Cc: [email protected], [email protected],
David Howells <[email protected]>,
Pavel Begunkov <[email protected]>,
Chengming Zhou <[email protected]>,
[email protected], [email protected],
[email protected], [email protected]
Subject: Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
Date: Fri, 8 Sep 2023 23:25:08 +0800 [thread overview]
Message-ID: <ZPs81IAYfB8J78Pv@fedora> (raw)
In-Reply-To: <[email protected]>
On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> On 9/8/23 8:34 AM, Ming Lei wrote:
> > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> >> On 9/8/23 3:30 AM, Ming Lei wrote:
> >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> >>> index ad636954abae..95a3d31a1ef1 100644
> >>> --- a/io_uring/io_uring.c
> >>> +++ b/io_uring/io_uring.c
> >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> >>> }
> >>> }
> >>>
> >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> >>> + issue_flags |= IO_URING_F_NONBLOCK;
> >>> +
> >>
> >> I think this comment deserves to be more descriptive. Normally we
> >> absolutely cannot block for polled IO; it's only OK here because io-wq
> >
> > Yeah, we didn't do that until commit 2bc057692599 ("block: don't make REQ_POLLED
> > imply REQ_NOWAIT"), which actually pushes the responsibility/risk up to
> > io_uring.
> >
> >> is the issuer and not necessarily the poller of it. That generally falls
> >> upon the original issuer to poll these requests.
> >>
> >> I think this should be a separate commit, coming before the main fix
> >> which is below.
> >
> > Looks fine. Actually the IO_URING_F_NONBLOCK change isn't a must, and the
> > approach in V2 doesn't need it.
> >
> >>
> >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> >>> finish_wait(&tctx->wait, &wait);
> >>> } while (1);
> >>>
> >>> + /*
> >>> + * Reap events from each ctx, otherwise these requests may take
> >>> + * resources and prevent other contexts from being moved on.
> >>> + */
> >>> + xa_for_each(&tctx->xa, index, node)
> >>> + io_iopoll_try_reap_events(node->ctx);
> >>
> >> The main issue here is that if someone isn't polling for them, then we
> >
> > That is actually what this patch is addressing, :-)
>
> Right, that part is obvious :)
>
> >> get to wait for a timeout before they complete. This can delay exit, for
> >> example, as we're now just waiting 30 seconds (or whatever the timeout
> >> is on the underlying device) for them to get timed out before exit can
> >> finish.
> >
> > For the issue on null_blk, the device timeout handler provides forward
> > progress: requests are released, so new IO can be handled.
> >
> > However, not all devices support timeouts; virtio devices, for example, do not.
>
> That's a bug in the driver; you cannot sanely support polled IO and not
> be able to deal with timeouts. Someone HAS to reap the requests and
> there are only two things that can do that: the application doing the
> polled IO, or, if that doesn't happen, a timeout.
OK, then the device driver's timeout handler has the new responsibility of
covering userspace accidents, :-)

We may need to document this requirement for drivers.

So far the only affected one should be virtio-blk, and neither of the two
virtio storage drivers implements a timeout handler.
>
> > Here we just call io_iopoll_try_reap_events() to poll submitted IOs and
> > release their resources, so there is no need to rely on the device timeout
> > handler any more, and the extra exit delay can be avoided.
> >
> > But io_iopoll_try_reap_events() may not be enough: the io_wq associated
> > with the current context can grab the released resources immediately and
> > submit new IOs successfully, but then nobody polls these newly submitted
> > IOs, so all device resources can end up held by this (soon to be freed)
> > io_wq for nothing.
> >
> > I guess we may have to take the approach in patch V2 and only cancel
> > polled IO to avoid the thread_exit regression, unless there are other
> > ideas?
>
> Ideally the behavior seems like it should be that if a task goes away,
> any pending polled IO it has should be reaped. With the above notion
> that a driver supporting poll absolutely must be able to deal with
> timeouts, it's not a strict requirement as we know that requests will be
> reaped.
Then it looks like the io_uring fix is less important, and I will see if an
easy fix can be figured out. One way is to reap events when exiting both the
current task and the associated io_wq.
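Something along the lines of the following untested sketch, just to show the
idea (the helper name is made up, and the io-wq side call site still needs to
be figured out):

/*
 * Reap pending iopoll events on every ctx tracked by this task context.
 * Meant to be called both from io_uring_cancel_generic() at task exit
 * and again around io_wq_put_and_exit(), so requests issued by io-wq
 * workers also get reaped instead of waiting for a device timeout.
 */
static __cold void io_uring_try_reap_iopoll(struct io_uring_task *tctx)
{
	struct io_tctx_node *node;
	unsigned long index;

	xa_for_each(&tctx->xa, index, node)
		io_iopoll_try_reap_events(node->ctx);
}
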
Thanks,
Ming