* [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Ming Lei @ 2023-09-08  9:30 UTC
To: Jens Axboe, io-uring, linux-block
Cc: Ming Lei, David Howells, Pavel Begunkov, Chengming Zhou

io_wq_put_and_exit() is called from do_exit(), but all FIXED_FILE requests
in io_wq aren't canceled in io_uring_cancel_generic() called from do_exit().
Meanwhile, the io_wq IO code path may share resources with the normal iopoll
code path. So if any HIPRI request is submitted via io_wq, that request may
never get the resources it needs to make progress, given iopoll isn't
possible in io_wq_put_and_exit().

The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
with default null_blk parameters.

Fix it with the following approaches:

- switch to IO_URING_F_NONBLOCK for submitting POLLED IO from io_wq, so
  that requests can be canceled when submitted from an exiting io_wq

- reap completed events before exiting io_wq, so that completed requests
  won't hold resources and prevent other contexts from moving on

Closes: https://lore.kernel.org/linux-block/[email protected]/
Reported-by: David Howells <[email protected]>
Cc: Pavel Begunkov <[email protected]>
Cc: Chengming Zhou <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
---
V3:
	- take a new approach and fix the regression on thread_exit in the
	  liburing tests
	- pass the liburing tests (make runtests)
V2:
	- avoid messing up io_uring_cancel_generic() by adding a new helper
	  for canceling io_wq requests

 io_uring/io_uring.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ad636954abae..95a3d31a1ef1 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
 		}
 	}
 
+	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
+	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
+		issue_flags |= IO_URING_F_NONBLOCK;
+
 	do {
 		ret = io_issue_sqe(req, issue_flags);
 		if (ret != -EAGAIN)
@@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
 		finish_wait(&tctx->wait, &wait);
 	} while (1);
 
+	/*
+	 * Reap events from each ctx, otherwise these requests may take
+	 * resources and prevent other contexts from being moved on.
+	 */
+	xa_for_each(&tctx->xa, index, node)
+		io_iopoll_try_reap_events(node->ctx);
 	io_uring_clean_tctx(tctx);
 	if (cancel_all) {
 		/*
-- 
2.40.1
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Jens Axboe @ 2023-09-08 13:49 UTC
To: Ming Lei, io-uring, linux-block
Cc: David Howells, Pavel Begunkov, Chengming Zhou

On 9/8/23 3:30 AM, Ming Lei wrote:
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index ad636954abae..95a3d31a1ef1 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
>  		}
>  	}
>  
> +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> +		issue_flags |= IO_URING_F_NONBLOCK;
> +

I think this comment deserves to be more descriptive. Normally we
absolutely cannot block for polled IO; it's only OK here because io-wq
is the issuer and not necessarily the poller of it. That generally falls
upon the original issuer to poll these requests.

I think this should be a separate commit, coming before the main fix,
which is below.

> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
>  		finish_wait(&tctx->wait, &wait);
>  	} while (1);
>  
> +	/*
> +	 * Reap events from each ctx, otherwise these requests may take
> +	 * resources and prevent other contexts from being moved on.
> +	 */
> +	xa_for_each(&tctx->xa, index, node)
> +		io_iopoll_try_reap_events(node->ctx);

The main issue here is that if someone isn't polling for them, then we
get to wait for a timeout before they complete. This can delay exit, for
example, as we're now just waiting 30 seconds (or whatever the timeout
is on the underlying device) for them to get timed out before exit can
finish.

Do we just want to move this a bit higher up, where we iterate ctx's
anyway? Not that important, I suspect.
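For the comment itself, something along these lines is more what I'd
expect - just a sketch of the wording against the hunk above, not a
tested patch:

	/*
	 * Normally we must not block for polled IO: the original issuer
	 * is the one expected to reap completions via iopoll, and
	 * blocking in submission would stall that. It is only tolerable
	 * here because io-wq issues on behalf of the original task,
	 * which may be exiting and never poll again, so prefer
	 * -EAGAIN-based cancelation over blocking.
	 */
	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
		issue_flags |= IO_URING_F_NONBLOCK;

-- 
Jens Axboe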
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Ming Lei @ 2023-09-08 14:34 UTC
To: Jens Axboe
Cc: io-uring, linux-block, David Howells, Pavel Begunkov, Chengming Zhou

On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> On 9/8/23 3:30 AM, Ming Lei wrote:
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index ad636954abae..95a3d31a1ef1 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> >  		}
> >  	}
> >  
> > +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > +		issue_flags |= IO_URING_F_NONBLOCK;
> > +
>
> I think this comment deserves to be more descriptive. Normally we
> absolutely cannot block for polled IO; it's only OK here because io-wq

Yeah, we didn't do that until commit 2bc057692599 ("block: don't make
REQ_POLLED imply REQ_NOWAIT"), which actually pushes the
responsibility/risk up to io_uring.

> is the issuer and not necessarily the poller of it. That generally falls
> upon the original issuer to poll these requests.
>
> I think this should be a separate commit, coming before the main fix,
> which is below.

Looks fine. Actually the IO_URING_F_NONBLOCK change isn't a must, and
the approach in V2 doesn't need this change.

> > @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> >  		finish_wait(&tctx->wait, &wait);
> >  	} while (1);
> >  
> > +	/*
> > +	 * Reap events from each ctx, otherwise these requests may take
> > +	 * resources and prevent other contexts from being moved on.
> > +	 */
> > +	xa_for_each(&tctx->xa, index, node)
> > +		io_iopoll_try_reap_events(node->ctx);
>
> The main issue here is that if someone isn't polling for them, then we

That is actually what this patch is addressing, :-)

> get to wait for a timeout before they complete. This can delay exit, for
> example, as we're now just waiting 30 seconds (or whatever the timeout
> is on the underlying device) for them to get timed out before exit can
> finish.

For the issue on null_blk, the device timeout handler provides forward
progress: requests are released, so new IO can be handled.

However, not all devices support timeout, such as virtio device.

Here we just call io_iopoll_try_reap_events() to poll submitted IOs
for releasing resources, so there is no need to rely on the device
timeout handler any more, and the extra exit delay can be avoided.

But io_iopoll_try_reap_events() may not be enough, because the io_wq
associated with the current context can get the released resources
immediately and submit new IOs successfully - then who can poll these
newly submitted IOs? All device resources can end up held by this
(freed) io_wq for nothing.

I guess we may have to take the approach in patch V2 of only canceling
polled IO to avoid the thread_exit regression - or are there other
ideas?

> Do we just want to move this a bit higher up, where we iterate ctx's
> anyway? Not that important, I suspect.

I think that isn't needed; here we only focus on io_wq and polled IO,
which isn't the same as what the iteration code covers - otherwise
io_uring_try_cancel_requests() could become less readable.

Thanks,
Ming
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Jens Axboe @ 2023-09-08 14:44 UTC
To: Ming Lei
Cc: io-uring, linux-block, David Howells, Pavel Begunkov, Chengming Zhou

On 9/8/23 8:34 AM, Ming Lei wrote:
> On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
>> On 9/8/23 3:30 AM, Ming Lei wrote:
[...]
>>> +	/*
>>> +	 * Reap events from each ctx, otherwise these requests may take
>>> +	 * resources and prevent other contexts from being moved on.
>>> +	 */
>>> +	xa_for_each(&tctx->xa, index, node)
>>> +		io_iopoll_try_reap_events(node->ctx);
>>
>> The main issue here is that if someone isn't polling for them, then we
>
> That is actually what this patch is addressing, :-)

Right, that part is obvious :)

>> get to wait for a timeout before they complete. This can delay exit, for
>> example, as we're now just waiting 30 seconds (or whatever the timeout
>> is on the underlying device) for them to get timed out before exit can
>> finish.
>
> For the issue on null_blk, the device timeout handler provides forward
> progress: requests are released, so new IO can be handled.
>
> However, not all devices support timeout, such as virtio device.

That's a bug in the driver; you cannot sanely support polled IO and not
be able to deal with timeouts. Someone HAS to reap the requests, and
there are only two things that can do that - the application doing the
polled IO, or, if that doesn't happen, a timeout.

> Here we just call io_iopoll_try_reap_events() to poll submitted IOs
> for releasing resources, so there is no need to rely on the device
> timeout handler any more, and the extra exit delay can be avoided.
>
> But io_iopoll_try_reap_events() may not be enough, because the io_wq
> associated with the current context can get the released resources
> immediately and submit new IOs successfully - then who can poll these
> newly submitted IOs? All device resources can end up held by this
> (freed) io_wq for nothing.
>
> I guess we may have to take the approach in patch V2 of only canceling
> polled IO to avoid the thread_exit regression - or are there other
> ideas?

Ideally the behavior seems like it should be that if a task goes away,
any pending polled IO it has should be reaped. With the above notion
that a driver supporting poll absolutely must be able to deal with
timeouts, it's not a strict requirement, as we know that requests will
be reaped.

>> Do we just want to move this a bit higher up, where we iterate ctx's
>> anyway? Not that important, I suspect.
>
> I think that isn't needed; here we only focus on io_wq and polled IO,
> which isn't the same as what the iteration code covers - otherwise
> io_uring_try_cancel_requests() could become less readable.

Yeah, this part isn't a big deal at all, more of a stylistic thing.

-- 
Jens Axboe
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Ming Lei @ 2023-09-08 15:25 UTC
To: Jens Axboe
Cc: io-uring, linux-block, David Howells, Pavel Begunkov, Chengming Zhou,
    virtualization, mst, jasowang

On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> On 9/8/23 8:34 AM, Ming Lei wrote:
[...]
> > For the issue on null_blk, the device timeout handler provides forward
> > progress: requests are released, so new IO can be handled.
> >
> > However, not all devices support timeout, such as virtio device.
>
> That's a bug in the driver; you cannot sanely support polled IO and not
> be able to deal with timeouts. Someone HAS to reap the requests, and
> there are only two things that can do that - the application doing the
> polled IO, or, if that doesn't happen, a timeout.

OK, then the device driver timeout handler has the new responsibility of
covering userspace accidents, :-)

We may document this requirement for drivers.

So far the only one should be virtio-blk, and the two virtio storage
drivers never implement a timeout handler.

> > Here we just call io_iopoll_try_reap_events() to poll submitted IOs
> > for releasing resources, so there is no need to rely on the device
> > timeout handler any more, and the extra exit delay can be avoided.
> >
> > But io_iopoll_try_reap_events() may not be enough, because the io_wq
> > associated with the current context can get the released resources
> > immediately and submit new IOs successfully - then who can poll these
> > newly submitted IOs? All device resources can end up held by this
> > (freed) io_wq for nothing.
> >
> > I guess we may have to take the approach in patch V2 of only canceling
> > polled IO to avoid the thread_exit regression - or are there other
> > ideas?
>
> Ideally the behavior seems like it should be that if a task goes away,
> any pending polled IO it has should be reaped. With the above notion
> that a driver supporting poll absolutely must be able to deal with
> timeouts, it's not a strict requirement, as we know that requests will
> be reaped.

Then it looks like the io_uring fix is less important, and I will see if
an easy fix can be figured out; one way is to reap events when exiting
both the current task and the associated io_wq.
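Something like the following helper, called from both exit paths, is
what I have in mind - an untested sketch, with the helper name made up:

	/* Reap pending iopoll events for every ctx of this task. */
	static void io_uring_reap_iopoll(struct io_uring_task *tctx)
	{
		struct io_tctx_node *node;
		unsigned long index;

		xa_for_each(&tctx->xa, index, node)
			io_iopoll_try_reap_events(node->ctx);
	}

Thanks,
Ming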
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Jason Wang @ 2023-09-15  7:04 UTC
To: Ming Lei
Cc: Jens Axboe, io-uring, linux-block, David Howells, Pavel Begunkov,
    Chengming Zhou, virtualization, mst, Stefan Hajnoczi

On Fri, Sep 8, 2023 at 11:25 PM Ming Lei <[email protected]> wrote:
>
> On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
[...]
> > That's a bug in the driver; you cannot sanely support polled IO and not
> > be able to deal with timeouts. Someone HAS to reap the requests, and
> > there are only two things that can do that - the application doing the
> > polled IO, or, if that doesn't happen, a timeout.
>
> OK, then the device driver timeout handler has the new responsibility of
> covering userspace accidents, :-)
>
> We may document this requirement for drivers.
>
> So far the only one should be virtio-blk, and the two virtio storage
> drivers never implement a timeout handler.

Adding Stefan for more comments.

Thanks
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Stefan Hajnoczi @ 2023-09-25 21:17 UTC
To: Jason Wang
Cc: Ming Lei, Jens Axboe, io-uring, linux-block, David Howells,
    Pavel Begunkov, Chengming Zhou, virtualization, mst

On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
> On Fri, Sep 8, 2023 at 11:25 PM Ming Lei <[email protected]> wrote:
> > On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
[...]
> > > That's a bug in the driver; you cannot sanely support polled IO and not
> > > be able to deal with timeouts. Someone HAS to reap the requests, and
> > > there are only two things that can do that - the application doing the
> > > polled IO, or, if that doesn't happen, a timeout.
> >
> > OK, then the device driver timeout handler has the new responsibility of
> > covering userspace accidents, :-)

Sorry, I don't have enough context, so this is probably a silly question:

When an application doesn't reap a polled request, why doesn't the block
layer take care of this in a generic way? I don't see anything
driver-specific about this.

Driver-specific behavior would be sending an abort/cancel upon timeout.
virtio-blk cannot do that because there is no such command in the device
specification at the moment. So simply waiting for the polled request to
complete is the only thing that can be done (aside from resetting the
device), and it's generic behavior.

Thanks,
Stefan
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Ming Lei @ 2023-09-26  1:28 UTC
To: Stefan Hajnoczi
Cc: Jason Wang, Jens Axboe, io-uring, linux-block, David Howells,
    Pavel Begunkov, Chengming Zhou, virtualization, mst

On Mon, Sep 25, 2023 at 05:17:10PM -0400, Stefan Hajnoczi wrote:
> On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
[...]
> Sorry, I don't have enough context, so this is probably a silly question:
>
> When an application doesn't reap a polled request, why doesn't the block
> layer take care of this in a generic way? I don't see anything
> driver-specific about this.

The block layer doesn't have the knowledge to handle that; io_uring knows
the application is exiting, and can help to reap the events.

But the big question is whether there really is an IO timeout for
virtio-blk. If there isn't, the reap done in io_uring may never return
and could cause other issues, so if it is done in io_uring, it can just
be thought of as a sort of improvement.

The real bug fix is still in the device driver; usually only the driver
timeout handler can provide a forward progress guarantee.

> Driver-specific behavior would be sending an abort/cancel upon timeout.
> virtio-blk cannot do that because there is no such command in the device
> specification at the moment. So simply waiting for the polled request to
> complete is the only thing that can be done (aside from resetting the
> device), and it's generic behavior.

Then it looks unsafe to support IO polling for virtio-blk; maybe disable
it by default now until the virtio-blk spec starts to support IO abort?

Thanks,
Ming
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Stefan Hajnoczi @ 2023-09-26 14:55 UTC
To: Ming Lei, Suwan Kim
Cc: Jason Wang, Jens Axboe, io-uring, linux-block, David Howells,
    Pavel Begunkov, Chengming Zhou, virtualization, mst

On Tue, Sep 26, 2023 at 09:28:15AM +0800, Ming Lei wrote:
> On Mon, Sep 25, 2023 at 05:17:10PM -0400, Stefan Hajnoczi wrote:
[...]
> > When an application doesn't reap a polled request, why doesn't the block
> > layer take care of this in a generic way? I don't see anything
> > driver-specific about this.
>
> The block layer doesn't have the knowledge to handle that; io_uring knows
> the application is exiting, and can help to reap the events.

I thought the discussion was about I/O timeouts in general, but here
you're only mentioning application exit. Are we talking about I/O
timeouts, or purely about cleaning up I/O requests when an application
exits?

> But the big question is whether there really is an IO timeout for
> virtio-blk. If there isn't, the reap done in io_uring may never return
> and could cause other issues, so if it is done in io_uring, it can just
> be thought of as a sort of improvement.

virtio-blk drivers have no way of specifying timeouts on the device or
aborting/canceling requests. virtio-blk devices may fail requests if
they implement an internal timeout mechanism (e.g. the host kernel fails
requests after a host timeout), but this is not controlled by the
driver, and there is no guarantee that the device has an internal
timeout. The driver will not treat these timed-out requests in a special
way - the application will see EIO errors.

> The real bug fix is still in the device driver; usually only the driver
> timeout handler can provide a forward progress guarantee.

The only recourse for hung I/O on a virtio-blk device is a device reset,
but that is often implemented as a synchronous operation and is likely
to block until in-flight I/O finishes.

An admin virtqueue could be added to virtio-blk along with an abort
command, but existing devices will not support the new hardware
interface.

However, I'm not sure a new abort command would solve the problem.
virtio-blk devices are often implemented as userspace processes and are
limited by the availability of I/O cancellation APIs. Maybe my
understanding is outdated, but I believe userspace processes cannot
force I/O to abort. For example, the man page says the following for
IORING_OP_ASYNC_CANCEL:

  In general, requests that are interruptible (like socket IO) will get
  canceled, while disk IO requests cannot be canceled if already
  started.

Even if an abort command is added to virtio-blk, won't we just end up in
this situation:

1. The guest kernel invokes ->timeout() on virtio_blk.ko.

2. virtio_blk.ko sends an abort command to the device and resets the
   timeout.

3. The device submits IORING_OP_ASYNC_CANCEL, but it cannot cancel an
   in-flight disk I/O request.

4. ...time passes...

5. The guest kernel invokes ->timeout() again and virtio_blk.ko decides
   the abort was ineffective. The entire device must be reset.

? (I based this on the ->timeout() logic in the nvme driver.)

If we're effectively just going to wait for twice the timeout duration
and then reset the device, then why go through the trouble of sending
the abort command?

I'm hoping you'll tell me that IORING_OP_ASYNC_CANCEL is in fact able to
cancel disk I/O nowadays :).

> > Driver-specific behavior would be sending an abort/cancel upon timeout.
> > virtio-blk cannot do that because there is no such command in the device
> > specification at the moment. So simply waiting for the polled request to
> > complete is the only thing that can be done (aside from resetting the
> > device), and it's generic behavior.
>
> Then it looks unsafe to support IO polling for virtio-blk; maybe disable
> it by default now until the virtio-blk spec starts to support IO abort?

The virtio_blk.ko poll_queues module parameter is already set to 0 by
default. Poll queues are only available when the user has explicitly set
the module parameter.

I have added Suwan Kim to the email thread. Suwan Kim added poll queue
support to the virtio-blk driver and may have a preference for how to
proceed.
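For concreteness, the ->timeout() pattern in steps 1-5 above would look
roughly like this on the driver side. This is purely a hypothetical
sketch: virtio-blk has no abort command today, so vblk_send_abort(),
vblk_schedule_device_reset(), the request struct, and its abort_sent
flag are all assumed names, not real APIs:

	static enum blk_eh_timer_return vblk_timeout(struct request *rq)
	{
		/* hypothetical per-request driver data */
		struct vblk_request *vbr = blk_mq_rq_to_pdu(rq);

		if (!vbr->abort_sent) {
			/* Steps 1-2: try an abort and re-arm the timer. */
			vbr->abort_sent = true;
			vblk_send_abort(rq);	/* would need a spec extension */
			return BLK_EH_RESET_TIMER;
		}

		/* Step 5: the abort was ineffective; reset the device. */
		vblk_schedule_device_reset(rq->q);
		return BLK_EH_DONE;
	}

Stefan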
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Pavel Begunkov @ 2023-09-08 15:46 UTC
To: Jens Axboe, Ming Lei, io-uring, linux-block
Cc: David Howells, Chengming Zhou

On 9/8/23 14:49, Jens Axboe wrote:
> On 9/8/23 3:30 AM, Ming Lei wrote:
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index ad636954abae..95a3d31a1ef1 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
>>  		}
>>  	}
>>  
>> +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
>> +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
>> +		issue_flags |= IO_URING_F_NONBLOCK;
>> +
>
> I think this comment deserves to be more descriptive. Normally we
> absolutely cannot block for polled IO; it's only OK here because io-wq
> is the issuer and not necessarily the poller of it. That generally falls
> upon the original issuer to poll these requests.
>
> I think this should be a separate commit, coming before the main fix,
> which is below.
>
>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
>>  		finish_wait(&tctx->wait, &wait);
>>  	} while (1);
>>  
>> +	/*
>> +	 * Reap events from each ctx, otherwise these requests may take
>> +	 * resources and prevent other contexts from being moved on.
>> +	 */
>> +	xa_for_each(&tctx->xa, index, node)
>> +		io_iopoll_try_reap_events(node->ctx);
>
> The main issue here is that if someone isn't polling for them, then we
> get to wait for a timeout before they complete. This can delay exit, for
> example, as we're now just waiting 30 seconds (or whatever the timeout
> is on the underlying device) for them to get timed out before exit can
> finish.

Ok, our case is that userspace crashes and doesn't poll for its IO.
How would that block io-wq termination? We send a signal and workers
should exit, either by queueing up the request for iopoll (and then
we queue it into the io_uring iopoll list and the worker immediately
returns back and presumably exits), or it fails because of the signal
and returns back.

That should kill all io-wq workers and make exit go forward. Then the
io_uring file will be destroyed, and the ring exit work will be polling
via:

io_ring_exit_work()
-- io_uring_try_cancel_requests()
-- io_iopoll_try_reap_events()

What am I missing? Does the blocking change make io-wq iopoll
completions inside the block layer? Was it by any chance with the recent
"do_exit() waiting for ring destruction" patches?

-- 
Pavel Begunkov
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Ming Lei @ 2023-09-09  1:43 UTC
To: Pavel Begunkov
Cc: Jens Axboe, io-uring, linux-block, David Howells, Chengming Zhou

On Fri, Sep 08, 2023 at 04:46:15PM +0100, Pavel Begunkov wrote:
> On 9/8/23 14:49, Jens Axboe wrote:
> > On 9/8/23 3:30 AM, Ming Lei wrote:
[...]
> > The main issue here is that if someone isn't polling for them, then we
> > get to wait for a timeout before they complete. This can delay exit, for
> > example, as we're now just waiting 30 seconds (or whatever the timeout
> > is on the underlying device) for them to get timed out before exit can
> > finish.
>
> Ok, our case is that userspace crashes and doesn't poll for its IO.
> How would that block io-wq termination? We send a signal and workers
> should exit, either by queueing up the request for iopoll (and then

It depends on how userspace handles the signal. Take t/io_uring:
s->finish is set to true in the INT signal handler, and two cases may
happen:

1) s->finish is observed immediately, then this pthread exits and leaves
polled requests in ctx->iopoll_list

2) s->finish isn't observed immediately, and the thread keeps submitting
& polling; if any IO can't be submitted because there aren't enough
resources, there can be a busy spin, because submitter_uring_fn() waits
for inflight IO.

So if there are two pthreads (A, B), each setting up its own io_uring
context and submitting & polling IO on the same block device: if 1)
happens in A, all device tags can be held for nothing. If 2) happens in
B, the busy spin prevents exit() of pthread B.

Then the hang is caused: the exit work can't be scheduled at all,
because pthread B doesn't exit.

> we queue it into the io_uring iopoll list and the worker immediately
> returns back and presumably exits), or it fails because of the signal
> and returns back.
>
> That should kill all io-wq workers and make exit go forward. Then the
> io_uring file will be destroyed, and the ring exit work will be polling
> via:
>
> io_ring_exit_work()
> -- io_uring_try_cancel_requests()
> -- io_iopoll_try_reap_events()
>
> What am I missing? Does the blocking change make io-wq iopoll
> completions inside the block layer? Was it by any chance with the recent
> "do_exit() waiting for ring destruction" patches?

In short, it is a resource dependency issue for polled IO.
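To make the two cases concrete, the t/io_uring structure being described
is roughly the following - paraphrased from memory as a sketch, not the
literal tool source:

	struct submitter {
		volatile int finish;
		/* ring, buffers, stats, ... elided */
	};

	static struct submitter submitters[4];		/* -n4 */

	static void submit_and_poll(struct submitter *s)
	{
		/* queue SQEs, io_uring_enter(), reap CQEs; elided */
	}

	static void sig_int(int sig)
	{
		int i;

		/* case 1): a thread may exit with polled reqs inflight */
		for (i = 0; i < 4; i++)
			submitters[i].finish = 1;
	}

	static void *submitter_uring_fn(void *arg)
	{
		struct submitter *s = arg;

		while (!s->finish) {
			/*
			 * case 2): when submission can't get resources,
			 * this busy-spins waiting for inflight IO that
			 * nobody else will reap.
			 */
			submit_and_poll(s);
		}
		return NULL;
	}

Thanks,
Ming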
* Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

From: Pavel Begunkov @ 2023-09-13 12:53 UTC
To: Ming Lei
Cc: Jens Axboe, io-uring, linux-block, David Howells, Chengming Zhou

On 9/9/23 02:43, Ming Lei wrote:
> On Fri, Sep 08, 2023 at 04:46:15PM +0100, Pavel Begunkov wrote:
[...]
>> Ok, our case is that userspace crashes and doesn't poll for its IO.
>> How would that block io-wq termination? We send a signal and workers
>> should exit, either by queueing up the request for iopoll (and then
>
> It depends on how userspace handles the signal. Take t/io_uring:
> s->finish is set to true in the INT signal handler, and two cases may
> happen:
>
> 1) s->finish is observed immediately, then this pthread exits and leaves
> polled requests in ctx->iopoll_list

fwiw, I'm in favour of trying to iopoll there just because it's nicer
this way, but I still want to get to the bottom of it.

> 2) s->finish isn't observed immediately, and the thread keeps submitting
> & polling; if any IO can't be submitted because there aren't enough
> resources, there can be a busy spin, because submitter_uring_fn() waits
> for inflight IO.
>
> So if there are two pthreads (A, B), each setting up its own io_uring
> context and submitting & polling IO on the same block device: if 1)
> happens in A, all device tags can be held for nothing. If 2) happens in
> B, the busy spin prevents exit() of pthread B.

Thanks, that sounds clear now. So, nobody closes the first ring, hence
it's not destroyed even after pthread A exits, and the 2nd ring cannot
progress. I agree with the judgement about timeouts and that it looks
like user mismanagement.

-- 
Pavel Begunkov