* [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests @ 2025-01-12 14:33 Bui Quang Minh 2025-01-12 15:45 ` lizetao 2025-01-13 7:49 ` lizetao 0 siblings, 2 replies; 7+ messages in thread
From: Bui Quang Minh @ 2025-01-12 14:33 UTC (permalink / raw)
To: linux-kernel
Cc: Bui Quang Minh, Jens Axboe, Pavel Begunkov, io-uring, syzbot+3c750be01dab672c513d, Li Zetao

In io_uring_try_cancel_requests, we check whether sq_data->thread == current to determine if the function is called by the SQPOLL thread to do iopoll when IORING_SETUP_SQPOLL is set. This check can race with the SQPOLL thread termination.

io_uring_try_cancel_requests is called from 2 places: io_uring_cancel_generic and io_ring_exit_work. In io_uring_cancel_generic, we already know whether the current task is the SQPOLL thread. In io_ring_exit_work, even if the SQPOLL thread reaches this path, we don't need to iopoll there and can leave that to io_uring_cancel_generic.

So, to avoid the racy check, this commit adds a boolean flag to io_uring_try_cancel_requests that determines whether we need to do iopoll inside the function, and sets this flag in io_uring_cancel_generic only when the current task is the SQPOLL thread.
Reported-by: [email protected] Reported-by: Li Zetao <[email protected]> Signed-off-by: Bui Quang Minh <[email protected]> --- io_uring/io_uring.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-) diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index ff691f37462c..f28ea1254143 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -143,7 +143,8 @@ struct io_defer_entry { static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx, struct io_uring_task *tctx, - bool cancel_all); + bool cancel_all, + bool force_iopoll); static void io_queue_sqe(struct io_kiocb *req); @@ -2898,7 +2899,12 @@ static __cold void io_ring_exit_work(struct work_struct *work) if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) io_move_task_work_from_local(ctx); - while (io_uring_try_cancel_requests(ctx, NULL, true)) + /* + * Even if SQPOLL thread reaches this path, don't force + * iopoll here, let the io_uring_cancel_generic handle + * it. + */ + while (io_uring_try_cancel_requests(ctx, NULL, true, false)) cond_resched(); if (ctx->sq_data) { @@ -3066,7 +3072,8 @@ static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx) static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx, struct io_uring_task *tctx, - bool cancel_all) + bool cancel_all, + bool force_iopoll) { struct io_task_cancel cancel = { .tctx = tctx, .all = cancel_all, }; enum io_wq_cancel cret; @@ -3096,7 +3103,7 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx, /* SQPOLL thread does its own polling */ if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) || - (ctx->sq_data && ctx->sq_data->thread == current)) { + force_iopoll) { while (!wq_list_empty(&ctx->iopoll_list)) { io_iopoll_try_reap_events(ctx); ret = true; @@ -3169,13 +3176,15 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd) continue; loop |= io_uring_try_cancel_requests(node->ctx, current->io_uring, - cancel_all); + cancel_all, + false); } } else { 
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) loop |= io_uring_try_cancel_requests(ctx, current->io_uring, - cancel_all); + cancel_all, + true); } if (loop) { -- 2.43.0 ^ permalink raw reply related [flat|nested] 7+ messages in thread
* RE: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 14:33 [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests Bui Quang Minh @ 2025-01-12 15:45 ` lizetao 2025-01-12 16:14 ` Bui Quang Minh 2025-01-13 7:49 ` lizetao 1 sibling, 1 reply; 7+ messages in thread
From: lizetao @ 2025-01-12 15:45 UTC (permalink / raw)
To: Bui Quang Minh, [email protected]
Cc: Jens Axboe, Pavel Begunkov, [email protected], [email protected]

Hi,

> -----Original Message-----
> From: Bui Quang Minh <[email protected]>
> Sent: Sunday, January 12, 2025 10:34 PM
> Subject: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests
>
> [...]
>
> Reported-by: [email protected]
> Reported-by: Li Zetao <[email protected]>
> Signed-off-by: Bui Quang Minh <[email protected]>
> ---
> [...]
>
> @@ -2898,7 +2899,12 @@ static __cold void io_ring_exit_work(struct work_struct *work)
> 	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> 		io_move_task_work_from_local(ctx);
>
> -	while (io_uring_try_cancel_requests(ctx, NULL, true))
> +	/*
> +	 * Even if SQPOLL thread reaches this path, don't force
> +	 * iopoll here, let the io_uring_cancel_generic handle
> +	 * it.

Just curious, will sq_thread enter this io_ring_exit_work path?
> +	 */
> +	while (io_uring_try_cancel_requests(ctx, NULL, true, false))
> 		cond_resched();
>
> [...]

Maybe you missed something; as Begunkov mentioned on the previous version of your patch:

io_uring_cancel_generic
	WARN_ON_ONCE(sqd && sqd->thread != current);

This WARN_ON_ONCE will never be triggered, so you could remove it.

---
Li Zetao
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 15:45 ` lizetao @ 2025-01-12 16:14 ` Bui Quang Minh 2025-01-12 21:15 ` Pavel Begunkov 0 siblings, 1 reply; 7+ messages in thread
From: Bui Quang Minh @ 2025-01-12 16:14 UTC (permalink / raw)
To: lizetao, [email protected]
Cc: Jens Axboe, Pavel Begunkov, [email protected], [email protected]

On 1/12/25 22:45, lizetao wrote:
> Hi,
>
>> -----Original Message-----
>> From: Bui Quang Minh <[email protected]>
>> Sent: Sunday, January 12, 2025 10:34 PM
>> Subject: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests
>>
>> [...]
>>
>> Reported-by: [email protected]
>> Reported-by: Li Zetao <[email protected]>
>> Signed-off-by: Bui Quang Minh <[email protected]>
>> ---
>> [...]
>>
>> -	while (io_uring_try_cancel_requests(ctx, NULL, true))
>> +	/*
>> +	 * Even if SQPOLL thread reaches this path, don't force
>> +	 * iopoll here, let the io_uring_cancel_generic handle
>> +	 * it.
>
> Just curious, will sq_thread enter this io_ring_exit_work path?

AFAIK, yes. The SQPOLL thread is created with create_io_thread, which creates a new task with CLONE_FILES, so all the open files are shared. There can be a case where the parent closes its io_uring file and the SQPOLL thread becomes the only owner of that file, so it can reach this path when terminating.
>> +	 */
>> +	while (io_uring_try_cancel_requests(ctx, NULL, true, false))
>> 		cond_resched();
>>
>> [...]
>
> Maybe you miss something, just like Begunkov mentioned in your last version patch:
>
> io_uring_cancel_generic
> 	WARN_ON_ONCE(sqd && sqd->thread != current);
>
> This WARN_ON_ONCE will never be triggered, so you could remove it.

He meant that we don't need to annotate the sqd->thread access in this debug check. io_uring_cancel_generic assumes that sqd is non-NULL only when it is called by a SQPOLL thread, and the check is there to ensure this assumption. A data race happens only when this function is called by tasks other than the SQPOLL thread, which can race with the SQPOLL termination.
However, sqd is non-NULL only when this function is called by the SQPOLL thread. In a normal situation that follows io_uring_cancel_generic's assumption, the data race cannot happen. And if the assumption is broken, the warning is almost always triggered even when the data race happens, so we can ignore the race here.

Thanks,
Quang Minh.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 16:14 ` Bui Quang Minh @ 2025-01-12 21:15 ` Pavel Begunkov 2025-01-13 4:40 ` lizetao 2025-01-13 15:33 ` Bui Quang Minh 0 siblings, 2 replies; 7+ messages in thread
From: Pavel Begunkov @ 2025-01-12 21:15 UTC (permalink / raw)
To: Bui Quang Minh, lizetao, [email protected]
Cc: Jens Axboe, [email protected], [email protected]

On 1/12/25 16:14, Bui Quang Minh wrote:
...
>>> -	while (io_uring_try_cancel_requests(ctx, NULL, true))
>>> +	/*
>>> +	 * Even if SQPOLL thread reaches this path, don't force
>>> +	 * iopoll here, let the io_uring_cancel_generic handle
>>> +	 * it.
>>
>> Just curious, will sq_thread enter this io_ring_exit_work path?
>
> AFAIK, yes. The SQPOLL thread is created with create_io_thread, this function creates a new task with CLONE_FILES. So all the open files is shared. There will be case that the parent closes its io_uring file and SQPOLL thread become the only owner of that file. So it can reach this path when terminating.

The function is run by a separate kthread, the sqpoll task doesn't call it directly.

[...]

>> Maybe you miss something, just like Begunkov mentioned in your last version patch:
>>
>> io_uring_cancel_generic
>> 	WARN_ON_ONCE(sqd && sqd->thread != current);
>>
>> This WARN_ON_ONCE will never be triggered, so you could remove it.
>
> He meant that we don't need to annotate sqd->thread access in this debug check. The io_uring_cancel_generic function has assumption that the sgd is not NULL only when it's called by a SQPOLL thread. So the check means to ensure this assumption.
> A data race happens only when this function is called by other tasks than the SQPOLL thread, so it can race with the SQPOLL termination. However, the sgd is not NULL only when this function is called by SQPOLL thread. In normal situation following the io_uring_cancel_generic's assumption, the data race cannot happen. And in case the assumption is broken, the warning almost always is triggered even if data race happens. So we can ignore the race here.

Right. And that's the point of warnings: they're supposed to be untriggerable, otherwise there is a problem with the code that needs to be fixed.

--
Pavel Begunkov
^ permalink raw reply [flat|nested] 7+ messages in thread
* RE: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 21:15 ` Pavel Begunkov @ 2025-01-13 4:40 ` lizetao 2025-01-13 15:33 ` Bui Quang Minh 1 sibling, 0 replies; 7+ messages in thread
From: lizetao @ 2025-01-13 4:40 UTC (permalink / raw)
To: Pavel Begunkov, Bui Quang Minh, [email protected]
Cc: Jens Axboe, [email protected], [email protected]

Hi,

> -----Original Message-----
> From: Pavel Begunkov <[email protected]>
> Sent: Monday, January 13, 2025 5:16 AM
> Subject: Re: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests
>
> On 1/12/25 16:14, Bui Quang Minh wrote:
> ...
>>> Just curious, will sq_thread enter this io_ring_exit_work path?
>>
>> AFAIK, yes. The SQPOLL thread is created with create_io_thread, this function creates a new task with CLONE_FILES. So all the open files is shared. There will be case that the parent closes its io_uring file and SQPOLL thread become the only owner of that file. So it can reach this path when terminating.
>
> The function is run by a separate kthread, the sqpoll task doesn't call it directly.

I also think so; the sqpoll task may not call io_ring_exit_work() directly.

> [...]
> [...]
>
>> He meant that we don't need to annotate sqd->thread access in this debug check. [...] So we can ignore the race here.
>
> Right. And that's the point of warnings, they're supposed to be untriggerable, otherwise there is a problem with the code that needs to be fixed.

Okay, I understand the meaning of this WARN.

> --
> Pavel Begunkov

---
Li Zetao
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 21:15 ` Pavel Begunkov 2025-01-13 4:40 ` lizetao @ 2025-01-13 15:33 ` Bui Quang Minh 1 sibling, 0 replies; 7+ messages in thread
From: Bui Quang Minh @ 2025-01-13 15:33 UTC (permalink / raw)
To: Pavel Begunkov, lizetao, [email protected]
Cc: Jens Axboe, [email protected], [email protected]

On 1/13/25 04:15, Pavel Begunkov wrote:
> On 1/12/25 16:14, Bui Quang Minh wrote:
> ...
>>> Just curious, will sq_thread enter this io_ring_exit_work path?
>>
>> AFAIK, yes. The SQPOLL thread is created with create_io_thread, this function creates a new task with CLONE_FILES. So all the open files is shared. There will be case that the parent closes its io_uring file and SQPOLL thread become the only owner of that file. So it can reach this path when terminating.
>
> The function is run by a separate kthread, the sqpoll task doesn't call it directly.

Yeah, io_uring_release can be called in the sqpoll thread, but io_ring_exit_work is queued on the io_uring workqueue, so that function is executed in a kthread worker. I will update the comment and the commit message.

Thanks,
Quang Minh.
^ permalink raw reply [flat|nested] 7+ messages in thread
* RE: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests 2025-01-12 14:33 [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests Bui Quang Minh 2025-01-12 15:45 ` lizetao @ 2025-01-13 7:49 ` lizetao 1 sibling, 0 replies; 7+ messages in thread
From: lizetao @ 2025-01-13 7:49 UTC (permalink / raw)
To: Bui Quang Minh, [email protected]
Cc: Jens Axboe, Pavel Begunkov, [email protected], [email protected]

> -----Original Message-----
> From: Bui Quang Minh <[email protected]>
> Sent: Sunday, January 12, 2025 10:34 PM
> Subject: [PATCH] io_uring: simplify the SQPOLL thread check when cancelling requests
>
> [...]

Reviewed-by: Li Zetao <[email protected]>

---
Li Zetao
^ permalink raw reply [flat|nested] 7+ messages in thread