* [PATCH 5.10] io_uring: don't forget to task-cancel drained reqs
@ 2020-11-05 14:06 Pavel Begunkov
From: Pavel Begunkov @ 2020-11-05 14:06 UTC (permalink / raw)
To: Jens Axboe, io-uring
If a long-standing request from one task blocks execution of deferred
requests, and the defer list contains requests from another task (all
without files), then __io_uring_task_cancel() called by that second task
will sleep until the long-standing request completes, and that may take
arbitrarily long.
E.g.
tsk1: req1/read(empty_pipe) -> tsk2: req(DRAIN)
Then __io_uring_task_cancel(tsk2) waits for req1 completion.
It even seems possible to manufacture a complicated case, with many
tasks sharing many rings, that locks them up forever.
Cancel deferred requests for __io_uring_task_cancel() as well.
Signed-off-by: Pavel Begunkov <[email protected]>
---
Looks like I can't finish refactoring cancellations without finding new
flaws to fix. This may not be the prettiest approach, but it will be
cleaned up in 5.11.
fs/io_uring.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 984cc961871f..58da3489d791 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8495,6 +8495,7 @@ static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
}
static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ struct task_struct *task,
struct files_struct *files)
{
struct io_defer_entry *de = NULL;
@@ -8502,7 +8503,8 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
spin_lock_irq(&ctx->completion_lock);
list_for_each_entry_reverse(de, &ctx->defer_list, list) {
- if (io_match_files(de->req, files)) {
+ if (io_task_match(de->req, task) &&
+ io_match_files(de->req, files)) {
list_cut_position(&list, &ctx->defer_list, &de->list);
break;
}
@@ -8528,7 +8530,6 @@ static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
if (list_empty_careful(&ctx->inflight_list))
return false;
- io_cancel_defer_files(ctx, files);
/* cancel all at once, should be faster than doing it one by one*/
io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true);
@@ -8620,6 +8621,11 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
io_sq_thread_park(ctx->sq_data);
}
+ if (files)
+ io_cancel_defer_files(ctx, NULL, files);
+ else
+ io_cancel_defer_files(ctx, task, NULL);
+
io_cqring_overflow_flush(ctx, true, task, files);
while (__io_uring_cancel_task_requests(ctx, task, files)) {
--
2.24.0
* Re: [PATCH 5.10] io_uring: don't forget to task-cancel drained reqs
From: Jens Axboe @ 2020-11-05 16:53 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 11/5/20 7:06 AM, Pavel Begunkov wrote:
> If a long-standing request from one task blocks execution of deferred
> requests, and the defer list contains requests from another task (all
> without files), then __io_uring_task_cancel() called by that second task
> will sleep until the long-standing request completes, and that may take
> arbitrarily long.
>
> E.g.
> tsk1: req1/read(empty_pipe) -> tsk2: req(DRAIN)
> Then __io_uring_task_cancel(tsk2) waits for req1 completion.
>
> It even seems possible to manufacture a complicated case, with many
> tasks sharing many rings, that locks them up forever.
>
> Cancel deferred requests for __io_uring_task_cancel() as well.
Thanks, applied.
--
Jens Axboe