From: Pavel Begunkov <[email protected]>
To: [email protected]
Cc: Jens Axboe <[email protected]>,
	[email protected],
	[email protected]
Subject: [PATCH 1/3] io_uring: fix nested timeout locking on disarming
Date: Wed, 20 Apr 2022 13:40:53 +0100
Message-ID: <6fe65900e689b8f5b8d60ba3fa95108138b9fb9f.1650458197.git.asml.silence@gmail.com>
In-Reply-To: <[email protected]>

WARNING: possible recursive locking detected

syz-executor162/3588 is trying to acquire lock:
ffff888011a453d8 (&ctx->timeout_lock){....}-{2:2}, at: spin_lock_irq include/linux/spinlock.h:379 [inline]
ffff888011a453d8 (&ctx->timeout_lock){....}-{2:2}, at: io_disarm_next+0x545/0xaa0 fs/io_uring.c:2452
but task is already holding lock:
ffff888011a453d8 (&ctx->timeout_lock){....}-{2:2}, at: spin_lock_irq include/linux/spinlock.h:379 [inline]
ffff888011a453d8 (&ctx->timeout_lock){....}-{2:2}, at: io_kill_timeouts+0x4c/0x227 fs/io_uring.c:10432

Call Trace:
 <TASK>
...
 spin_lock_irq include/linux/spinlock.h:379 [inline]
 io_disarm_next+0x545/0xaa0 fs/io_uring.c:2452
 __io_req_complete_post+0x794/0xd90 fs/io_uring.c:2200
 io_kill_timeout fs/io_uring.c:1815 [inline]
 io_kill_timeout+0x210/0x21d fs/io_uring.c:1803
 io_kill_timeouts+0xe2/0x227 fs/io_uring.c:10435
 io_ring_ctx_wait_and_kill+0x1eb/0x360 fs/io_uring.c:10462
 io_uring_release+0x42/0x46 fs/io_uring.c:10483
 __fput+0x277/0x9d0 fs/file_table.c:317
 task_work_run+0xdd/0x1a0 kernel/task_work.c:164
...
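
The recursion, condensed from the trace:

 io_kill_timeouts()             <- takes ->timeout_lock
   io_kill_timeout()
     __io_req_complete_post()
       io_disarm_next()         <- takes ->timeout_lock again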

Bring tw-deferred putting back; it's easier than auditing all the
potential nested locking. However, instead of filling a CQE on the spot
as was done before, delay that as well by going through
io_req_complete_post() via tw.
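
For illustration, a minimal self-contained userspace analogy of the
pattern (all names below are made up for the sketch and are not kernel
APIs, except where comments point back at functions in the patch; the
deferred-list drain stands in for task work running later in task
context):

/*
 * Userspace sketch of the fix's idea: never complete a request while
 * holding the lock, because completion may re-enter a path that takes
 * the same lock. Instead, stash the result and defer the completion.
 */
#include <pthread.h>
#include <stdio.h>

struct request {
	int res;			/* mirrors stashing into req->cqe.res */
	struct request *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct request *deferred;	/* drained with 'lock' dropped */

/* In the real code this path may take the lock again (cf. io_disarm_next()). */
static void complete_request(struct request *req)
{
	printf("completed, res=%d\n", req->res);
}

/* Analogue of io_req_tw_post_queue(): record the result, punt the rest. */
static void defer_completion(struct request *req, int res)
{
	req->res = res;
	req->next = deferred;
	deferred = req;
}

static void kill_requests(struct request **reqs, int n)
{
	int i;

	pthread_mutex_lock(&lock);
	for (i = 0; i < n; i++)
		defer_completion(reqs[i], -1);	/* safe: no completion here */
	pthread_mutex_unlock(&lock);

	/* Stand-in for task work: completions run without the lock held. */
	while (deferred) {
		struct request *req = deferred;

		deferred = req->next;
		complete_request(req);
	}
}

int main(void)
{
	struct request a = { 0 }, b = { 0 };
	struct request *reqs[] = { &a, &b };

	kill_requests(reqs, 2);
	return 0;
}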

Reported-by: [email protected]
Fixes: 78bfbdd1a497 ("io_uring: kill io_put_req_deferred()")
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3905b3ec87b8..2b9a3af9ff42 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1214,6 +1214,7 @@ static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags);
 
 static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer);
 static void io_eventfd_signal(struct io_ring_ctx *ctx);
+static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
 
 static struct kmem_cache *req_cachep;
 
@@ -1782,7 +1783,7 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
 		atomic_set(&req->ctx->cq_timeouts,
 			atomic_read(&req->ctx->cq_timeouts) + 1);
 		list_del_init(&req->timeout.list);
-		__io_req_complete_post(req, status, 0);
+		io_req_tw_post_queue(req, status, 0);
 	}
 }
 
@@ -2367,7 +2368,7 @@ static bool io_kill_linked_timeout(struct io_kiocb *req)
 		link->timeout.head = NULL;
 		if (hrtimer_try_to_cancel(&io->timer) != -1) {
 			list_del(&link->timeout.list);
-			__io_req_complete_post(link, -ECANCELED, 0);
+			io_req_tw_post_queue(link, -ECANCELED, 0);
 			return true;
 		}
 	}
@@ -2413,7 +2414,7 @@ static bool io_disarm_next(struct io_kiocb *req)
 		req->flags &= ~REQ_F_ARM_LTIMEOUT;
 		if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
 			io_remove_next_linked(req);
-			__io_req_complete_post(link, -ECANCELED, 0);
+			io_req_tw_post_queue(link, -ECANCELED, 0);
 			posted = true;
 		}
 	} else if (req->flags & REQ_F_LINK_TIMEOUT) {
@@ -2632,6 +2633,19 @@ static void io_req_task_work_add(struct io_kiocb *req, bool priority)
 	}
 }
 
+static void io_req_tw_post(struct io_kiocb *req, bool *locked)
+{
+	io_req_complete_post(req, req->cqe.res, req->cqe.flags);
+}
+
+static void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
+{
+	req->cqe.res = res;
+	req->cqe.flags = cflags;
+	req->io_task_work.func = io_req_tw_post;
+	io_req_task_work_add(req, false);
+}
+
 static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
 {
 	/* not needed for normal modes, but SQPOLL depends on it */
-- 
2.36.0


Thread overview: 5+ messages
2022-04-20 12:40 [PATCH for-next 0/3] timeout fixes & improvements Pavel Begunkov
2022-04-20 12:40 ` Pavel Begunkov [this message]
2022-04-20 12:40 ` [PATCH 2/3] io_uring: move tout locking in io_timeout_cancel() Pavel Begunkov
2022-04-20 12:40 ` [PATCH 3/3] io_uring: refactor io_disarm_next() locking Pavel Begunkov
2022-04-20 22:24 ` [PATCH for-next 0/3] timeout fixes & improvements Jens Axboe
