* [PATCH AUTOSEL 5.5 23/41] io-wq: fix IO_WQ_WORK_NO_CANCEL cancellation
From: Sasha Levin @ 2020-03-16 2:33 UTC
To: linux-kernel, stable
Cc: Pavel Begunkov, Jens Axboe, Sasha Levin, io-uring, linux-fsdevel
From: Pavel Begunkov <[email protected]>
[ Upstream commit fc04c39bae01a607454f7619665309870c60937a ]
To cancel a work item, io-wq sets IO_WQ_WORK_CANCEL and executes the
callback. However, a work item flagged IO_WQ_WORK_NO_CANCEL will just
execute anyway and may return the next linked work, which is then
ignored and lost.

Cancel the whole link instead.
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
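
For illustration, a minimal user-space sketch of the cancellation loop
this patch introduces. The struct io_wq_work below is a simplified
stand-in for the kernel type (flags, func, plus a hypothetical link
field and a name used only for the demo, with made-up flag values); it
just shows how a callback that hands back the next linked work through
the double-pointer argument keeps io_run_cancel() looping until the
whole link is cancelled, instead of dropping the returned work as the
old single-shot cancellation did.

#include <stdio.h>

/* Stand-in flag values for the demo, not the kernel's. */
#define IO_WQ_WORK_CANCEL	(1 << 0)
#define IO_WQ_WORK_NO_CANCEL	(1 << 1)

/* Simplified stand-in for the kernel's struct io_wq_work. */
struct io_wq_work {
	struct io_wq_work *link;	/* hypothetical link to the next work */
	unsigned flags;
	void (*func)(struct io_wq_work **);
	const char *name;		/* demo only */
};

/*
 * Model of an IO_WQ_WORK_NO_CANCEL callback: it runs even when
 * IO_WQ_WORK_CANCEL is set and hands back the next linked work
 * through the double pointer -- the value the old code dropped.
 */
static void demo_work_fn(struct io_wq_work **workptr)
{
	struct io_wq_work *work = *workptr;

	printf("executing %s (flags=0x%x)\n", work->name, work->flags);
	*workptr = work->link;		/* return next work, or NULL */
}

/* Same loop shape as the io_run_cancel() added by this patch. */
static void io_run_cancel(struct io_wq_work *work)
{
	do {
		struct io_wq_work *old_work = work;

		work->flags |= IO_WQ_WORK_CANCEL;
		work->func(&work);
		work = (work == old_work) ? NULL : work;
	} while (work);
}

int main(void)
{
	struct io_wq_work second = { NULL, 0, demo_work_fn, "second" };
	struct io_wq_work first  = { &second, IO_WQ_WORK_NO_CANCEL,
				     demo_work_fn, "first" };

	/* Both entries in the link get cancelled. */
	io_run_cancel(&first);
	return 0;
}

Compiled with cc, this runs both "first" and "second"; with the old
one-shot cancellation only "first" would have executed and the
returned "second" work would have been lost.
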
fs/io-wq.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/fs/io-wq.c b/fs/io-wq.c
index 25ffb6685baea..1f46fe663b287 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -733,6 +733,17 @@ static bool io_wq_can_queue(struct io_wqe *wqe, struct io_wqe_acct *acct,
 	return true;
 }
 
+static void io_run_cancel(struct io_wq_work *work)
+{
+	do {
+		struct io_wq_work *old_work = work;
+
+		work->flags |= IO_WQ_WORK_CANCEL;
+		work->func(&work);
+		work = (work == old_work) ? NULL : work;
+	} while (work);
+}
+
 static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
 {
 	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
@@ -745,8 +756,7 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
 	 * It's close enough to not be an issue, fork() has the same delay.
 	 */
 	if (unlikely(!io_wq_can_queue(wqe, acct, work))) {
-		work->flags |= IO_WQ_WORK_CANCEL;
-		work->func(&work);
+		io_run_cancel(work);
 		return;
 	}
 
@@ -882,8 +892,7 @@ static enum io_wq_cancel io_wqe_cancel_cb_work(struct io_wqe *wqe,
 	spin_unlock_irqrestore(&wqe->lock, flags);
 
 	if (found) {
-		work->flags |= IO_WQ_WORK_CANCEL;
-		work->func(&work);
+		io_run_cancel(work);
 		return IO_WQ_CANCEL_OK;
 	}
 
@@ -957,8 +966,7 @@ static enum io_wq_cancel io_wqe_cancel_work(struct io_wqe *wqe,
 	spin_unlock_irqrestore(&wqe->lock, flags);
 
 	if (found) {
-		work->flags |= IO_WQ_WORK_CANCEL;
-		work->func(&work);
+		io_run_cancel(work);
 		return IO_WQ_CANCEL_OK;
 	}
 
--
2.20.1
* [PATCH AUTOSEL 5.5 41/41] io_uring: fix lockup with timeouts
From: Sasha Levin @ 2020-03-16 2:33 UTC
To: linux-kernel, stable
Cc: Pavel Begunkov, Jens Axboe, Sasha Levin, linux-fsdevel, io-uring
From: Pavel Begunkov <[email protected]>
[ Upstream commit f0e20b8943509d81200cef5e30af2adfddba0f5c ]
There is a recipe to deadlock the kernel: submit a timeout sqe with a
linked timeout (e.g. test_single_link_timeout_ception() from liburing),
then SIGKILL the process.

io_kill_timeouts() takes @ctx->completion_lock, but the timeout request
isn't flagged with REQ_F_COMP_LOCKED, so the put/free path that cancels
the linked timeout (io_put_req()) will try to grab the lock a second
time. The same can probably happen at the other io_kill_timeout() call
site, io_commit_cqring().
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
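
For illustration, a minimal user-space sketch of the locking pattern
behind this one-liner. The struct ctx, struct req, complete_req() and
kill_timeout() below are simplified stand-ins, not io_uring's real
types, and a pthread mutex stands in for the kernel's completion_lock
spinlock; the sketch only shows why the completion/free path must be
told, via a flag like REQ_F_COMP_LOCKED, that the caller already holds
completion_lock, so it does not take the non-recursive lock a second
time and deadlock.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in value for the demo; the real flag lives in fs/io_uring.c. */
#define REQ_F_COMP_LOCKED	(1 << 0)	/* completion_lock already held */

/* Simplified stand-ins for io_uring's context and request. */
struct ctx {
	pthread_mutex_t completion_lock;
};

struct req {
	struct ctx *ctx;
	unsigned flags;
};

/*
 * Completion path: posting the completion requires completion_lock.
 * Without the flag it locks unconditionally; if the caller already
 * holds the (non-recursive) lock, that second attempt deadlocks.
 */
static void complete_req(struct req *req)
{
	bool already_locked = req->flags & REQ_F_COMP_LOCKED;

	if (!already_locked)
		pthread_mutex_lock(&req->ctx->completion_lock);

	printf("completion posted under completion_lock\n");

	if (!already_locked)
		pthread_mutex_unlock(&req->ctx->completion_lock);
}

/* Caller analogous to io_kill_timeouts(): it already holds the lock. */
static void kill_timeout(struct req *req)
{
	pthread_mutex_lock(&req->ctx->completion_lock);

	/* This is what the one-line fix adds before completing the request. */
	req->flags |= REQ_F_COMP_LOCKED;
	complete_req(req);

	pthread_mutex_unlock(&req->ctx->completion_lock);
}

int main(void)
{
	struct ctx ctx;
	struct req req = { &ctx, 0 };

	pthread_mutex_init(&ctx.completion_lock, NULL);
	kill_timeout(&req);
	pthread_mutex_destroy(&ctx.completion_lock);
	return 0;
}

Build with cc -pthread. Without the req->flags |= REQ_F_COMP_LOCKED
line in kill_timeout(), complete_req() would try to re-acquire the
already-held lock, which is the self-deadlock the patch avoids.
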
fs/io_uring.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 60a4832089982..fd28f85677225 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -688,6 +688,7 @@ static void io_kill_timeout(struct io_kiocb *req)
 	if (ret != -1) {
 		atomic_inc(&req->ctx->cq_timeouts);
 		list_del_init(&req->list);
+		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, 0);
 		io_put_req(req);
 	}
--
2.20.1