public inbox for [email protected]
* [PATCH 0/4] 5.9 fixes
From: Pavel Begunkov @ 2020-07-25 11:41 UTC
  To: Jens Axboe, io-uring

A few mutually unrelated fixes and cleanups for 5.9

Pavel Begunkov (4):
  io_uring: mark ->work uninitialised after cleanup
  io_uring: fix missing io_queue_linked_timeout()
  io-wq: update hash bits
  io_uring: fix racy req->flags modification

 fs/io-wq.c    |  5 +----
 fs/io_uring.c | 38 +++++++++++++++++++++-----------------
 2 files changed, 22 insertions(+), 21 deletions(-)

-- 
2.24.0


* [PATCH 1/4] io_uring: mark ->work uninitialised after cleanup
From: Pavel Begunkov @ 2020-07-25 11:41 UTC
  To: Jens Axboe, io-uring

Clear REQ_F_WORK_INITIALIZED at the end of io_req_clean_work(). That is
a cold path, but doing the clearing there is safer for callers that use
io_req_clean_work() outside of *dismantle_req()/*io_free(). For the
same reason, zero work.fs as well.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c7e8e9a1b27b..59f1f473ffc7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1141,7 +1141,9 @@ static void io_req_clean_work(struct io_kiocb *req)
 		spin_unlock(&req->work.fs->lock);
 		if (fs)
 			free_fs_struct(fs);
+		req->work.fs = NULL;
 	}
+	req->flags &= ~REQ_F_WORK_INITIALIZED;
 }
 
 static void io_prep_async_work(struct io_kiocb *req)
@@ -4969,7 +4971,6 @@ static int io_poll_add(struct io_kiocb *req)
 
 	/* ->work is in union with hash_node and others */
 	io_req_clean_work(req);
-	req->flags &= ~REQ_F_WORK_INITIALIZED;
 
 	INIT_HLIST_NODE(&req->hash_node);
 	ipt.pt._qproc = io_poll_queue_proc;
-- 
2.24.0
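
For illustration, a minimal userspace sketch of the invariant this patch
restores; the type and helper names below are simplified stand-ins, not
kernel code. Once the cleanup helper resets both the flag and the fs
pointer, a repeated call from any other path is a harmless no-op:

#include <stdio.h>

#define REQ_F_WORK_INITIALIZED  (1U << 0)

struct req {
        unsigned int flags;
        void *work_fs;                  /* stand-in for req->work.fs */
};

static void req_clean_work(struct req *req)
{
        if (!(req->flags & REQ_F_WORK_INITIALIZED))
                return;                 /* already clean, nothing to do */
        /* ... release resources hanging off ->work ... */
        req->work_fs = NULL;
        req->flags &= ~REQ_F_WORK_INITIALIZED;
}

int main(void)
{
        struct req r = { .flags = REQ_F_WORK_INITIALIZED, .work_fs = &r };

        req_clean_work(&r);
        req_clean_work(&r);             /* safe to repeat after the patch */
        printf("flags=%#x fs=%p\n", r.flags, r.work_fs);
        return 0;
}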


* [PATCH 2/4] io_uring: fix missing io_queue_linked_timeout()
From: Pavel Begunkov @ 2020-07-25 11:41 UTC
  To: Jens Axboe, io-uring

Whoever calls io_prep_linked_timeout() should also call
io_queue_linked_timeout(). __io_queue_sqe() doesn't follow that rule on
the punting path, leaving linked timeouts prepared but never queued.

Fixes: 6df1db6b54243 ("io_uring: fix mis-refcounting linked timeouts")
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 59f1f473ffc7..3e406bc1f855 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5987,20 +5987,20 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	 * doesn't support non-blocking read/write attempts
 	 */
 	if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
-		if (io_arm_poll_handler(req)) {
-			if (linked_timeout)
-				io_queue_linked_timeout(linked_timeout);
-			goto exit;
-		}
+		if (!io_arm_poll_handler(req)) {
 punt:
-		ret = io_prep_work_files(req);
-		if (unlikely(ret))
-			goto err;
-		/*
-		 * Queued up for async execution, worker will release
-		 * submit reference when the iocb is actually submitted.
-		 */
-		io_queue_async_work(req);
+			ret = io_prep_work_files(req);
+			if (unlikely(ret))
+				goto err;
+			/*
+			 * Queued up for async execution, worker will release
+			 * submit reference when the iocb is actually submitted.
+			 */
+			io_queue_async_work(req);
+		}
+
+		if (linked_timeout)
+			io_queue_linked_timeout(linked_timeout);
 		goto exit;
 	}
 
-- 
2.24.0
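
A compilable toy model of the control flow the patch establishes; all
function names are simplified stand-ins for the io_uring ones. Whether
poll arming succeeds or the request is punted to the async worker, a
prepared linked timeout now always gets queued before the exit:

#include <stdbool.h>
#include <stdio.h>

static bool arm_poll_handler(void)
{
        return false;   /* pretend arming failed: take the punt path */
}

static void queue_async_work(void)      { puts("punted to io-wq"); }
static void queue_linked_timeout(void)  { puts("linked timeout queued"); }

int main(void)
{
        bool linked_timeout = true;     /* a timeout was prepared earlier */

        /* shape of the -EAGAIN path in __io_queue_sqe() after the patch */
        if (!arm_poll_handler())
                queue_async_work();
        /*
         * Both branches fall through to here; before the patch the punt
         * path jumped to exit without queueing the prepared timeout.
         */
        if (linked_timeout)
                queue_linked_timeout();
        return 0;
}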


* [PATCH 3/4] io-wq: update hash bits
From: Pavel Begunkov @ 2020-07-25 11:42 UTC
  To: Jens Axboe, io-uring

Linked requests are hashed; remove a comment stating otherwise. Also
move the hash calculation into the inner loop to emphasise that it is
not carried across iterations but recomputed every time.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io-wq.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 8702d3c3b291..e92c4724480c 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -490,7 +490,6 @@ static void io_worker_handle_work(struct io_worker *worker)
 
 	do {
 		struct io_wq_work *work;
-		unsigned int hash;
 get_next:
 		/*
 		 * If we got some work, mark us as busy. If we didn't, but
@@ -513,6 +512,7 @@ static void io_worker_handle_work(struct io_worker *worker)
 		/* handle a whole dependent link */
 		do {
 			struct io_wq_work *old_work, *next_hashed, *linked;
+			unsigned int hash = io_get_work_hash(work);
 
 			next_hashed = wq_next_work(work);
 			io_impersonate_work(worker, work);
@@ -523,7 +523,6 @@ static void io_worker_handle_work(struct io_worker *worker)
 			if (test_bit(IO_WQ_BIT_CANCEL, &wq->state))
 				work->flags |= IO_WQ_WORK_CANCEL;
 
-			hash = io_get_work_hash(work);
 			old_work = work;
 			linked = wq->do_work(work);
 
@@ -542,8 +541,6 @@ static void io_worker_handle_work(struct io_worker *worker)
 				spin_lock_irq(&wqe->lock);
 				wqe->hash_map &= ~BIT_ULL(hash);
 				wqe->flags &= ~IO_WQE_FLAG_STALLED;
-				/* dependent work is not hashed */
-				hash = -1U;
 				/* skip unnecessary unlock-lock wqe->lock */
 				if (!work)
 					goto get_next;
-- 
2.24.0
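
A small standalone sketch of the scoping change, with a toy hash
function standing in for io_get_work_hash(): the hash becomes a
per-iteration local recomputed from the current work item, so no stale
value has to be reset to -1U between passes of the loop:

#include <stdio.h>

struct work {
        unsigned int flags;
        struct work *next;
};

/* toy stand-in for io_get_work_hash() */
static unsigned int get_work_hash(const struct work *w)
{
        return w->flags >> 8;
}

int main(void)
{
        struct work c = { 3 << 8, NULL };
        struct work b = { 2 << 8, &c };
        struct work a = { 1 << 8, &b };

        for (struct work *w = &a; w; w = w->next) {
                /* fresh value every iteration, as in the patched loop */
                unsigned int hash = get_work_hash(w);

                printf("work flags=%#x -> hash %u\n", w->flags, hash);
        }
        return 0;
}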


* [PATCH 4/4] io_uring: fix racy req->flags modification
From: Pavel Begunkov @ 2020-07-25 11:42 UTC
  To: Jens Axboe, io-uring

Changing req->flags from any context other than the one where the
request is executed is racy, and io_uring_cancel_files() does exactly
that for overflowed requests. Instead, keep track of already-visited
requests by re-initialising the overflow list node.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3e406bc1f855..8eec2c5fbc9e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7873,8 +7873,11 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 
 		if (cancel_req->flags & REQ_F_OVERFLOW) {
 			spin_lock_irq(&ctx->completion_lock);
-			list_del(&cancel_req->compl.list);
-			cancel_req->flags &= ~REQ_F_OVERFLOW;
+
+			if (list_empty(&cancel_req->compl.list))
+				goto out_wait;
+			list_del_init(&cancel_req->compl.list);
+
 			if (list_empty(&ctx->cq_overflow_list)) {
 				clear_bit(0, &ctx->sq_check_overflow);
 				clear_bit(0, &ctx->cq_check_overflow);
@@ -7898,7 +7901,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 			io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
 			io_put_req(cancel_req);
 		}
-
+out_wait:
 		schedule();
 		finish_wait(&ctx->inflight_wait, &wait);
 	}
-- 
2.24.0
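
The reason list_del_init() can double as an "already visited" marker
without touching req->flags: it relinks the node to point back at
itself, which is exactly the state list_empty() tests for. A minimal
userspace re-implementation of the relevant list helpers (mirroring the
kernel's list.h semantics) demonstrates this; a second pass over the
same request then hits the list_empty() check above and jumps straight
to out_wait:

#include <stdio.h>

struct list_head {
        struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *list)
{
        list->next = list;
        list->prev = list;
}

static int list_empty(const struct list_head *head)
{
        return head->next == head;
}

static void list_add(struct list_head *entry, struct list_head *head)
{
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
}

static void list_del_init(struct list_head *entry)
{
        entry->next->prev = entry->prev;
        entry->prev->next = entry->next;
        INIT_LIST_HEAD(entry);  /* the node now reads as "empty" */
}

int main(void)
{
        struct list_head overflow_list, node;

        INIT_LIST_HEAD(&overflow_list);
        list_add(&node, &overflow_list);
        printf("on list: empty=%d\n", list_empty(&node));      /* 0 */
        list_del_init(&node);
        printf("visited: empty=%d\n", list_empty(&node));      /* 1 */
        return 0;
}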


* Re: [PATCH 0/4] 5.9 fixes
From: Jens Axboe @ 2020-07-25 15:48 UTC
  To: Pavel Begunkov, io-uring

On 7/25/20 5:41 AM, Pavel Begunkov wrote:
> A few mutually unrelated fixes and cleanups for 5.9

Applied, thanks.

-- 
Jens Axboe

