public inbox for [email protected]
* [PATCH 0/8] for-next cleanups
@ 2021-10-23 11:13 Pavel Begunkov
  2021-10-23 11:13 ` [PATCH 1/8] io-wq: use helper for worker refcounting Pavel Begunkov
                   ` (8 more replies)
  0 siblings, 9 replies; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

Let's clean up the just-merged async polling code to make it easier to
maintain; patches 2, 3 and 5 deal with that. Patches 6-8 are a resend.

Pavel Begunkov (8):
  io-wq: use helper for worker refcounting
  io_uring: clean io_wq_submit_work()'s main loop
  io_uring: clean iowq submit work cancellation
  io_uring: check if opcode needs poll first on arming
  io_uring: don't try io-wq polling if not supported
  io_uring: clean up timeout async_data allocation
  io_uring: kill unused param from io_file_supports_nowait
  io_uring: clusterise ki_flags access in rw_prep

 fs/io-wq.c    |   3 +-
 fs/io_uring.c | 115 ++++++++++++++++++++++----------------------------
 2 files changed, 52 insertions(+), 66 deletions(-)

-- 
2.33.1



* [PATCH 1/8] io-wq: use helper for worker refcounting
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
@ 2021-10-23 11:13 ` Pavel Begunkov
  2021-10-23 11:13 ` [PATCH 2/8] io_uring: clean io_wq_submit_work()'s main loop Pavel Begunkov
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

Use io_worker_release() instead of hand-coding it in io_worker_exit().

Signed-off-by: Pavel Begunkov <[email protected]>
---
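For context: the helper wraps exactly the open-coded sequence removed in
the hunk below. A minimal sketch of what io_worker_release() is assumed
to look like in this version of fs/io-wq.c:

  static void io_worker_release(struct io_worker *worker)
  {
          /* drop our ref; the final put wakes io_worker_exit() via ref_done */
          if (refcount_dec_and_test(&worker->ref))
                  complete(&worker->ref_done);
  }
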
 fs/io-wq.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 811299ac9684..0c283bb18fb2 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -178,8 +178,7 @@ static void io_worker_exit(struct io_worker *worker)
 {
 	struct io_wqe *wqe = worker->wqe;
 
-	if (refcount_dec_and_test(&worker->ref))
-		complete(&worker->ref_done);
+	io_worker_release(worker);
 	wait_for_completion(&worker->ref_done);
 
 	raw_spin_lock(&wqe->lock);
-- 
2.33.1



* [PATCH 2/8] io_uring: clean io_wq_submit_work()'s main loop
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
  2021-10-23 11:13 ` [PATCH 1/8] io-wq: use helper for worker refcounting Pavel Begunkov
@ 2021-10-23 11:13 ` Pavel Begunkov
  2021-10-25  8:53   ` Hao Xu
  2021-10-23 11:13 ` [PATCH 3/8] io_uring: clean iowq submit work cancellation Pavel Begunkov
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

Do a bit of cleaning for the main loop of io_wq_submit_work(). Get rid
of the switch and replace it with a single if, as we're retrying in
both of the other cases. Kill the issue_sqe label, remove the
needs_poll nesting and clarify the comment.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 40 ++++++++++++----------------------------
 1 file changed, 12 insertions(+), 28 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 736d456e7913..7f92523c1282 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6749,40 +6749,24 @@ static void io_wq_submit_work(struct io_wq_work *work)
 		}
 
 		do {
-issue_sqe:
 			ret = io_issue_sqe(req, issue_flags);
+			if (ret != -EAGAIN)
+				break;
 			/*
-			 * We can get EAGAIN for polled IO even though we're
+			 * We can get EAGAIN for iopolled IO even though we're
 			 * forcing a sync submission from here, since we can't
 			 * wait for request slots on the block side.
 			 */
-			if (ret != -EAGAIN)
-				break;
-			if (needs_poll) {
-				bool armed = false;
-
-				ret = 0;
-				needs_poll = false;
-				issue_flags &= ~IO_URING_F_NONBLOCK;
-
-				switch (io_arm_poll_handler(req)) {
-				case IO_APOLL_READY:
-					goto issue_sqe;
-				case IO_APOLL_ABORTED:
-					/*
-					 * somehow we failed to arm the poll infra,
-					 * fallback it to a normal async worker try.
-					 */
-					break;
-				case IO_APOLL_OK:
-					armed = true;
-					break;
-				}
-
-				if (armed)
-					break;
+			if (!needs_poll) {
+				cond_resched();
+				continue;
 			}
-			cond_resched();
+
+			if (io_arm_poll_handler(req) == IO_APOLL_OK)
+				return;
+			/* aborted or ready, in either case retry blocking */
+			needs_poll = false;
+			issue_flags &= ~IO_URING_F_NONBLOCK;
 		} while (1);
 	}
 
-- 
2.33.1



* [PATCH 3/8] io_uring: clean iowq submit work cancellation
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
  2021-10-23 11:13 ` [PATCH 1/8] io-wq: use helper for worker refcounting Pavel Begunkov
  2021-10-23 11:13 ` [PATCH 2/8] io_uring: clean io_wq_submit_work()'s main loop Pavel Begunkov
@ 2021-10-23 11:13 ` Pavel Begunkov
  2021-10-25  8:57   ` Hao Xu
  2021-10-23 11:13 ` [PATCH 4/8] io_uring: check if opcode needs poll first on arming Pavel Begunkov
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

If we've got IO_WQ_WORK_CANCEL in io_wq_submit_work(), handle the error
right where the flag is checked instead of having a weird code flow. The
main loop doesn't change but moves one indentation level to the left.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 59 +++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 30 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7f92523c1282..58cb3a14d58e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6721,6 +6721,8 @@ static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
 static void io_wq_submit_work(struct io_wq_work *work)
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+	unsigned int issue_flags = IO_URING_F_UNLOCKED;
+	bool needs_poll = false;
 	struct io_kiocb *timeout;
 	int ret = 0;
 
@@ -6735,40 +6737,37 @@ static void io_wq_submit_work(struct io_wq_work *work)
 		io_queue_linked_timeout(timeout);
 
 	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
-	if (work->flags & IO_WQ_WORK_CANCEL)
-		ret = -ECANCELED;
+	if (work->flags & IO_WQ_WORK_CANCEL) {
+		io_req_task_queue_fail(req, -ECANCELED);
+		return;
+	}
 
-	if (!ret) {
-		bool needs_poll = false;
-		unsigned int issue_flags = IO_URING_F_UNLOCKED;
+	if (req->flags & REQ_F_FORCE_ASYNC) {
+		needs_poll = req->file && file_can_poll(req->file);
+		if (needs_poll)
+			issue_flags |= IO_URING_F_NONBLOCK;
+	}
 
-		if (req->flags & REQ_F_FORCE_ASYNC) {
-			needs_poll = req->file && file_can_poll(req->file);
-			if (needs_poll)
-				issue_flags |= IO_URING_F_NONBLOCK;
+	do {
+		ret = io_issue_sqe(req, issue_flags);
+		if (ret != -EAGAIN)
+			break;
+		/*
+		 * We can get EAGAIN for iopolled IO even though we're
+		 * forcing a sync submission from here, since we can't
+		 * wait for request slots on the block side.
+		 */
+		if (!needs_poll) {
+			cond_resched();
+			continue;
 		}
 
-		do {
-			ret = io_issue_sqe(req, issue_flags);
-			if (ret != -EAGAIN)
-				break;
-			/*
-			 * We can get EAGAIN for iopolled IO even though we're
-			 * forcing a sync submission from here, since we can't
-			 * wait for request slots on the block side.
-			 */
-			if (!needs_poll) {
-				cond_resched();
-				continue;
-			}
-
-			if (io_arm_poll_handler(req) == IO_APOLL_OK)
-				return;
-			/* aborted or ready, in either case retry blocking */
-			needs_poll = false;
-			issue_flags &= ~IO_URING_F_NONBLOCK;
-		} while (1);
-	}
+		if (io_arm_poll_handler(req) == IO_APOLL_OK)
+			return;
+		/* aborted or ready, in either case retry blocking */
+		needs_poll = false;
+		issue_flags &= ~IO_URING_F_NONBLOCK;
+	} while (1);
 
 	/* avoid locking problems by failing it from a clean context */
 	if (ret)
-- 
2.33.1



* [PATCH 4/8] io_uring: check if opcode needs poll first on arming
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (2 preceding siblings ...)
  2021-10-23 11:13 ` [PATCH 3/8] io_uring: clean iowq submit work cancellation Pavel Begunkov
@ 2021-10-23 11:13 ` Pavel Begunkov
  2021-10-25  8:58   ` Hao Xu
  2021-10-23 11:13 ` [PATCH 5/8] io_uring: don't try io-wq polling if not supported Pavel Begunkov
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

->pollout and ->pollin are set only for opcodes that need a file, so if
io_arm_poll_handler() tests them first we can be sure that the request
has a file set and the ->file check can be removed.

Signed-off-by: Pavel Begunkov <[email protected]>
---
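For reference, this relies on how the opcode table is laid out. An
illustrative (not verbatim) excerpt of io_op_defs, using the field names
of this kernel version:

  static const struct io_op_def io_op_defs[] = {
          [IORING_OP_READV] = {
                  .needs_file     = 1,    /* file-backed opcode...          */
                  .pollin         = 1,    /* ...so pollin implies req->file */
          },
          [IORING_OP_NOP] = {
                  /* no needs_file, no pollin/pollout: aborted before
                   * req->file is ever dereferenced */
          },
  };

Every entry that sets ->pollin or ->pollout also sets ->needs_file, so
testing the opcode bits first guarantees req->file is non-NULL by the
time file_can_poll() runs.
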
 fs/io_uring.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 58cb3a14d58e..bff911f951ed 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5584,12 +5584,10 @@ static int io_arm_poll_handler(struct io_kiocb *req)
 	struct io_poll_table ipt;
 	__poll_t ret, mask = EPOLLONESHOT | POLLERR | POLLPRI;
 
-	if (!req->file || !file_can_poll(req->file))
-		return IO_APOLL_ABORTED;
-	if (req->flags & REQ_F_POLLED)
-		return IO_APOLL_ABORTED;
 	if (!def->pollin && !def->pollout)
 		return IO_APOLL_ABORTED;
+	if (!file_can_poll(req->file) || (req->flags & REQ_F_POLLED))
+		return IO_APOLL_ABORTED;
 
 	if (def->pollin) {
 		mask |= POLLIN | POLLRDNORM;
-- 
2.33.1



* [PATCH 5/8] io_uring: don't try io-wq polling if not supported
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (3 preceding siblings ...)
  2021-10-23 11:13 ` [PATCH 4/8] io_uring: check if opcode needs poll first on arming Pavel Begunkov
@ 2021-10-23 11:13 ` Pavel Begunkov
  2021-10-25  8:58   ` Hao Xu
  2021-10-23 11:14 ` [PATCH 6/8] io_uring: clean up timeout async_data allocation Pavel Begunkov
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:13 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

If an opcode doesn't support polling, just let it be executed
synchronously in io-wq; otherwise it will make a nonblock attempt only
to fail in io_arm_poll_handler() and fall back to blocking execution
anyway.

Signed-off-by: Pavel Begunkov <[email protected]>
---
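The wasted round trip comes from the opcode check at the top of
io_arm_poll_handler() (see the previous patch in this series):

  if (!def->pollin && !def->pollout)
          return IO_APOLL_ABORTED;

When neither flag is set, the nonblock io_issue_sqe() attempt and the
poll arming are both guaranteed to fail, so consulting io_op_defs up
front in io_wq_submit_work() skips the detour and goes straight to
blocking execution.
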
 fs/io_uring.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index bff911f951ed..c6f32fcf387b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6741,9 +6741,13 @@ static void io_wq_submit_work(struct io_wq_work *work)
 	}
 
 	if (req->flags & REQ_F_FORCE_ASYNC) {
-		needs_poll = req->file && file_can_poll(req->file);
-		if (needs_poll)
+		const struct io_op_def *def = &io_op_defs[req->opcode];
+		bool opcode_poll = def->pollin || def->pollout;
+
+		if (opcode_poll && file_can_poll(req->file)) {
+			needs_poll = true;
 			issue_flags |= IO_URING_F_NONBLOCK;
+		}
 	}
 
 	do {
-- 
2.33.1



* [PATCH 6/8] io_uring: clean up timeout async_data allocation
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (4 preceding siblings ...)
  2021-10-23 11:13 ` [PATCH 5/8] io_uring: don't try io-wq polling if not supported Pavel Begunkov
@ 2021-10-23 11:14 ` Pavel Begunkov
  2021-10-23 11:14 ` [PATCH 7/8] io_uring: kill unused param from io_file_supports_nowait Pavel Begunkov
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

Opcode prep functions are among the first things called for a request,
so ->async_data can't have been allocated at this point; if it has,
it's certainly a bug. Reflect this assumption in io_timeout_prep() and
add a WARN_ON_ONCE just in case.

Signed-off-by: Pavel Begunkov <[email protected]>
---
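For reference, the new check leans on the usual flag-based bookkeeping;
a sketch, assuming req_has_async_data() is the plain REQ_F_ASYNC_DATA
test it is in this version of fs/io_uring.c:

  static inline bool req_has_async_data(struct io_kiocb *req)
  {
          /* REQ_F_ASYNC_DATA is set once ->async_data has been allocated */
          return req->flags & REQ_F_ASYNC_DATA;
  }

Seeing the flag already set in prep means state leaked in from an
earlier stage, hence the loud WARN_ON_ONCE and -EFAULT instead of
silently reusing whatever is there.
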
 fs/io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c6f32fcf387b..e775529a36d8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6113,7 +6113,9 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (unlikely(off && !req->ctx->off_timeout_used))
 		req->ctx->off_timeout_used = true;
 
-	if (!req_has_async_data(req) && io_alloc_async_data(req))
+	if (WARN_ON_ONCE(req_has_async_data(req)))
+		return -EFAULT;
+	if (io_alloc_async_data(req))
 		return -ENOMEM;
 
 	data = req->async_data;
-- 
2.33.1



* [PATCH 7/8] io_uring: kill unused param from io_file_supports_nowait
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (5 preceding siblings ...)
  2021-10-23 11:14 ` [PATCH 6/8] io_uring: clean up timeout async_data allocation Pavel Begunkov
@ 2021-10-23 11:14 ` Pavel Begunkov
  2021-10-23 11:14 ` [PATCH 8/8] io_uring: clusterise ki_flags access in rw_prep Pavel Begunkov
  2021-10-23 14:09 ` [PATCH 0/8] for-next cleanups Jens Axboe
  8 siblings, 0 replies; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

io_file_supports_nowait() doesn't use the rw argument anymore, so remove it.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e775529a36d8..7042ed870b52 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2809,8 +2809,7 @@ static inline bool io_file_supports_nowait(struct io_kiocb *req)
 	return req->flags & REQ_F_SUPPORT_NOWAIT;
 }
 
-static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-		      int rw)
+static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw.kiocb;
@@ -3352,7 +3351,7 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	if (unlikely(!(req->file->f_mode & FMODE_READ)))
 		return -EBADF;
-	return io_prep_rw(req, sqe, READ);
+	return io_prep_rw(req, sqe);
 }
 
 /*
@@ -3568,7 +3567,7 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
 		return -EBADF;
-	return io_prep_rw(req, sqe, WRITE);
+	return io_prep_rw(req, sqe);
 }
 
 static int io_write(struct io_kiocb *req, unsigned int issue_flags)
-- 
2.33.1



* [PATCH 8/8] io_uring: clusterise ki_flags access in rw_prep
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (6 preceding siblings ...)
  2021-10-23 11:14 ` [PATCH 7/8] io_uring: kill unused param from io_file_supports_nowait Pavel Begunkov
@ 2021-10-23 11:14 ` Pavel Begunkov
  2021-10-23 14:09 ` [PATCH 0/8] for-next cleanups Jens Axboe
  8 siblings, 0 replies; 16+ messages in thread
From: Pavel Begunkov @ 2021-10-23 11:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Hao Xu, Pavel Begunkov

The ioprio setup doesn't depend on any other fields modified in
io_prep_rw(), so we can move it further down the function without
worrying about performance. It's useful because it brings the
iocb->ki_flags accesses and modifications closer together, making it
more likely that the compiler keeps the flags in a register and avoids
extra reloads.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7042ed870b52..bba2f77ae7e7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2840,16 +2840,6 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	    ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
 		req->flags |= REQ_F_NOWAIT;
 
-	ioprio = READ_ONCE(sqe->ioprio);
-	if (ioprio) {
-		ret = ioprio_check_cap(ioprio);
-		if (ret)
-			return ret;
-
-		kiocb->ki_ioprio = ioprio;
-	} else
-		kiocb->ki_ioprio = get_current_ioprio();
-
 	if (ctx->flags & IORING_SETUP_IOPOLL) {
 		if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
 			return -EOPNOTSUPP;
@@ -2863,6 +2853,17 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kiocb->ki_complete = io_complete_rw;
 	}
 
+	ioprio = READ_ONCE(sqe->ioprio);
+	if (ioprio) {
+		ret = ioprio_check_cap(ioprio);
+		if (ret)
+			return ret;
+
+		kiocb->ki_ioprio = ioprio;
+	} else {
+		kiocb->ki_ioprio = get_current_ioprio();
+	}
+
 	req->imu = NULL;
 	req->rw.addr = READ_ONCE(sqe->addr);
 	req->rw.len = READ_ONCE(sqe->len);
-- 
2.33.1



* Re: [PATCH 0/8] for-next cleanups
  2021-10-23 11:13 [PATCH 0/8] for-next cleanups Pavel Begunkov
                   ` (7 preceding siblings ...)
  2021-10-23 11:14 ` [PATCH 8/8] io_uring: clusterise ki_flags access in rw_prep Pavel Begunkov
@ 2021-10-23 14:09 ` Jens Axboe
  8 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-10-23 14:09 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Hao Xu

On Sat, 23 Oct 2021 12:13:54 +0100, Pavel Begunkov wrote:
> Let's clean up the just-merged async polling code to make it easier to
> maintain; patches 2, 3 and 5 deal with that. Patches 6-8 are a resend.
> 
> Pavel Begunkov (8):
>   io-wq: use helper for worker refcounting
>   io_uring: clean io_wq_submit_work()'s main loop
>   io_uring: clean iowq submit work cancellation
>   io_uring: check if opcode needs poll first on arming
>   io_uring: don't try io-wq polling if not supported
>   io_uring: clean up timeout async_data allocation
>   io_uring: kill unused param from io_file_supports_nowait
>   io_uring: clusterise ki_flags access in rw_prep
> 
> [...]

Applied, thanks!

[1/8] io-wq: use helper for worker refcounting
      (no commit info)
[2/8] io_uring: clean io_wq_submit_work()'s main loop
      (no commit info)
[3/8] io_uring: clean iowq submit work cancellation
      (no commit info)
[4/8] io_uring: check if opcode needs poll first on arming
      (no commit info)
[5/8] io_uring: don't try io-wq polling if not supported
      (no commit info)
[6/8] io_uring: clean up timeout async_data allocation
      (no commit info)
[7/8] io_uring: kill unused param from io_file_supports_nowait
      (no commit info)
[8/8] io_uring: clusterise ki_flags access in rw_prep
      (no commit info)

Best regards,
-- 
Jens Axboe




* Re: [PATCH 2/8] io_uring: clean io_wq_submit_work()'s main loop
  2021-10-23 11:13 ` [PATCH 2/8] io_uring: clean io_wq_submit_work()'s main loop Pavel Begunkov
@ 2021-10-25  8:53   ` Hao Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Hao Xu @ 2021-10-25  8:53 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 2021/10/23 7:13 PM, Pavel Begunkov wrote:
> Do a bit of cleaning for the main loop of io_wq_submit_work(). Get rid
> of the switch and replace it with a single if, as we're retrying in
> both of the other cases. Kill the issue_sqe label, remove the
> needs_poll nesting and clarify the comment.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
Reviewed-by: Hao Xu <[email protected]>
>   fs/io_uring.c | 40 ++++++++++++----------------------------
>   1 file changed, 12 insertions(+), 28 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 736d456e7913..7f92523c1282 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6749,40 +6749,24 @@ static void io_wq_submit_work(struct io_wq_work *work)
>   		}
>   
>   		do {
> -issue_sqe:
>   			ret = io_issue_sqe(req, issue_flags);
> +			if (ret != -EAGAIN)
> +				break;
>   			/*
> -			 * We can get EAGAIN for polled IO even though we're
> +			 * We can get EAGAIN for iopolled IO even though we're
>   			 * forcing a sync submission from here, since we can't
>   			 * wait for request slots on the block side.
>   			 */
> -			if (ret != -EAGAIN)
> -				break;
> -			if (needs_poll) {
> -				bool armed = false;
> -
> -				ret = 0;
> -				needs_poll = false;
> -				issue_flags &= ~IO_URING_F_NONBLOCK;
> -
> -				switch (io_arm_poll_handler(req)) {
> -				case IO_APOLL_READY:
> -					goto issue_sqe;
> -				case IO_APOLL_ABORTED:
> -					/*
> -					 * somehow we failed to arm the poll infra,
> -					 * fallback it to a normal async worker try.
> -					 */
> -					break;
> -				case IO_APOLL_OK:
> -					armed = true;
> -					break;
> -				}
> -
> -				if (armed)
> -					break;
> +			if (!needs_poll) {
> +				cond_resched();
> +				continue;
>   			}
> -			cond_resched();
> +
> +			if (io_arm_poll_handler(req) == IO_APOLL_OK)
> +				return;
> +			/* aborted or ready, in either case retry blocking */
> +			needs_poll = false;
> +			issue_flags &= ~IO_URING_F_NONBLOCK;
>   		} while (1);
>   	}
>   
> 



* Re: [PATCH 3/8] io_uring: clean iowq submit work cancellation
  2021-10-23 11:13 ` [PATCH 3/8] io_uring: clean iowq submit work cancellation Pavel Begunkov
@ 2021-10-25  8:57   ` Hao Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Hao Xu @ 2021-10-25  8:57 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 2021/10/23 7:13 PM, Pavel Begunkov wrote:
> If we've got IO_WQ_WORK_CANCEL in io_wq_submit_work(), handle the error
> right where the flag is checked instead of having a weird code flow. The
> main loop doesn't change but moves one indentation level to the left.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
Reviewed-by: Hao Xu <[email protected]>
>   fs/io_uring.c | 59 +++++++++++++++++++++++++--------------------------
>   1 file changed, 29 insertions(+), 30 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 7f92523c1282..58cb3a14d58e 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6721,6 +6721,8 @@ static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
>   static void io_wq_submit_work(struct io_wq_work *work)
>   {
>   	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> +	unsigned int issue_flags = IO_URING_F_UNLOCKED;
> +	bool needs_poll = false;
>   	struct io_kiocb *timeout;
>   	int ret = 0;
>   
> @@ -6735,40 +6737,37 @@ static void io_wq_submit_work(struct io_wq_work *work)
>   		io_queue_linked_timeout(timeout);
>   
>   	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
> -	if (work->flags & IO_WQ_WORK_CANCEL)
> -		ret = -ECANCELED;
> +	if (work->flags & IO_WQ_WORK_CANCEL) {
> +		io_req_task_queue_fail(req, -ECANCELED);
> +		return;
> +	}
>   
> -	if (!ret) {
> -		bool needs_poll = false;
> -		unsigned int issue_flags = IO_URING_F_UNLOCKED;
> +	if (req->flags & REQ_F_FORCE_ASYNC) {
> +		needs_poll = req->file && file_can_poll(req->file);
> +		if (needs_poll)
> +			issue_flags |= IO_URING_F_NONBLOCK;
> +	}
>   
> -		if (req->flags & REQ_F_FORCE_ASYNC) {
> -			needs_poll = req->file && file_can_poll(req->file);
> -			if (needs_poll)
> -				issue_flags |= IO_URING_F_NONBLOCK;
> +	do {
> +		ret = io_issue_sqe(req, issue_flags);
> +		if (ret != -EAGAIN)
> +			break;
> +		/*
> +		 * We can get EAGAIN for iopolled IO even though we're
> +		 * forcing a sync submission from here, since we can't
> +		 * wait for request slots on the block side.
> +		 */
> +		if (!needs_poll) {
> +			cond_resched();
> +			continue;
>   		}
>   
> -		do {
> -			ret = io_issue_sqe(req, issue_flags);
> -			if (ret != -EAGAIN)
> -				break;
> -			/*
> -			 * We can get EAGAIN for iopolled IO even though we're
> -			 * forcing a sync submission from here, since we can't
> -			 * wait for request slots on the block side.
> -			 */
> -			if (!needs_poll) {
> -				cond_resched();
> -				continue;
> -			}
> -
> -			if (io_arm_poll_handler(req) == IO_APOLL_OK)
> -				return;
> -			/* aborted or ready, in either case retry blocking */
> -			needs_poll = false;
> -			issue_flags &= ~IO_URING_F_NONBLOCK;
> -		} while (1);
> -	}
> +		if (io_arm_poll_handler(req) == IO_APOLL_OK)
> +			return;
> +		/* aborted or ready, in either case retry blocking */
> +		needs_poll = false;
> +		issue_flags &= ~IO_URING_F_NONBLOCK;
> +	} while (1);
>   
>   	/* avoid locking problems by failing it from a clean context */
>   	if (ret)
> 



* Re: [PATCH 4/8] io_uring: check if opcode needs poll first on arming
  2021-10-23 11:13 ` [PATCH 4/8] io_uring: check if opcode needs poll first on arming Pavel Begunkov
@ 2021-10-25  8:58   ` Hao Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Hao Xu @ 2021-10-25  8:58 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 2021/10/23 7:13 PM, Pavel Begunkov wrote:
> ->pollout and ->pollin are set only for opcodes that need a file, so if
> io_arm_poll_handler() tests them first we can be sure that the request
> has a file set and the ->file check can be removed.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
Reviewed-by: Hao Xu <[email protected]>
>   fs/io_uring.c | 6 ++----
>   1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 58cb3a14d58e..bff911f951ed 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -5584,12 +5584,10 @@ static int io_arm_poll_handler(struct io_kiocb *req)
>   	struct io_poll_table ipt;
>   	__poll_t ret, mask = EPOLLONESHOT | POLLERR | POLLPRI;
>   
> -	if (!req->file || !file_can_poll(req->file))
> -		return IO_APOLL_ABORTED;
> -	if (req->flags & REQ_F_POLLED)
> -		return IO_APOLL_ABORTED;
>   	if (!def->pollin && !def->pollout)
>   		return IO_APOLL_ABORTED;
> +	if (!file_can_poll(req->file) || (req->flags & REQ_F_POLLED))
> +		return IO_APOLL_ABORTED;
>   
>   	if (def->pollin) {
>   		mask |= POLLIN | POLLRDNORM;
> 



* Re: [PATCH 5/8] io_uring: don't try io-wq polling if not supported
  2021-10-23 11:13 ` [PATCH 5/8] io_uring: don't try io-wq polling if not supported Pavel Begunkov
@ 2021-10-25  8:58   ` Hao Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Hao Xu @ 2021-10-25  8:58 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 2021/10/23 7:13 PM, Pavel Begunkov wrote:
> If an opcode doesn't support polling, just let it be executed
> synchronously in io-wq; otherwise it will make a nonblock attempt only
> to fail in io_arm_poll_handler() and fall back to blocking execution
> anyway.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
Reviewed-by: Hao Xu <[email protected]>
>   fs/io_uring.c | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index bff911f951ed..c6f32fcf387b 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6741,9 +6741,13 @@ static void io_wq_submit_work(struct io_wq_work *work)
>   	}
>   
>   	if (req->flags & REQ_F_FORCE_ASYNC) {
> -		needs_poll = req->file && file_can_poll(req->file);
> -		if (needs_poll)
> +		const struct io_op_def *def = &io_op_defs[req->opcode];
> +		bool opcode_poll = def->pollin || def->pollout;
> +
> +		if (opcode_poll && file_can_poll(req->file)) {
> +			needs_poll = true;
>   			issue_flags |= IO_URING_F_NONBLOCK;
> +		}
>   	}
>   
>   	do {
> 



* [PATCH 0/8] for-next cleanups
@ 2023-04-11 11:06 Pavel Begunkov
  2023-04-12 18:10 ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Pavel Begunkov @ 2023-04-11 11:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

A random set of cleanups

Patches 1 and 2 address kernel test robot warnings.
Patches 3 and 4 add some extra lockdep checks.
The rest do minor rsrc cleanups.
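
The lockdep additions boil down to assertions of the form below; this is
a generic sketch of the idiom, not the actual contents of the patches:

  /* complain once with a splat if the ring lock isn't held here */
  lockdep_assert_held(&ctx->uring_lock);

  /* complain if we got here with hard interrupts disabled */
  lockdep_assert(!irqs_disabled());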

Pavel Begunkov (8):
  io_uring: shut io_prep_async_work warning
  io_uring/kbuf: remove extra ->buf_ring null check
  io_uring: add irq lockdep checks
  io_uring/rsrc: add lockdep checks
  io_uring/rsrc: consolidate node caching
  io_uring/rsrc: zero node's rsrc data on alloc
  io_uring/rsrc: refactor io_rsrc_node_switch
  io_uring/rsrc: extract SCM file put helper

 include/linux/io_uring_types.h |  1 -
 io_uring/alloc_cache.h         |  5 +++
 io_uring/io_uring.c            |  9 ++---
 io_uring/io_uring.h            |  2 +
 io_uring/kbuf.c                | 14 +++----
 io_uring/rsrc.c                | 73 +++++++++++++++-------------------
 io_uring/rsrc.h                |  7 ++++
 7 files changed, 56 insertions(+), 55 deletions(-)

-- 
2.40.0



* Re: [PATCH 0/8] for-next cleanups
  2023-04-11 11:06 Pavel Begunkov
@ 2023-04-12 18:10 ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2023-04-12 18:10 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov


On Tue, 11 Apr 2023 12:06:00 +0100, Pavel Begunkov wrote:
> A random set of cleanups
> 
> Patches 1 and 2 address kernel test robot warnings.
> Patches 3 and 4 add some extra lockdep checks.
> The rest do minor rsrc cleanups.
> 
> Pavel Begunkov (8):
>   io_uring: shut io_prep_async_work warning
>   io_uring/kbuf: remove extra ->buf_ring null check
>   io_uring: add irq lockdep checks
>   io_uring/rsrc: add lockdep checks
>   io_uring/rsrc: consolidate node caching
>   io_uring/rsrc: zero node's rsrc data on alloc
>   io_uring/rsrc: refactor io_rsrc_node_switch
>   io_uring/rsrc: extract SCM file put helper
> 
> [...]

Applied, thanks!

[1/8] io_uring: shut io_prep_async_work warning
      commit: 8b1df11f97333d6d8647f1c6c0554eb2d9774396
[2/8] io_uring/kbuf: remove extra ->buf_ring null check
      commit: ceac766a5581e4e671ec8e5236b8fdaed8e4c8c9
[3/8] io_uring: add irq lockdep checks
      commit: 8ce4269eeedc5b31f5817f610b42cba8be8fa9de
[4/8] io_uring/rsrc: add lockdep checks
      commit: 786788a8cfe03056e9c7b1c6e418c1db92a0ce80
[5/8] io_uring/rsrc: consolidate node caching
      commit: 528407b1e0ea51260fff2cc8b669c632a65d7a09
[6/8] io_uring/rsrc: zero node's rsrc data on alloc
      commit: 13c223962eac16f161cf9b6355209774c609af28
[7/8] io_uring/rsrc: refactor io_rsrc_node_switch
      commit: 2933ae6eaa05e8db6ad33a3ca12af18d2a25358c
[8/8] io_uring/rsrc: extract SCM file put helper
      commit: d581076b6a85c6f8308a4ba2bdcd82651f5183df

Best regards,
-- 
Jens Axboe



