public inbox for [email protected]
* [PATCH for-next 0/4] complete cleanups
@ 2021-02-11 18:28 Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 1/4] io_uring: clean up io_req_free_batch_finish() Pavel Begunkov
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-02-11 18:28 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Random cleanups for iopoll and generic completions, just to shed some
lines and overhead.

Pavel Begunkov (4):
  io_uring: clean up io_req_free_batch_finish()
  io_uring: simplify iopoll reissuing
  io_uring: move res check out of io_rw_reissue()
  io_uring: inline io_complete_rw_common()

 fs/io_uring.c | 67 +++++++++++++++------------------------------------
 1 file changed, 20 insertions(+), 47 deletions(-)

-- 
2.24.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH 1/4] io_uring: clean up io_req_free_batch_finish()
  2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
@ 2021-02-11 18:28 ` Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 2/4] io_uring: simplify iopoll reissuing Pavel Begunkov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-02-11 18:28 UTC (permalink / raw)
  To: Jens Axboe, io-uring

io_req_free_batch_finish() is final and does not permit struct req_batch
to be reused without re-init. For consistency, don't clear ->task there.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7a1e4ecf5f94..0d612e9f7dda 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2394,10 +2394,8 @@ static inline void io_init_req_batch(struct req_batch *rb)
 static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
 				     struct req_batch *rb)
 {
-	if (rb->task) {
+	if (rb->task)
 		io_put_task(rb->task, rb->task_refs);
-		rb->task = NULL;
-	}
 	if (rb->ctx_refs)
 		percpu_ref_put_many(&ctx->refs, rb->ctx_refs);
 }
-- 
2.24.0



* [PATCH 2/4] io_uring: simplify iopoll reissuing
  2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 1/4] io_uring: clean up io_req_free_batch_finish() Pavel Begunkov
@ 2021-02-11 18:28 ` Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 3/4] io_uring: move res check out of io_rw_reissue() Pavel Begunkov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-02-11 18:28 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Don't stash -EAGAIN'ed iopoll requests into a list to reissue them
later, reissue them eagerly instead. That removes the overhead of
keeping and checking the list, and on reissue failure allows these
requests to be completed through the normal iopoll completion path.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 26 +++++---------------------
 1 file changed, 5 insertions(+), 21 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0d612e9f7dda..3df27ce5938c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1026,8 +1026,7 @@ static struct fixed_rsrc_ref_node *alloc_fixed_rsrc_ref_node(
 static void init_fixed_file_ref_node(struct io_ring_ctx *ctx,
 				     struct fixed_rsrc_ref_node *ref_node);
 
-static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
-			     unsigned int issue_flags);
+static bool io_rw_reissue(struct io_kiocb *req, long res);
 static void io_cqring_fill_event(struct io_kiocb *req, long res);
 static void io_put_req(struct io_kiocb *req);
 static void io_put_req_deferred(struct io_kiocb *req, int nr);
@@ -2555,17 +2554,6 @@ static inline bool io_run_task_work(void)
 	return false;
 }
 
-static void io_iopoll_queue(struct list_head *again)
-{
-	struct io_kiocb *req;
-
-	do {
-		req = list_first_entry(again, struct io_kiocb, inflight_entry);
-		list_del(&req->inflight_entry);
-		__io_complete_rw(req, -EAGAIN, 0, 0);
-	} while (!list_empty(again));
-}
-
 /*
  * Find and free completed poll iocbs
  */
@@ -2574,7 +2562,6 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 {
 	struct req_batch rb;
 	struct io_kiocb *req;
-	LIST_HEAD(again);
 
 	/* order with ->result store in io_complete_rw_iopoll() */
 	smp_rmb();
@@ -2584,13 +2571,13 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		int cflags = 0;
 
 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
+		list_del(&req->inflight_entry);
+
 		if (READ_ONCE(req->result) == -EAGAIN) {
-			req->result = 0;
 			req->iopoll_completed = 0;
-			list_move_tail(&req->inflight_entry, &again);
-			continue;
+			if (io_rw_reissue(req, -EAGAIN))
+				continue;
 		}
-		list_del(&req->inflight_entry);
 
 		if (req->flags & REQ_F_BUFFER_SELECTED)
 			cflags = io_put_rw_kbuf(req);
@@ -2605,9 +2592,6 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 	io_commit_cqring(ctx);
 	io_cqring_ev_posted_iopoll(ctx);
 	io_req_free_batch_finish(ctx, &rb);
-
-	if (!list_empty(&again))
-		io_iopoll_queue(&again);
 }
 
 static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
-- 
2.24.0



* [PATCH 3/4] io_uring: move res check out of io_rw_reissue()
  2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 1/4] io_uring: clean up io_req_free_batch_finish() Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 2/4] io_uring: simplify iopoll reissuing Pavel Begunkov
@ 2021-02-11 18:28 ` Pavel Begunkov
  2021-02-11 18:28 ` [PATCH 4/4] io_uring: inline io_complete_rw_common() Pavel Begunkov
  2021-02-11 19:55 ` [PATCH for-next 0/4] complete cleanups Jens Axboe
  4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-02-11 18:28 UTC (permalink / raw)
  To: Jens Axboe, io-uring

We pass the return code into io_rw_reissue() for it to check for
-EAGAIN. That's not the cleanest approach and may prevent inlining of
the non-EAGAIN fast path, so do the check at the call sites instead.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3df27ce5938c..81f674f7a97a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1026,7 +1026,7 @@ static struct fixed_rsrc_ref_node *alloc_fixed_rsrc_ref_node(
 static void init_fixed_file_ref_node(struct io_ring_ctx *ctx,
 				     struct fixed_rsrc_ref_node *ref_node);
 
-static bool io_rw_reissue(struct io_kiocb *req, long res);
+static bool io_rw_reissue(struct io_kiocb *req);
 static void io_cqring_fill_event(struct io_kiocb *req, long res);
 static void io_put_req(struct io_kiocb *req);
 static void io_put_req_deferred(struct io_kiocb *req, int nr);
@@ -2575,7 +2575,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 
 		if (READ_ONCE(req->result) == -EAGAIN) {
 			req->iopoll_completed = 0;
-			if (io_rw_reissue(req, -EAGAIN))
+			if (io_rw_reissue(req))
 				continue;
 		}
 
@@ -2809,15 +2809,12 @@ static bool io_resubmit_prep(struct io_kiocb *req)
 }
 #endif
 
-static bool io_rw_reissue(struct io_kiocb *req, long res)
+static bool io_rw_reissue(struct io_kiocb *req)
 {
 #ifdef CONFIG_BLOCK
-	umode_t mode;
+	umode_t mode = file_inode(req->file)->i_mode;
 	int ret;
 
-	if (res != -EAGAIN && res != -EOPNOTSUPP)
-		return false;
-	mode = file_inode(req->file)->i_mode;
 	if (!S_ISBLK(mode) && !S_ISREG(mode))
 		return false;
 	if ((req->flags & REQ_F_NOWAIT) || io_wq_current_is_worker())
@@ -2840,8 +2837,10 @@ static bool io_rw_reissue(struct io_kiocb *req, long res)
 static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
 			     unsigned int issue_flags)
 {
-	if (!io_rw_reissue(req, res))
-		io_complete_rw_common(&req->rw.kiocb, res, issue_flags);
+	if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_reissue(req))
+		return;
+
+	io_complete_rw_common(&req->rw.kiocb, res, issue_flags);
 }
 
 static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
-- 
2.24.0



* [PATCH 4/4] io_uring: inline io_complete_rw_common()
  2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
                   ` (2 preceding siblings ...)
  2021-02-11 18:28 ` [PATCH 3/4] io_uring: move res check out of io_rw_reissue() Pavel Begunkov
@ 2021-02-11 18:28 ` Pavel Begunkov
  2021-02-11 19:55 ` [PATCH for-next 0/4] complete cleanups Jens Axboe
  4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-02-11 18:28 UTC (permalink / raw)
  To: Jens Axboe, io-uring

__io_complete_rw() passes the request's kiocb to
io_complete_rw_common() only for it to be immediately container_of()'ed
back into the request. The latter's name also doesn't say much about
what it does, so just inline it into its only user.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 81f674f7a97a..c8b2b4c257c2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2758,22 +2758,6 @@ static void kiocb_end_write(struct io_kiocb *req)
 	file_end_write(req->file);
 }
 
-static void io_complete_rw_common(struct kiocb *kiocb, long res,
-				  unsigned int issue_flags)
-{
-	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
-	int cflags = 0;
-
-	if (kiocb->ki_flags & IOCB_WRITE)
-		kiocb_end_write(req);
-
-	if (res != req->result)
-		req_set_fail_links(req);
-	if (req->flags & REQ_F_BUFFER_SELECTED)
-		cflags = io_put_rw_kbuf(req);
-	__io_req_complete(req, issue_flags, res, cflags);
-}
-
 #ifdef CONFIG_BLOCK
 static bool io_resubmit_prep(struct io_kiocb *req)
 {
@@ -2837,10 +2821,18 @@ static bool io_rw_reissue(struct io_kiocb *req)
 static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
 			     unsigned int issue_flags)
 {
+	int cflags = 0;
+
 	if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_reissue(req))
 		return;
+	if (res != req->result)
+		req_set_fail_links(req);
 
-	io_complete_rw_common(&req->rw.kiocb, res, issue_flags);
+	if (req->rw.kiocb.ki_flags & IOCB_WRITE)
+		kiocb_end_write(req);
+	if (req->flags & REQ_F_BUFFER_SELECTED)
+		cflags = io_put_rw_kbuf(req);
+	__io_req_complete(req, issue_flags, res, cflags);
 }
 
 static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
-- 
2.24.0



* Re: [PATCH for-next 0/4] complete cleanups
  2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
                   ` (3 preceding siblings ...)
  2021-02-11 18:28 ` [PATCH 4/4] io_uring: inline io_complete_rw_common() Pavel Begunkov
@ 2021-02-11 19:55 ` Jens Axboe
  4 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2021-02-11 19:55 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 2/11/21 11:28 AM, Pavel Begunkov wrote:
> Random cleanups for iopoll and generic completions, just for shedding
> some lines and overhead.
> 
> Pavel Begunkov (4):
>   io_uring: clean up io_req_free_batch_finish()
>   io_uring: simplify iopoll reissuing
>   io_uring: move res check out of io_rw_reissue()
>   io_uring: inline io_complete_rw_common()
> 
>  fs/io_uring.c | 67 +++++++++++++++------------------------------------
>  1 file changed, 20 insertions(+), 47 deletions(-)

Good cleanups, thanks.

-- 
Jens Axboe



end of thread, other threads:[~2021-02-11 19:57 UTC | newest]

Thread overview: 6 messages
-- links below jump to the message on this page --
2021-02-11 18:28 [PATCH for-next 0/4] complete cleanups Pavel Begunkov
2021-02-11 18:28 ` [PATCH 1/4] io_uring: clean up io_req_free_batch_finish() Pavel Begunkov
2021-02-11 18:28 ` [PATCH 2/4] io_uring: simplify iopoll reissuing Pavel Begunkov
2021-02-11 18:28 ` [PATCH 3/4] io_uring: move res check out of io_rw_reissue() Pavel Begunkov
2021-02-11 18:28 ` [PATCH 4/4] io_uring: inline io_complete_rw_common() Pavel Begunkov
2021-02-11 19:55 ` [PATCH for-next 0/4] complete cleanups Jens Axboe
