From: Pavel Begunkov <[email protected]>
To: [email protected]
Cc: Jens Axboe <[email protected]>, [email protected]
Subject: [PATCH for-next 10/25] io_uring: kill REQ_F_COMPLETE_INLINE
Date: Tue, 14 Jun 2022 13:29:48 +0100
Message-ID: <f278bf835b5a327e96bb48bb2924e88cfcd27618.1655209709.git.asml.silence@gmail.com>
In-Reply-To: <[email protected]>

REQ_F_COMPLETE_INLINE is only needed to delay queueing onto the
completion list until io_queue_sqe(): __io_req_complete() is inlined
and we don't want to bloat the kernel with the queueing code.
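
For illustration only, a rough sketch of the old path (drawn from the
hunks removed below): __io_req_complete() merely marks the request,
and io_queue_sqe() does the actual queueing afterwards.

	/* old: __io_req_complete() stays small, it only sets a flag */
	static inline void io_req_complete_state(struct io_kiocb *req)
	{
		req->flags |= REQ_F_COMPLETE_INLINE;
	}

	/* old: io_queue_sqe() later checks the flag and queues the req */
	if (req->flags & REQ_F_COMPLETE_INLINE) {
		io_req_add_compl_list(req);
		return;
	}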

Now that we complete requests in a more centralised fashion in
io_issue_sqe(), we can get rid of the flag and queue onto the list
directly.
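
Concretely, a simplified sketch of what the io_issue_sqe() hunk below
ends up doing once the flag is gone:

	if (ret == IOU_OK) {
		if (issue_flags & IO_URING_F_COMPLETE_DEFER)
			io_req_add_compl_list(req);	/* batch onto the completion list */
		else
			io_req_complete_post(req);	/* complete and post the CQE now */
	}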

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c       | 20 ++++++++------------
 io_uring/io_uring.h       |  5 -----
 io_uring/io_uring_types.h |  3 ---
 3 files changed, 8 insertions(+), 20 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5156844ca2bb..6c48d0c6dcd5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1278,17 +1278,14 @@ static void io_req_complete_post32(struct io_kiocb *req, u64 extra1, u64 extra2)
 
 inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
 {
-	if (issue_flags & IO_URING_F_COMPLETE_DEFER)
-		io_req_complete_state(req);
-	else
-		io_req_complete_post(req);
+	io_req_complete_post(req);
 }
 
 void __io_req_complete32(struct io_kiocb *req, unsigned int issue_flags,
 			 u64 extra1, u64 extra2)
 {
 	if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
-		io_req_complete_state(req);
+		io_req_add_compl_list(req);
 		req->extra1 = extra1;
 		req->extra2 = extra2;
 	} else {
@@ -2132,9 +2129,12 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 	if (creds)
 		revert_creds(creds);
 
-	if (ret == IOU_OK)
-		__io_req_complete(req, issue_flags);
-	else if (ret != IOU_ISSUE_SKIP_COMPLETE)
+	if (ret == IOU_OK) {
+		if (issue_flags & IO_URING_F_COMPLETE_DEFER)
+			io_req_add_compl_list(req);
+		else
+			io_req_complete_post(req);
+	} else if (ret != IOU_ISSUE_SKIP_COMPLETE)
 		return ret;
 
 	/* If the op doesn't have a file, we're not polling for it */
@@ -2299,10 +2299,6 @@ static inline void io_queue_sqe(struct io_kiocb *req)
 
 	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
 
-	if (req->flags & REQ_F_COMPLETE_INLINE) {
-		io_req_add_compl_list(req);
-		return;
-	}
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
 	 * doesn't support non-blocking read/write attempts
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 26b669746d61..2141519e995a 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -193,11 +193,6 @@ static inline bool io_run_task_work(void)
 	return false;
 }
 
-static inline void io_req_complete_state(struct io_kiocb *req)
-{
-	req->flags |= REQ_F_COMPLETE_INLINE;
-}
-
 static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
 {
 	if (!*locked) {
diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
index 25e07c3f7b2a..3cf06f4a4d2e 100644
--- a/io_uring/io_uring_types.h
+++ b/io_uring/io_uring_types.h
@@ -299,7 +299,6 @@ enum {
 	REQ_F_POLLED_BIT,
 	REQ_F_BUFFER_SELECTED_BIT,
 	REQ_F_BUFFER_RING_BIT,
-	REQ_F_COMPLETE_INLINE_BIT,
 	REQ_F_REISSUE_BIT,
 	REQ_F_CREDS_BIT,
 	REQ_F_REFCOUNT_BIT,
@@ -353,8 +352,6 @@ enum {
 	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
 	/* buffer selected from ring, needs commit */
 	REQ_F_BUFFER_RING	= BIT(REQ_F_BUFFER_RING_BIT),
-	/* completion is deferred through io_comp_state */
-	REQ_F_COMPLETE_INLINE	= BIT(REQ_F_COMPLETE_INLINE_BIT),
 	/* caller should reissue async */
 	REQ_F_REISSUE		= BIT(REQ_F_REISSUE_BIT),
 	/* supports async reads/writes */
-- 
2.36.1



Thread overview:
2022-06-14 12:29 [PATCH for-next 00/25] 5.20 cleanups and poll optimisations Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 01/25] io_uring: make reg buf init consistent Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 02/25] io_uring: move defer_list to slow data Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 03/25] io_uring: better caching for ctx timeout fields Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 04/25] io_uring: refactor ctx slow data placement Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 05/25] io_uring: move cancel_seq out of io-wq Pavel Begunkov
2022-06-14 12:52   ` Jens Axboe
2022-06-14 13:01     ` Pavel Begunkov
2022-06-14 13:10       ` Jens Axboe
2022-06-14 12:29 ` [PATCH for-next 06/25] io_uring: move small helpers to headers Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 07/25] io_uring: inline ->registered_rings Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 08/25] io_uring: don't set REQ_F_COMPLETE_INLINE in tw Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 09/25] io_uring: never defer-complete multi-apoll Pavel Begunkov
2022-06-14 12:29 ` Pavel Begunkov [this message]
2022-06-14 12:29 ` [PATCH for-next 11/25] io_uring: refactor io_req_task_complete() Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 12/25] io_uring: don't inline io_put_kbuf Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 13/25] io_uring: remove check_cq checking from hot paths Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 14/25] io_uring: poll: remove unnecessary req->ref set Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 15/25] io_uring: switch cancel_hash to use per entry spinlock Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 16/25] io_uring: pass poll_find lock back Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 17/25] io_uring: clean up io_try_cancel Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 18/25] io_uring: limit number hash buckets Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 19/25] io_uring: clean up io_ring_ctx_alloc Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 20/25] io_uring: use state completion infra for poll reqs Pavel Begunkov
2022-06-14 12:29 ` [PATCH for-next 21/25] io_uring: add IORING_SETUP_SINGLE_ISSUER Pavel Begunkov
2022-06-14 12:56   ` Pavel Begunkov
2022-06-14 12:30 ` [PATCH for-next 22/25] io_uring: pass hash table into poll_find Pavel Begunkov
2022-06-14 12:30 ` [PATCH for-next 23/25] io_uring: introduce a struct for hash table Pavel Begunkov
2022-06-14 12:30 ` [PATCH for-next 24/25] io_uring: propagate locking state to poll cancel Pavel Begunkov
2022-06-14 12:30 ` [PATCH for-next 25/25] io_uring: mutex locked poll hashing Pavel Begunkov
