From: Pavel Begunkov <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: [PATCH 17/23] io_uring: remove drain_active check from hot path
Date: Fri, 24 Sep 2021 17:31:55 +0100
Message-ID: <c09c5cdbb08bec098271148ab46ef8562d6b0181.1632500264.git.asml.silence@gmail.com>
In-Reply-To: <[email protected]>

Checking req->ctx->drain_active in the hot submission path is a bit too
expensive, partially because it takes two pointer dereferences. Do a
trick: if we see it set in io_init_req(), also set REQ_F_FORCE_ASYNC on
the request, so it automatically goes down the slower path where we can
catch and drain it. This is nearly free to do in io_init_req() because
the ->restricted check is already there and ->drain_active lives in the
same byte of the same bitmask.

Signed-off-by: Pavel Begunkov <[email protected]>
---
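Note: to illustrate the idea outside the kernel sources, here is a
minimal userspace sketch of the pattern (simplified, assumed names such
as struct ctx, struct req, init_req() and queue_req(); this is not the
io_uring code itself). The rarely-true shared-context condition is paid
for once at request init and folded into the request's own flag word,
so the hot queueing path only tests flags it already has in hand.

#include <stdbool.h>
#include <stdio.h>

enum {
	REQ_F_FORCE_ASYNC = 1u << 0,	/* must take the slow submission path */
	REQ_F_FAIL        = 1u << 1,	/* already failed, complete with error */
};

struct ctx {
	bool drain_active;		/* shared state, rarely set */
};

struct req {
	struct ctx *ctx;
	unsigned int flags;
};

/* init time: pay for the shared-state check once, record it in req->flags */
static void init_req(struct req *req, bool wants_drain)
{
	if (wants_drain)
		req->ctx->drain_active = true;
	if (req->ctx->drain_active)
		req->flags |= REQ_F_FORCE_ASYNC;
}

/* hot path: a single test on flags that are already at hand */
static void queue_req(struct req *req)
{
	if (!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
		puts("fast path: issue inline");
	else
		puts("slow path: drain/fail/async handling");
}

int main(void)
{
	struct ctx ctx = { .drain_active = false };
	struct req a = { .ctx = &ctx, .flags = 0 };
	struct req b = { .ctx = &ctx, .flags = 0 };

	init_req(&a, false);	/* no drain requested */
	queue_req(&a);		/* fast path */
	init_req(&b, true);	/* IOSQE_IO_DRAIN-like request */
	queue_req(&b);		/* slow path */
	return 0;
}

The trade-off is the same as in the patch: the drain check still
happens, but only on the already-slow path, and the per-request fast
path no longer touches ctx at all.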
fs/io_uring.c | 53 ++++++++++++++++++++++++++++-----------------------
1 file changed, 29 insertions(+), 24 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5cffbfc8a61f..c1c967088252 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6430,23 +6430,15 @@ static bool io_drain_req(struct io_kiocb *req)
int ret;
u32 seq;
- if (req->flags & REQ_F_FAIL) {
- io_req_complete_fail_submit(req);
- return true;
- }
-
- /*
- * If we need to drain a request in the middle of a link, drain the
- * head request and the next request/link after the current link.
- * Considering sequential execution of links, IOSQE_IO_DRAIN will be
- * maintained for every request of our link.
- */
- if (ctx->drain_next) {
- req->flags |= REQ_F_IO_DRAIN;
- ctx->drain_next = false;
- }
/* not interested in head, start from the first linked */
io_for_each_link(pos, req->link) {
+ /*
+ * If we need to drain a request in the middle of a link, drain
+ * the head request and the next request/link after the current
+ * link. Considering sequential execution of links,
+ * IOSQE_IO_DRAIN will be maintained for every request of our
+ * link.
+ */
if (pos->flags & REQ_F_IO_DRAIN) {
ctx->drain_next = true;
req->flags |= REQ_F_IO_DRAIN;
@@ -6938,13 +6930,12 @@ static void __io_queue_sqe(struct io_kiocb *req)
static inline void io_queue_sqe(struct io_kiocb *req)
__must_hold(&req->ctx->uring_lock)
{
- if (unlikely(req->ctx->drain_active) && io_drain_req(req))
- return;
-
if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
__io_queue_sqe(req);
} else if (req->flags & REQ_F_FAIL) {
io_req_complete_fail_submit(req);
+ } else if (unlikely(req->ctx->drain_active) && io_drain_req(req)) {
+ return;
} else {
int ret = io_req_prep_async(req);
@@ -6964,9 +6955,6 @@ static inline bool io_check_restriction(struct io_ring_ctx *ctx,
struct io_kiocb *req,
unsigned int sqe_flags)
{
- if (likely(!ctx->restricted))
- return true;
-
if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
return false;
@@ -7007,11 +6995,28 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
!io_op_defs[req->opcode].buffer_select)
return -EOPNOTSUPP;
- if (sqe_flags & IOSQE_IO_DRAIN)
+ if (sqe_flags & IOSQE_IO_DRAIN) {
+ struct io_submit_link *link = &ctx->submit_state.link;
+
ctx->drain_active = true;
+ req->flags |= REQ_F_FORCE_ASYNC;
+ if (link->head)
+ link->head->flags |= IOSQE_IO_DRAIN | REQ_F_FORCE_ASYNC;
+ }
+ }
+ if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
+ if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
+ return -EACCES;
+ /* knock it to the slow queue path, will be drained there */
+ if (ctx->drain_active)
+ req->flags |= REQ_F_FORCE_ASYNC;
+ /* if there is no link, we're at "next" request and need to drain */
+ if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
+ ctx->drain_next = false;
+ ctx->drain_active = true;
+ req->flags |= REQ_F_FORCE_ASYNC | IOSQE_IO_DRAIN;
+ }
}
- if (!io_check_restriction(ctx, req, sqe_flags))
- return -EACCES;
personality = READ_ONCE(sqe->personality);
if (personality) {
--
2.33.0