From: Pavel Begunkov <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: [PATCH 12/23] io_uring: optimise batch completion
Date: Fri, 24 Sep 2021 17:31:50 +0100
Message-ID: <da1905bfd0c8edc751761c477d6277cf0b549fe5.1632500264.git.asml.silence@gmail.com>
In-Reply-To: <[email protected]>
First, convert the rest of the iopoll bits to singly linked lists, and
replace the per-request list_add_tail() with splicing off a part of the
slist.
With that, use io_free_batch_list() to put/free requests. The main
advantage is that io_free_batch_list() is now the only user of struct
req_batch and friends, so they can be inlined. The main overhead there
was the per-request call to the non-inlined io_req_free_batch(), which
is expensive enough.
Signed-off-by: Pavel Begunkov <[email protected]>
---
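For reference, io_free_batch_list() is the helper added earlier in this
series (patch 10/23) that this patch makes the sole consumer of the
req_batch machinery. A rough sketch of it follows; it is an
approximation of that earlier patch rather than a verbatim quote, and
field/helper names such as comp_list and io_req_free_batch() are taken
from the surrounding series:

/*
 * Walk a spliced completion list and batch-free every request whose
 * reference count drops to zero. Approximation only; see patch 10/23
 * of this series for the real thing.
 */
static void io_free_batch_list(struct io_ring_ctx *ctx,
			       struct io_wq_work_list *list)
	__must_hold(&ctx->uring_lock)
{
	struct io_wq_work_node *node = list->first;
	struct req_batch rb;

	io_init_req_batch(&rb);
	do {
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    comp_list);

		node = req->comp_list.next;
		/* drop the iopoll reference; recycle/free once it hits zero */
		if (req_ref_put_and_test(req))
			io_req_free_batch(&rb, req, &ctx->submit_state);
	} while (node);
	io_req_free_batch_finish(ctx, &rb);
}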
fs/io_uring.c | 38 ++++++++++----------------------------
1 file changed, 10 insertions(+), 28 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e5d42ca45bce..70cd32fb4c70 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2423,32 +2423,10 @@ static inline bool io_run_task_work(void)
return false;
}
-/*
- * Find and free completed poll iocbs
- */
-static void io_iopoll_complete(struct io_ring_ctx *ctx, struct list_head *done)
-{
- struct req_batch rb;
- struct io_kiocb *req;
-
- io_init_req_batch(&rb);
- while (!list_empty(done)) {
- req = list_first_entry(done, struct io_kiocb, inflight_entry);
- list_del(&req->inflight_entry);
-
- if (req_ref_put_and_test(req))
- io_req_free_batch(&rb, req, &ctx->submit_state);
- }
-
- io_commit_cqring(ctx);
- io_cqring_ev_posted_iopoll(ctx);
- io_req_free_batch_finish(ctx, &rb);
-}
-
static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
{
struct io_wq_work_node *pos, *start, *prev;
- LIST_HEAD(done);
+ struct io_wq_work_list list;
int nr_events = 0;
bool spin;
@@ -2493,15 +2471,19 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
if (!smp_load_acquire(&req->iopoll_completed))
break;
__io_cqring_fill_event(ctx, req->user_data, req->result,
- io_put_rw_kbuf(req));
-
- list_add_tail(&req->inflight_entry, &done);
+ io_put_rw_kbuf(req));
nr_events++;
}
+ if (unlikely(!nr_events))
+ return 0;
+
+ io_commit_cqring(ctx);
+ io_cqring_ev_posted_iopoll(ctx);
+ list.first = start ? start->next : ctx->iopoll_list.first;
+ list.last = prev;
wq_list_cut(&ctx->iopoll_list, prev, start);
- if (nr_events)
- io_iopoll_complete(ctx, &done);
+ io_free_batch_list(ctx, &list);
return nr_events;
}
--
2.33.0
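The splice in io_do_iopoll() above hands the completed prefix of
->iopoll_list to io_free_batch_list() in one operation: list.first is
the first completed request (start->next, or the list head when start
is NULL), list.last is the last completed one (prev), and wq_list_cut()
then detaches exactly that segment from ->iopoll_list. For reference, a
sketch of wq_list_cut() as it looks in fs/io-wq.h around this kernel
version (quoted from memory, so treat it as an approximation):

/*
 * Cut the segment ending at @last out of @list; @prev is the node in
 * front of the segment, or NULL when the segment starts at the head.
 */
static inline void wq_list_cut(struct io_wq_work_list *list,
			       struct io_wq_work_node *last,
			       struct io_wq_work_node *prev)
{
	/* first in the list, if prev == NULL */
	if (!prev)
		WRITE_ONCE(list->first, last->next);
	else
		prev->next = last->next;

	if (last == list->last)
		list->last = prev;
	last->next = NULL;
}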
Thread overview: 24+ messages
2021-09-24 16:31 [RFC][PATCHSET 00/23] rework/optimise submission+completion paths Pavel Begunkov
2021-09-24 16:31 ` [PATCH 01/23] io_uring: mark having different creds unlikely Pavel Begunkov
2021-09-24 16:31 ` [PATCH 02/23] io_uring: force_nonspin Pavel Begunkov
2021-09-24 16:31 ` [PATCH 03/23] io_uring: make io_do_iopoll return number of reqs Pavel Begunkov
2021-09-24 16:31 ` [PATCH 04/23] io_uring: use slist for completion batching Pavel Begunkov
2021-09-24 16:31 ` [PATCH 05/23] io_uring: remove allocation cache array Pavel Begunkov
2021-09-24 16:31 ` [PATCH 06/23] io-wq: add io_wq_work_node based stack Pavel Begunkov
2021-09-24 16:31 ` [PATCH 07/23] io_uring: replace list with stack for req caches Pavel Begunkov
2021-09-24 16:31 ` [PATCH 08/23] io_uring: split iopoll loop Pavel Begunkov
2021-09-24 16:31 ` [PATCH 09/23] io_uring: use single linked list for iopoll Pavel Begunkov
2021-09-24 16:31 ` [PATCH 10/23] io_uring: add a helper for batch free Pavel Begunkov
2021-09-24 16:31 ` [PATCH 11/23] io_uring: convert iopoll_completed to store_release Pavel Begunkov
2021-09-24 16:31 ` [PATCH 12/23] io_uring: optimise batch completion Pavel Begunkov [this message]
2021-09-24 16:31 ` [PATCH 13/23] io_uring: inline completion batching helpers Pavel Begunkov
2021-09-24 16:31 ` [PATCH 14/23] io_uring: don't pass tail into io_free_batch_list Pavel Begunkov
2021-09-24 16:31 ` [PATCH 15/23] io_uring: don't pass state to io_submit_state_end Pavel Begunkov
2021-09-24 16:31 ` [PATCH 16/23] io_uring: deduplicate io_queue_sqe() call sites Pavel Begunkov
2021-09-24 16:31 ` [PATCH 17/23] io_uring: remove drain_active check from hot path Pavel Begunkov
2021-09-24 16:31 ` [PATCH 18/23] io_uring: split slow path from io_queue_sqe Pavel Begunkov
2021-09-24 16:31 ` [PATCH 19/23] io_uring: inline hot path of __io_queue_sqe() Pavel Begunkov
2021-09-24 16:31 ` [PATCH 20/23] io_uring: reshuffle queue_sqe completion handling Pavel Begunkov
2021-09-24 16:31 ` [PATCH 21/23] io_uring: restructure submit sqes to_submit checks Pavel Begunkov
2021-09-24 16:32 ` [PATCH 22/23] io_uring: kill off ->inflight_entry field Pavel Begunkov
2021-09-24 16:32 ` [PATCH 23/23] io_uring: comment why inline complete calls io_clean_op() Pavel Begunkov