* [PATCH 00/11] clean up req free and CQ locking
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Patches 1-5 clean up how we free requests.
Patches 7-11 brush up CQ / ->completion_lock locking.
Pavel Begunkov (11):
io_uring: open code io_put_req_find_next
io_uring: remove io_free_req_tw
io_uring: inline io_dismantle_req()
io_uring: move io_clean_op()
io_uring: don't batch task put on reqs free
io_uring: remove IOU_F_TWQ_FORCE_NORMAL
io_uring: kill io_cq_unlock()
io_uring: fix acquire/release annotations
io_uring: inline __io_cq_unlock
io_uring: make io_cq_unlock_post static
io_uring: merge conditional unlock flush helpers
io_uring/io_uring.c | 223 ++++++++++++++++----------------------------
io_uring/io_uring.h | 7 +-
2 files changed, 80 insertions(+), 150 deletions(-)
--
2.40.0
* [PATCH 01/11] io_uring: open code io_put_req_find_next
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
There is only one user of io_put_req_find_next(), and it doesn't make
much sense to keep it as a separate helper. Open code it into its only
caller, io_wq_free_work().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 25 +++++++------------------
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ae4cb3c4e730..b488a03ba009 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1586,22 +1586,6 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
}
}
-/*
- * Drop reference to request, return next in chain (if there is one) if this
- * was the last reference to this request.
- */
-static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
-{
- struct io_kiocb *nxt = NULL;
-
- if (req_ref_put_and_test(req)) {
- if (unlikely(req->flags & IO_REQ_LINK_FLAGS))
- nxt = io_req_find_next(req);
- io_free_req(req);
- }
- return nxt;
-}
-
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
{
/* See comment at the top of this file */
@@ -1954,9 +1938,14 @@ int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts)
struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
{
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+ struct io_kiocb *nxt = NULL;
- req = io_put_req_find_next(req);
- return req ? &req->work : NULL;
+ if (req_ref_put_and_test(req)) {
+ if (req->flags & IO_REQ_LINK_FLAGS)
+ nxt = io_req_find_next(req);
+ io_free_req(req);
+ }
+ return nxt ? &nxt->work : NULL;
}
void io_wq_submit_work(struct io_wq_work *work)
--
2.40.0
* [PATCH 02/11] io_uring: remove io_free_req_tw
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Request completion is a very hot path in general, and there are three
places that can do it: io_free_batch_list(), io_req_complete_post() and
io_free_req_tw().
io_free_req_tw() is used only marginally, so its performance doesn't
matter. Killing it helps to clean up and optimise the remaining two; do
that by replacing it with io_req_task_complete().
There are two things to consider:
1) io_free_req() is called when all refs are put, so we need to reinit
references. The easiest way to do that is to clear REQ_F_REFCOUNT.
2) We also don't need a CQE posted from it, so silence that with
REQ_F_CQE_SKIP.
Signed-off-by: Pavel Begunkov <[email protected]>
---
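Note on point 2: completion paths only post a CQE when REQ_F_CQE_SKIP is
clear, so with the flag set io_req_task_complete() degenerates into pure
freeing. A minimal, simplified sketch of that check (not the exact
kernel code, the real check lives in the completion helpers):

	/* simplified completion step */
	if (!(req->flags & REQ_F_CQE_SKIP))
		io_fill_cqe_req(req->ctx, req);	/* post res/flags to the CQ */
	/* either way the request is then recycled or freed */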
io_uring/io_uring.c | 23 +++++------------------
1 file changed, 5 insertions(+), 18 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b488a03ba009..43805d2621f5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1121,26 +1121,13 @@ static inline void io_dismantle_req(struct io_kiocb *req)
io_put_file(req->file);
}
-static __cold void io_free_req_tw(struct io_kiocb *req, struct io_tw_state *ts)
-{
- struct io_ring_ctx *ctx = req->ctx;
-
- if (req->rsrc_node) {
- io_tw_lock(ctx, ts);
- io_put_rsrc_node(ctx, req->rsrc_node);
- }
- io_dismantle_req(req);
- io_put_task_remote(req->task, 1);
-
- spin_lock(&ctx->completion_lock);
- wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
- ctx->locked_free_nr++;
- spin_unlock(&ctx->completion_lock);
-}
-
__cold void io_free_req(struct io_kiocb *req)
{
- req->io_task_work.func = io_free_req_tw;
+ /* refs were already put, restore them for io_req_task_complete() */
+ req->flags &= ~REQ_F_REFCOUNT;
+ /* we only want to free it, don't post CQEs */
+ req->flags |= REQ_F_CQE_SKIP;
+ req->io_task_work.func = io_req_task_complete;
io_req_task_work_add(req);
}
--
2.40.0
* [PATCH 03/11] io_uring: inline io_dismantle_req()
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
io_dismantle_req() is only used in __io_req_complete_post(); open code
it there.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 43805d2621f5..50fe345bdced 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -146,7 +146,6 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
struct task_struct *task,
bool cancel_all);
-static void io_dismantle_req(struct io_kiocb *req);
static void io_clean_op(struct io_kiocb *req);
static void io_queue_sqe(struct io_kiocb *req);
static void io_move_task_work_from_local(struct io_ring_ctx *ctx);
@@ -991,7 +990,11 @@ static void __io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
}
}
io_put_kbuf_comp(req);
- io_dismantle_req(req);
+ if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
+ io_clean_op(req);
+ if (!(req->flags & REQ_F_FIXED_FILE))
+ io_put_file(req->file);
+
rsrc_node = req->rsrc_node;
/*
* Selected buffer deallocation in io_clean_op() assumes that
@@ -1111,16 +1114,6 @@ __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
return true;
}
-static inline void io_dismantle_req(struct io_kiocb *req)
-{
- unsigned int flags = req->flags;
-
- if (unlikely(flags & IO_REQ_CLEAN_FLAGS))
- io_clean_op(req);
- if (!(flags & REQ_F_FIXED_FILE))
- io_put_file(req->file);
-}
-
__cold void io_free_req(struct io_kiocb *req)
{
/* refs were already put, restore them for io_req_task_complete() */
--
2.40.0
* [PATCH 04/11] io_uring: move io_clean_op()
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Move io_clean_op() up in the source file and remove the forward
declaration, as the function doesn't have tricky dependencies
anymore.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 67 ++++++++++++++++++++++-----------------------
1 file changed, 33 insertions(+), 34 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 50fe345bdced..4d8613996644 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -146,7 +146,6 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
struct task_struct *task,
bool cancel_all);
-static void io_clean_op(struct io_kiocb *req);
static void io_queue_sqe(struct io_kiocb *req);
static void io_move_task_work_from_local(struct io_ring_ctx *ctx);
static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
@@ -367,6 +366,39 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
return false;
}
+static void io_clean_op(struct io_kiocb *req)
+{
+ if (req->flags & REQ_F_BUFFER_SELECTED) {
+ spin_lock(&req->ctx->completion_lock);
+ io_put_kbuf_comp(req);
+ spin_unlock(&req->ctx->completion_lock);
+ }
+
+ if (req->flags & REQ_F_NEED_CLEANUP) {
+ const struct io_cold_def *def = &io_cold_defs[req->opcode];
+
+ if (def->cleanup)
+ def->cleanup(req);
+ }
+ if ((req->flags & REQ_F_POLLED) && req->apoll) {
+ kfree(req->apoll->double_poll);
+ kfree(req->apoll);
+ req->apoll = NULL;
+ }
+ if (req->flags & REQ_F_INFLIGHT) {
+ struct io_uring_task *tctx = req->task->io_uring;
+
+ atomic_dec(&tctx->inflight_tracked);
+ }
+ if (req->flags & REQ_F_CREDS)
+ put_cred(req->creds);
+ if (req->flags & REQ_F_ASYNC_DATA) {
+ kfree(req->async_data);
+ req->async_data = NULL;
+ }
+ req->flags &= ~IO_REQ_CLEAN_FLAGS;
+}
+
static inline void io_req_track_inflight(struct io_kiocb *req)
{
if (!(req->flags & REQ_F_INFLIGHT)) {
@@ -1823,39 +1855,6 @@ static __cold void io_drain_req(struct io_kiocb *req)
spin_unlock(&ctx->completion_lock);
}
-static void io_clean_op(struct io_kiocb *req)
-{
- if (req->flags & REQ_F_BUFFER_SELECTED) {
- spin_lock(&req->ctx->completion_lock);
- io_put_kbuf_comp(req);
- spin_unlock(&req->ctx->completion_lock);
- }
-
- if (req->flags & REQ_F_NEED_CLEANUP) {
- const struct io_cold_def *def = &io_cold_defs[req->opcode];
-
- if (def->cleanup)
- def->cleanup(req);
- }
- if ((req->flags & REQ_F_POLLED) && req->apoll) {
- kfree(req->apoll->double_poll);
- kfree(req->apoll);
- req->apoll = NULL;
- }
- if (req->flags & REQ_F_INFLIGHT) {
- struct io_uring_task *tctx = req->task->io_uring;
-
- atomic_dec(&tctx->inflight_tracked);
- }
- if (req->flags & REQ_F_CREDS)
- put_cred(req->creds);
- if (req->flags & REQ_F_ASYNC_DATA) {
- kfree(req->async_data);
- req->async_data = NULL;
- }
- req->flags &= ~IO_REQ_CLEAN_FLAGS;
-}
-
static bool io_assign_file(struct io_kiocb *req, const struct io_issue_def *def,
unsigned int issue_flags)
{
--
2.40.0
* [PATCH 05/11] io_uring: don't batch task put on reqs free
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We're trying to batch io_put_task() in io_free_batch_list(), but
considering that the hot path is a simple increment, it's almost
certainly faster to just call io_put_task() per request than to do the
task tracking.
We don't need to worry about io_put_task_remote(), as it's only used
for IOPOLL, where polling/waiting is done by a task other than the
submitter.
Signed-off-by: Pavel Begunkov <[email protected]>
---
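Note: the "simple inc" referred to above is the per-task ref cache,
where the owner task putting its own reference is a plain non-atomic
increment and acquiring references is refilled in bulk. A rough sketch
of the scheme (simplified; see io_task_refs_refill() for the real
thing):

	/* owner task dropping its own request ref: bump the cache */
	tctx->cached_refs++;

	/* taking refs at submit time: refill in bulk when dry */
	if (unlikely(tctx->cached_refs <= 0)) {
		unsigned int refill = IO_TCTX_REFS_CACHE_NR - tctx->cached_refs;

		refcount_add(refill, &current->usage);
		percpu_counter_add(&tctx->inflight, refill);
		tctx->cached_refs += refill;
	}
	tctx->cached_refs--;

With the common path being a single increment, cross-request batching
has nothing left to save.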
io_uring/io_uring.c | 32 ++++++++++----------------------
1 file changed, 10 insertions(+), 22 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4d8613996644..3eec5c761d0a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -754,29 +754,29 @@ static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
}
/* can be called by any task */
-static void io_put_task_remote(struct task_struct *task, int nr)
+static void io_put_task_remote(struct task_struct *task)
{
struct io_uring_task *tctx = task->io_uring;
- percpu_counter_sub(&tctx->inflight, nr);
+ percpu_counter_sub(&tctx->inflight, 1);
if (unlikely(atomic_read(&tctx->in_cancel)))
wake_up(&tctx->wait);
- put_task_struct_many(task, nr);
+ put_task_struct(task);
}
/* used by a task to put its own references */
-static void io_put_task_local(struct task_struct *task, int nr)
+static void io_put_task_local(struct task_struct *task)
{
- task->io_uring->cached_refs += nr;
+ task->io_uring->cached_refs++;
}
/* must to be called somewhat shortly after putting a request */
-static inline void io_put_task(struct task_struct *task, int nr)
+static inline void io_put_task(struct task_struct *task)
{
if (likely(task == current))
- io_put_task_local(task, nr);
+ io_put_task_local(task);
else
- io_put_task_remote(task, nr);
+ io_put_task_remote(task);
}
void io_task_refs_refill(struct io_uring_task *tctx)
@@ -1033,7 +1033,7 @@ static void __io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
* we don't hold ->completion_lock. Clean them here to avoid
* deadlocks.
*/
- io_put_task_remote(req->task, 1);
+ io_put_task_remote(req->task);
wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
ctx->locked_free_nr++;
}
@@ -1518,9 +1518,6 @@ void io_queue_next(struct io_kiocb *req)
void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node)
__must_hold(&ctx->uring_lock)
{
- struct task_struct *task = NULL;
- int task_refs = 0;
-
do {
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
@@ -1550,19 +1547,10 @@ void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node)
io_req_put_rsrc_locked(req, ctx);
- if (req->task != task) {
- if (task)
- io_put_task(task, task_refs);
- task = req->task;
- task_refs = 0;
- }
- task_refs++;
+ io_put_task(req->task);
node = req->comp_list.next;
io_req_add_to_cache(req, ctx);
} while (node);
-
- if (task)
- io_put_task(task, task_refs);
}
static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
--
2.40.0
* [PATCH 06/11] io_uring: remove IOU_F_TWQ_FORCE_NORMAL
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Extract a function for non-local task_work_add, io_req_normal_work_add(),
and use it directly from io_move_task_work_from_local(). Now
IOU_F_TWQ_FORCE_NORMAL is unused and can be killed.
As a small positive side effect, we no longer grab task->io_uring up
front in __io_req_task_work_add(); it's only needed by
io_req_normal_work_add(), not by io_req_local_work_add().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 25 ++++++++++++++-----------
io_uring/io_uring.h | 5 +----
2 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3eec5c761d0a..776d1aa73d26 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1317,7 +1317,7 @@ static __cold void io_fallback_tw(struct io_uring_task *tctx)
}
}
-static void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
+static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
{
struct io_ring_ctx *ctx = req->ctx;
unsigned nr_wait, nr_tw, nr_tw_prev;
@@ -1368,19 +1368,11 @@ static void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
wake_up_state(ctx->submitter_task, TASK_INTERRUPTIBLE);
}
-void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
+static void io_req_normal_work_add(struct io_kiocb *req)
{
struct io_uring_task *tctx = req->task->io_uring;
struct io_ring_ctx *ctx = req->ctx;
- if (!(flags & IOU_F_TWQ_FORCE_NORMAL) &&
- (ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
- rcu_read_lock();
- io_req_local_work_add(req, flags);
- rcu_read_unlock();
- return;
- }
-
/* task_work already pending, we're done */
if (!llist_add(&req->io_task_work.node, &tctx->task_list))
return;
@@ -1394,6 +1386,17 @@ void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
io_fallback_tw(tctx);
}
+void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
+{
+ if (req->ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
+ rcu_read_lock();
+ io_req_local_work_add(req, flags);
+ rcu_read_unlock();
+ } else {
+ io_req_normal_work_add(req);
+ }
+}
+
static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
{
struct llist_node *node;
@@ -1404,7 +1407,7 @@ static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
io_task_work.node);
node = node->next;
- __io_req_task_work_add(req, IOU_F_TWQ_FORCE_NORMAL);
+ io_req_normal_work_add(req);
}
}
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 9718897133db..20ba6df49b1f 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -16,9 +16,6 @@
#endif
enum {
- /* don't use deferred task_work */
- IOU_F_TWQ_FORCE_NORMAL = 1,
-
/*
* A hint to not wake right away but delay until there are enough of
* tw's queued to match the number of CQEs the task is waiting for.
@@ -26,7 +23,7 @@ enum {
* Must not be used wirh requests generating more than one CQE.
* It's also ignored unless IORING_SETUP_DEFER_TASKRUN is set.
*/
- IOU_F_TWQ_LAZY_WAKE = 2,
+ IOU_F_TWQ_LAZY_WAKE = 1,
};
enum {
--
2.40.0
* [PATCH 07/11] io_uring: kill io_cq_unlock()
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We're abusing the ->completion_lock helpers. io_cq_unlock() neither
locks conditionally nor does CQE flushing, which means that callers
must have some other reason for taking the lock and should do it
directly. Open code io_cq_unlock() into io_cqring_overflow_kill() and
clean it up.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 776d1aa73d26..2f55abb676c0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -644,12 +644,6 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
spin_lock(&ctx->completion_lock);
}
-static inline void io_cq_unlock(struct io_ring_ctx *ctx)
- __releases(ctx->completion_lock)
-{
- spin_unlock(&ctx->completion_lock);
-}
-
/* keep it inlined for io_submit_flush_completions() */
static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
__releases(ctx->completion_lock)
@@ -694,10 +688,10 @@ static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
struct io_overflow_cqe *ocqe;
LIST_HEAD(list);
- io_cq_lock(ctx);
+ spin_lock(&ctx->completion_lock);
list_splice_init(&ctx->cq_overflow_list, &list);
clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
- io_cq_unlock(ctx);
+ spin_unlock(&ctx->completion_lock);
while (!list_empty(&list)) {
ocqe = list_first_entry(&list, struct io_overflow_cqe, list);
--
2.40.0
* [PATCH 08/11] io_uring: fix acquire/release annotations
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We do conditional locking, so __io_cq_lock() and friends do not always
actually grab/release the lock. Kill the misleading annotations.
Signed-off-by: Pavel Begunkov <[email protected]>
---
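The underlying issue, in short: __acquires()/__releases() tell sparse
that a function unconditionally takes/drops the lock, which a
conditionally locking helper can't honour. A hypothetical example of
the mismatch (not from this code):

	static void maybe_lock(struct foo *f)
		__acquires(f->lock)	/* claims the lock is always taken */
	{
		if (!f->single_owner)
			spin_lock(&f->lock);
		/* sparse: context imbalance on the branch that didn't lock */
	}

Since no annotation can be correct for both branches, dropping them is
the honest option.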
io_uring/io_uring.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2f55abb676c0..8cb0f60d2885 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -626,7 +626,6 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
}
static inline void __io_cq_lock(struct io_ring_ctx *ctx)
- __acquires(ctx->completion_lock)
{
if (!ctx->task_complete)
spin_lock(&ctx->completion_lock);
@@ -646,7 +645,6 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
/* keep it inlined for io_submit_flush_completions() */
static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
- __releases(ctx->completion_lock)
{
io_commit_cqring(ctx);
__io_cq_unlock(ctx);
@@ -655,7 +653,6 @@ static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
}
static void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
- __releases(ctx->completion_lock)
{
io_commit_cqring(ctx);
--
2.40.0
* [PATCH 09/11] io_uring: inline __io_cq_unlock
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
__io_cq_unlock() is not very helpful on its own, and its users should
be calling the flush variants anyway. Open code the function.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 8cb0f60d2885..39d83b631107 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -631,12 +631,6 @@ static inline void __io_cq_lock(struct io_ring_ctx *ctx)
spin_lock(&ctx->completion_lock);
}
-static inline void __io_cq_unlock(struct io_ring_ctx *ctx)
-{
- if (!ctx->task_complete)
- spin_unlock(&ctx->completion_lock);
-}
-
static inline void io_cq_lock(struct io_ring_ctx *ctx)
__acquires(ctx->completion_lock)
{
@@ -647,7 +641,9 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
{
io_commit_cqring(ctx);
- __io_cq_unlock(ctx);
+ if (!ctx->task_complete)
+ spin_unlock(&ctx->completion_lock);
+
io_commit_cqring_flush(ctx);
io_cqring_wake(ctx);
}
@@ -664,7 +660,7 @@ static void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
*/
io_commit_cqring_flush(ctx);
} else {
- __io_cq_unlock(ctx);
+ spin_unlock(&ctx->completion_lock);
io_commit_cqring_flush(ctx);
io_cqring_wake(ctx);
}
--
2.40.0
* [PATCH 10/11] io_uring: make io_cq_unlock_post static
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
io_cq_unlock_post() is used exclusively in io_uring/io_uring.c; mark
it static and don't expose it to other files.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 2 +-
io_uring/io_uring.h | 2 --
2 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 39d83b631107..70fffed83e95 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -666,7 +666,7 @@ static void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
}
}
-void io_cq_unlock_post(struct io_ring_ctx *ctx)
+static void io_cq_unlock_post(struct io_ring_ctx *ctx)
__releases(ctx->completion_lock)
{
io_commit_cqring(ctx);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 20ba6df49b1f..d3606d30cf6f 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -110,8 +110,6 @@ static inline void io_req_task_work_add(struct io_kiocb *req)
#define io_for_each_link(pos, head) \
for (pos = (head); pos; pos = pos->link)
-void io_cq_unlock_post(struct io_ring_ctx *ctx);
-
static inline struct io_uring_cqe *io_get_cqe_overflow(struct io_ring_ctx *ctx,
bool overflow)
{
--
2.40.0
* [PATCH 11/11] io_uring: merge conditional unlock flush helpers
From: Pavel Begunkov @ 2023-06-23 11:23 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
There is no reason not to use __io_cq_unlock_post_flush() for
intermediate aux CQE flushing; everything about ->task_complete applies
there as well, i.e. if it's set, the completing task must be the
submitter. Combine the two helpers, get rid of __io_cq_unlock_post()
and rename the remaining function.
This spot was also taking a couple of percent of CPU in profiles of
max-throughput net benchmarks, due to multishot recv flooding it with
completions.
Signed-off-by: Pavel Begunkov <[email protected]>
---
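After the merge the surviving helper ends up looking roughly like the
below (reconstructed from the hunks that follow; the exact body and
comments may differ):

	static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
	{
		io_commit_cqring(ctx);
		if (ctx->task_complete) {
			/*
			 * ->task_complete means only the submitter can be
			 * waiting for CQEs, and it's the one running here,
			 * so wakeups are futile and there is no spinlock
			 * to drop; flushing is still required.
			 */
			io_commit_cqring_flush(ctx);
		} else {
			spin_unlock(&ctx->completion_lock);
			io_commit_cqring_flush(ctx);
			io_cqring_wake(ctx);
		}
	}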
io_uring/io_uring.c | 13 +------------
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 70fffed83e95..1b53a2ab0a27 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -637,18 +637,7 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
spin_lock(&ctx->completion_lock);
}
-/* keep it inlined for io_submit_flush_completions() */
static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
-{
- io_commit_cqring(ctx);
- if (!ctx->task_complete)
- spin_unlock(&ctx->completion_lock);
-
- io_commit_cqring_flush(ctx);
- io_cqring_wake(ctx);
-}
-
-static void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
{
io_commit_cqring(ctx);
@@ -1568,7 +1557,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
}
}
}
- __io_cq_unlock_post_flush(ctx);
+ __io_cq_unlock_post(ctx);
if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
io_free_batch_list(ctx, state->compl_reqs.first);
--
2.40.0
* Re: [PATCH 00/11] clean up req free and CQ locking
From: Jens Axboe @ 2023-06-23 14:26 UTC
To: Pavel Begunkov, io-uring
On 6/23/23 5:23 AM, Pavel Begunkov wrote:
> Patches 1-5 are cleaning how we free requests.
> Patches 7-11 brush CQ / ->completion_lock locking
>
> Pavel Begunkov (11):
> io_uring: open code io_put_req_find_next
> io_uring: remove io_free_req_tw
> io_uring: inline io_dismantle_req()
> io_uring: move io_clean_op()
> io_uring: don't batch task put on reqs free
> io_uring: remove IOU_F_TWQ_FORCE_NORMAL
> io_uring: kill io_cq_unlock()
> io_uring: fix acquire/release annotations
> io_uring: inline __io_cq_unlock
> io_uring: make io_cq_unlock_post static
> io_uring: merge conditional unlock flush helpers
>
> io_uring/io_uring.c | 223 ++++++++++++++++----------------------------
> io_uring/io_uring.h | 7 +-
> 2 files changed, 80 insertions(+), 150 deletions(-)
Was going to say that it's a bit late for 6.5, but looking over the
patches, it's pretty straightforward and mostly just good cleanups and
code reduction. I'll queue it up for 6.5, thanks.
--
Jens Axboe
* Re: [PATCH 00/11] clean up req free and CQ locking
From: Jens Axboe @ 2023-06-23 14:27 UTC
To: io-uring, Pavel Begunkov
On Fri, 23 Jun 2023 12:23:20 +0100, Pavel Begunkov wrote:
> Patches 1-5 are cleaning how we free requests.
> Patches 7-11 brush CQ / ->completion_lock locking
>
> Pavel Begunkov (11):
> io_uring: open code io_put_req_find_next
> io_uring: remove io_free_req_tw
> io_uring: inline io_dismantle_req()
> io_uring: move io_clean_op()
> io_uring: don't batch task put on reqs free
> io_uring: remove IOU_F_TWQ_FORCE_NORMAL
> io_uring: kill io_cq_unlock()
> io_uring: fix acquire/release annotations
> io_uring: inline __io_cq_unlock
> io_uring: make io_cq_unlock_post static
> io_uring: merge conditional unlock flush helpers
>
> [...]
Applied, thanks!
[01/11] io_uring: open code io_put_req_find_next
commit: 247f97a5f19b642eba5f5c1cf95fc3169326b3fb
[02/11] io_uring: remove io_free_req_tw
commit: 6ec9afc7f4cba58ab740c59d4c964d9422e2ea82
[03/11] io_uring: inline io_dismantle_req()
commit: 3b7a612fd0dbd321e15a308b8ac1f8bbf81432bd
[04/11] io_uring: move io_clean_op()
commit: 5a754dea27fb91a418f7429e24479e4184dee2e3
[05/11] io_uring: don't batch task put on reqs free
commit: 2fdd6fb5ff958a0f6b403e3f3ffd645b60b2a2b2
[06/11] io_uring: remove IOU_F_TWQ_FORCE_NORMAL
commit: 91c7884ac9a92ffbf78af7fc89603daf24f448a9
[07/11] io_uring: kill io_cq_unlock()
commit: f432b76bcc93f36edb3d371f7b8d7881261dd6e7
[08/11] io_uring: fix acquire/release annotations
commit: 55b6a69fed5df6f88ef0b2ace562b422162beb61
[09/11] io_uring: inline __io_cq_unlock
commit: ff12617728fa5c7fb5325e164503ca4e936b80bd
[10/11] io_uring: make io_cq_unlock_post static
commit: 0fdb9a196c6728b51e0e7a4f6fa292d9fd5793de
[11/11] io_uring: merge conditional unlock flush helpers
commit: c98c81a4ac37b651be7eb9d16f562fc4acc5f867
Best regards,
--
Jens Axboe