public inbox for io-uring@vger.kernel.org
* [PATCHSET v3 0/5] Allow non-atomic allocs for overflows
@ 2025-05-16 20:05 Jens Axboe
  2025-05-16 20:05 ` [PATCH 1/5] io_uring: open code io_req_cqe_overflow() Jens Axboe
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander

Hi,

Sorry for spamming revisions of this patchset today, but I do believe
this is close to final... Last one today, promise. At least it should
appease some of Pavel's concerns. Basically, this patchset tries to
accomplish two things:

1) Enable GFP_KERNEL alloc of the overflow entries, when possible.

2) Make the overflow side pollute the fast/common path as little as
   possible.

Also does some cleanups, like passing more appropriate and easily
readable arguments to the overflow handling, rather than needing 3..5
arguments of various user_data/res/cflags/extra1/extra2 to be passed
along.
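
To make the shape of the change concrete, the end state for the
lockless_cq case looks roughly like the below (a simplified sketch
lifted from the final patch in the series; see the diffs for the real
helpers):

	/* The allocation may sleep here, so GFP_KERNEL is fine; only
	 * the actual list insertion needs the completion lock.
	 */
	ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_KERNEL);
	spin_lock(&ctx->completion_lock);
	io_cqring_add_overflow(ctx, ocqe);
	spin_unlock(&ctx->completion_lock);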

Passes normal regression testing.

 include/linux/io_uring_types.h |  2 +-
 io_uring/io_uring.c            | 97 ++++++++++++++++++++++------------
 2 files changed, 63 insertions(+), 36 deletions(-)

Since v2:
- Finish the conversion by adding final helpers so that each of the
  three call sites can use one of them, without needing to open-code
  the alloc + post sequence.

-- 
Jens Axboe



* [PATCH 1/5] io_uring: open code io_req_cqe_overflow()
  2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
@ 2025-05-16 20:05 ` Jens Axboe
  2025-05-16 20:05 ` [PATCH 2/5] io_uring: split alloc and add of overflow Jens Axboe
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

A preparation patch, just open code io_req_cqe_overflow().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 43c285cd2294..e4d6e572eabc 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -739,14 +739,6 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
 	return true;
 }
 
-static void io_req_cqe_overflow(struct io_kiocb *req)
-{
-	io_cqring_event_overflow(req->ctx, req->cqe.user_data,
-				req->cqe.res, req->cqe.flags,
-				req->big_cqe.extra1, req->big_cqe.extra2);
-	memset(&req->big_cqe, 0, sizeof(req->big_cqe));
-}
-
 /*
  * writes to the cq entry need to come after reading head; the
  * control dependency is enough as we're using WRITE_ONCE to
@@ -1435,11 +1427,19 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		    unlikely(!io_fill_cqe_req(ctx, req))) {
 			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
-				io_req_cqe_overflow(req);
+				io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+							req->cqe.res, req->cqe.flags,
+							req->big_cqe.extra1,
+							req->big_cqe.extra2);
 				spin_unlock(&ctx->completion_lock);
 			} else {
-				io_req_cqe_overflow(req);
+				io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+							req->cqe.res, req->cqe.flags,
+							req->big_cqe.extra1,
+							req->big_cqe.extra2);
 			}
+
+			memset(&req->big_cqe, 0, sizeof(req->big_cqe));
 		}
 	}
 	__io_cq_unlock_post(ctx);
-- 
2.49.0



* [PATCH 2/5] io_uring: split alloc and add of overflow
  2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
  2025-05-16 20:05 ` [PATCH 1/5] io_uring: open code io_req_cqe_overflow() Jens Axboe
@ 2025-05-16 20:05 ` Jens Axboe
  2025-05-16 23:00   ` Caleb Sander Mateos
  2025-05-16 20:05 ` [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer Jens Axboe
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander, Jens Axboe

Add a new helper, io_alloc_ocqe(), that simply allocates and fills an
overflow entry. Then the allocation can get done outside of the locking
section, and hence use more appropriate gfp_t allocation flags rather
than always defaulting to GFP_ATOMIC.

Inspired by a previous series from Pavel:

https://lore.kernel.org/io-uring/cover.1747209332.git.asml.silence@gmail.com/

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 75 +++++++++++++++++++++++++++------------------
 1 file changed, 45 insertions(+), 30 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e4d6e572eabc..b564a1bdc068 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -697,20 +697,11 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
 	}
 }
 
-static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
-				     s32 res, u32 cflags, u64 extra1, u64 extra2)
+static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
+				   struct io_overflow_cqe *ocqe)
 {
-	struct io_overflow_cqe *ocqe;
-	size_t ocq_size = sizeof(struct io_overflow_cqe);
-	bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
-
 	lockdep_assert_held(&ctx->completion_lock);
 
-	if (is_cqe32)
-		ocq_size += sizeof(struct io_uring_cqe);
-
-	ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
-	trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
 	if (!ocqe) {
 		struct io_rings *r = ctx->rings;
 
@@ -728,17 +719,35 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
 		atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
 
 	}
-	ocqe->cqe.user_data = user_data;
-	ocqe->cqe.res = res;
-	ocqe->cqe.flags = cflags;
-	if (is_cqe32) {
-		ocqe->cqe.big_cqe[0] = extra1;
-		ocqe->cqe.big_cqe[1] = extra2;
-	}
 	list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
 	return true;
 }
 
+static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
+					     u64 user_data, s32 res, u32 cflags,
+					     u64 extra1, u64 extra2, gfp_t gfp)
+{
+	struct io_overflow_cqe *ocqe;
+	size_t ocq_size = sizeof(struct io_overflow_cqe);
+	bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
+
+	if (is_cqe32)
+		ocq_size += sizeof(struct io_uring_cqe);
+
+	ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
+	trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
+	if (ocqe) {
+		ocqe->cqe.user_data = user_data;
+		ocqe->cqe.res = res;
+		ocqe->cqe.flags = cflags;
+		if (is_cqe32) {
+			ocqe->cqe.big_cqe[0] = extra1;
+			ocqe->cqe.big_cqe[1] = extra2;
+		}
+	}
+	return ocqe;
+}
+
 /*
  * writes to the cq entry need to come after reading head; the
  * control dependency is enough as we're using WRITE_ONCE to
@@ -803,8 +812,12 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 
 	io_cq_lock(ctx);
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
-	if (!filled)
-		filled = io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
+	if (unlikely(!filled)) {
+		struct io_overflow_cqe *ocqe;
+
+		ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_ATOMIC);
+		filled = io_cqring_add_overflow(ctx, ocqe);
+	}
 	io_cq_unlock_post(ctx);
 	return filled;
 }
@@ -819,8 +832,11 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 	lockdep_assert(ctx->lockless_cq);
 
 	if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
+		struct io_overflow_cqe *ocqe;
+
+		ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_KERNEL);
 		spin_lock(&ctx->completion_lock);
-		io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
+		io_cqring_add_overflow(ctx, ocqe);
 		spin_unlock(&ctx->completion_lock);
 	}
 	ctx->submit_state.cq_flush = true;
@@ -1425,20 +1441,19 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		 */
 		if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
 		    unlikely(!io_fill_cqe_req(ctx, req))) {
+			gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
+			struct io_overflow_cqe *ocqe;
+
+			ocqe = io_alloc_ocqe(ctx, req->cqe.user_data, req->cqe.res,
+					     req->cqe.flags, req->big_cqe.extra1,
+					     req->big_cqe.extra2, gfp);
 			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
-				io_cqring_event_overflow(req->ctx, req->cqe.user_data,
-							req->cqe.res, req->cqe.flags,
-							req->big_cqe.extra1,
-							req->big_cqe.extra2);
+				io_cqring_add_overflow(ctx, ocqe);
 				spin_unlock(&ctx->completion_lock);
 			} else {
-				io_cqring_event_overflow(req->ctx, req->cqe.user_data,
-							req->cqe.res, req->cqe.flags,
-							req->big_cqe.extra1,
-							req->big_cqe.extra2);
+				io_cqring_add_overflow(ctx, ocqe);
 			}
-
 			memset(&req->big_cqe, 0, sizeof(req->big_cqe));
 		}
 	}
-- 
2.49.0



* [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer
  2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
  2025-05-16 20:05 ` [PATCH 1/5] io_uring: open code io_req_cqe_overflow() Jens Axboe
  2025-05-16 20:05 ` [PATCH 2/5] io_uring: split alloc and add of overflow Jens Axboe
@ 2025-05-16 20:05 ` Jens Axboe
  2025-05-16 23:07   ` Caleb Sander Mateos
  2025-05-16 20:05 ` [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe() Jens Axboe
  2025-05-16 20:05 ` [PATCH 5/5] io_uring: add new helpers for posting overflows Jens Axboe
  4 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander, Jens Axboe

The number of arguments to io_alloc_ocqe() is a bit unwieldy. Make it
take a struct io_cqe pointer rather than three sepearate CQE args. One
path already has that readily available, add an io_init_cqe() helper for
the remainding two.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b564a1bdc068..b50c2d434e74 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -724,8 +724,8 @@ static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
 }
 
 static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
-					     u64 user_data, s32 res, u32 cflags,
-					     u64 extra1, u64 extra2, gfp_t gfp)
+					     struct io_cqe *cqe, u64 extra1,
+					     u64 extra2, gfp_t gfp)
 {
 	struct io_overflow_cqe *ocqe;
 	size_t ocq_size = sizeof(struct io_overflow_cqe);
@@ -735,11 +735,11 @@ static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
 		ocq_size += sizeof(struct io_uring_cqe);
 
 	ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
-	trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
+	trace_io_uring_cqe_overflow(ctx, cqe->user_data, cqe->res, cqe->flags, ocqe);
 	if (ocqe) {
-		ocqe->cqe.user_data = user_data;
-		ocqe->cqe.res = res;
-		ocqe->cqe.flags = cflags;
+		ocqe->cqe.user_data = cqe->user_data;
+		ocqe->cqe.res = cqe->res;
+		ocqe->cqe.flags = cqe->flags;
 		if (is_cqe32) {
 			ocqe->cqe.big_cqe[0] = extra1;
 			ocqe->cqe.big_cqe[1] = extra2;
@@ -806,6 +806,9 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return false;
 }
 
+#define io_init_cqe(user_data, res, cflags)	\
+	(struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }
+
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 {
 	bool filled;
@@ -814,8 +817,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
 	if (unlikely(!filled)) {
 		struct io_overflow_cqe *ocqe;
+		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_ATOMIC);
+		ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_ATOMIC);
 		filled = io_cqring_add_overflow(ctx, ocqe);
 	}
 	io_cq_unlock_post(ctx);
@@ -833,8 +837,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 
 	if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
 		struct io_overflow_cqe *ocqe;
+		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_KERNEL);
+		ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_KERNEL);
 		spin_lock(&ctx->completion_lock);
 		io_cqring_add_overflow(ctx, ocqe);
 		spin_unlock(&ctx->completion_lock);
@@ -1444,8 +1449,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 			gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
 			struct io_overflow_cqe *ocqe;
 
-			ocqe = io_alloc_ocqe(ctx, req->cqe.user_data, req->cqe.res,
-					     req->cqe.flags, req->big_cqe.extra1,
+			ocqe = io_alloc_ocqe(ctx, &req->cqe, req->big_cqe.extra1,
 					     req->big_cqe.extra2, gfp);
 			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
-- 
2.49.0



* [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe()
  2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
                   ` (2 preceding siblings ...)
  2025-05-16 20:05 ` [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer Jens Axboe
@ 2025-05-16 20:05 ` Jens Axboe
  2025-05-16 23:10   ` Caleb Sander Mateos
  2025-05-16 20:05 ` [PATCH 5/5] io_uring: add new helpers for posting overflows Jens Axboe
  4 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander, Jens Axboe

Rather than pass extra1/extra2 separately, just pass in the (now) named
io_big_cqe struct instead. The callers that don't use/support CQE32 will
now just pass a single NULL, rather than two separate mystery zero
values.

Move the clearing of the big_cqe elements into io_alloc_ocqe() as well,
so it can get moved out of the generic code.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/io_uring_types.h |  2 +-
 io_uring/io_uring.c            | 22 +++++++++++-----------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 00dbd7cd0e7d..2922635986f5 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -710,7 +710,7 @@ struct io_kiocb {
 	const struct cred		*creds;
 	struct io_wq_work		work;
 
-	struct {
+	struct io_big_cqe {
 		u64			extra1;
 		u64			extra2;
 	} big_cqe;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b50c2d434e74..c66fc4b7356b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -724,8 +724,8 @@ static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
 }
 
 static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
-					     struct io_cqe *cqe, u64 extra1,
-					     u64 extra2, gfp_t gfp)
+					     struct io_cqe *cqe,
+					     struct io_big_cqe *big_cqe, gfp_t gfp)
 {
 	struct io_overflow_cqe *ocqe;
 	size_t ocq_size = sizeof(struct io_overflow_cqe);
@@ -734,17 +734,19 @@ static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
 	if (is_cqe32)
 		ocq_size += sizeof(struct io_uring_cqe);
 
-	ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
+	ocqe = kzalloc(ocq_size, gfp | __GFP_ACCOUNT);
 	trace_io_uring_cqe_overflow(ctx, cqe->user_data, cqe->res, cqe->flags, ocqe);
 	if (ocqe) {
 		ocqe->cqe.user_data = cqe->user_data;
 		ocqe->cqe.res = cqe->res;
 		ocqe->cqe.flags = cqe->flags;
-		if (is_cqe32) {
-			ocqe->cqe.big_cqe[0] = extra1;
-			ocqe->cqe.big_cqe[1] = extra2;
+		if (is_cqe32 && big_cqe) {
+			ocqe->cqe.big_cqe[0] = big_cqe->extra1;
+			ocqe->cqe.big_cqe[1] = big_cqe->extra2;
 		}
 	}
+	if (big_cqe)
+		big_cqe->extra1 = big_cqe->extra2 = 0;
 	return ocqe;
 }
 
@@ -819,7 +821,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 		struct io_overflow_cqe *ocqe;
 		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_ATOMIC);
+		ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_ATOMIC);
 		filled = io_cqring_add_overflow(ctx, ocqe);
 	}
 	io_cq_unlock_post(ctx);
@@ -839,7 +841,7 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 		struct io_overflow_cqe *ocqe;
 		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_KERNEL);
+		ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_KERNEL);
 		spin_lock(&ctx->completion_lock);
 		io_cqring_add_overflow(ctx, ocqe);
 		spin_unlock(&ctx->completion_lock);
@@ -1449,8 +1451,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 			gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
 			struct io_overflow_cqe *ocqe;
 
-			ocqe = io_alloc_ocqe(ctx, &req->cqe, req->big_cqe.extra1,
-					     req->big_cqe.extra2, gfp);
+			ocqe = io_alloc_ocqe(ctx, &req->cqe, &req->big_cqe, gfp);
 			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
 				io_cqring_add_overflow(ctx, ocqe);
@@ -1458,7 +1459,6 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 			} else {
 				io_cqring_add_overflow(ctx, ocqe);
 			}
-			memset(&req->big_cqe, 0, sizeof(req->big_cqe));
 		}
 	}
 	__io_cq_unlock_post(ctx);
-- 
2.49.0



* [PATCH 5/5] io_uring: add new helpers for posting overflows
  2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
                   ` (3 preceding siblings ...)
  2025-05-16 20:05 ` [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe() Jens Axboe
@ 2025-05-16 20:05 ` Jens Axboe
  2025-05-16 23:17   ` Caleb Sander Mateos
  4 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 20:05 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, csander, Jens Axboe

Add two helpers, one for posting overflows for lockless_cq rings, and
one for non-lockless_cq rings. The former can allocate sanely with
GFP_KERNEL, but needs to grab the completion lock for posting, while the
latter must do non-sleeping allocs as it already holds the completion
lock.

While at it, mark the overflow handling functions as __cold as well, as
they should not generally be called during normal operations of the
ring.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 50 ++++++++++++++++++++++++++-------------------
 1 file changed, 29 insertions(+), 21 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c66fc4b7356b..52087b079a0c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -697,8 +697,8 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
 	}
 }
 
-static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
-				   struct io_overflow_cqe *ocqe)
+static __cold bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
+					  struct io_overflow_cqe *ocqe)
 {
 	lockdep_assert_held(&ctx->completion_lock);
 
@@ -808,6 +808,27 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
 	return false;
 }
 
+static __cold void io_cqe_overflow_lockless(struct io_ring_ctx *ctx,
+					    struct io_cqe *cqe,
+					    struct io_big_cqe *big_cqe)
+{
+	struct io_overflow_cqe *ocqe;
+
+	ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_KERNEL);
+	spin_lock(&ctx->completion_lock);
+	io_cqring_add_overflow(ctx, ocqe);
+	spin_unlock(&ctx->completion_lock);
+}
+
+static __cold bool io_cqe_overflow(struct io_ring_ctx *ctx, struct io_cqe *cqe,
+				   struct io_big_cqe *big_cqe)
+{
+	struct io_overflow_cqe *ocqe;
+
+	ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_ATOMIC);
+	return io_cqring_add_overflow(ctx, ocqe);
+}
+
 #define io_init_cqe(user_data, res, cflags)	\
 	(struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }
 
@@ -818,11 +839,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 	io_cq_lock(ctx);
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
 	if (unlikely(!filled)) {
-		struct io_overflow_cqe *ocqe;
 		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_ATOMIC);
-		filled = io_cqring_add_overflow(ctx, ocqe);
+		filled = io_cqe_overflow(ctx, &cqe, NULL);
 	}
 	io_cq_unlock_post(ctx);
 	return filled;
@@ -838,13 +857,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 	lockdep_assert(ctx->lockless_cq);
 
 	if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
-		struct io_overflow_cqe *ocqe;
 		struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
 
-		ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_KERNEL);
-		spin_lock(&ctx->completion_lock);
-		io_cqring_add_overflow(ctx, ocqe);
-		spin_unlock(&ctx->completion_lock);
+		io_cqe_overflow_lockless(ctx, &cqe, NULL);
 	}
 	ctx->submit_state.cq_flush = true;
 }
@@ -1448,17 +1463,10 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		 */
 		if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
 		    unlikely(!io_fill_cqe_req(ctx, req))) {
-			gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
-			struct io_overflow_cqe *ocqe;
-
-			ocqe = io_alloc_ocqe(ctx, &req->cqe, &req->big_cqe, gfp);
-			if (ctx->lockless_cq) {
-				spin_lock(&ctx->completion_lock);
-				io_cqring_add_overflow(ctx, ocqe);
-				spin_unlock(&ctx->completion_lock);
-			} else {
-				io_cqring_add_overflow(ctx, ocqe);
-			}
+			if (ctx->lockless_cq)
+				io_cqe_overflow_lockless(ctx, &req->cqe, &req->big_cqe);
+			else
+				io_cqe_overflow(ctx, &req->cqe, &req->big_cqe);
 		}
 	}
 	__io_cq_unlock_post(ctx);
-- 
2.49.0



* Re: [PATCH 2/5] io_uring: split alloc and add of overflow
  2025-05-16 20:05 ` [PATCH 2/5] io_uring: split alloc and add of overflow Jens Axboe
@ 2025-05-16 23:00   ` Caleb Sander Mateos
  0 siblings, 0 replies; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-05-16 23:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> Add a new helper, io_alloc_ocqe(), that simply allocates and fills an
> overflow entry. Then the allocation can get done outside of the locking
> section, and hence use more appropriate gfp_t allocation flags rather
> than always defaulting to GFP_ATOMIC.
>
> Inspired by a previous series from Pavel:
>
> https://lore.kernel.org/io-uring/cover.1747209332.git.asml.silence@gmail.com/
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>


* Re: [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer
  2025-05-16 20:05 ` [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer Jens Axboe
@ 2025-05-16 23:07   ` Caleb Sander Mateos
  2025-05-16 23:08     ` Caleb Sander Mateos
  2025-05-16 23:48     ` Jens Axboe
  0 siblings, 2 replies; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-05-16 23:07 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> The number of arguments to io_alloc_ocqe() is a bit unwieldy. Make it
> take a struct io_cqe pointer rather than three sepearate CQE args. One

typo: "separate"

> path already has that readily available, add an io_init_cqe() helper for
> the remainding two.

typo: "remaining"

>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  io_uring/io_uring.c | 24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index b564a1bdc068..b50c2d434e74 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -724,8 +724,8 @@ static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
>  }
>
>  static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
> -                                            u64 user_data, s32 res, u32 cflags,
> -                                            u64 extra1, u64 extra2, gfp_t gfp)
> +                                            struct io_cqe *cqe, u64 extra1,
> +                                            u64 extra2, gfp_t gfp)
>  {
>         struct io_overflow_cqe *ocqe;
>         size_t ocq_size = sizeof(struct io_overflow_cqe);
> @@ -735,11 +735,11 @@ static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
>                 ocq_size += sizeof(struct io_uring_cqe);
>
>         ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
> -       trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
> +       trace_io_uring_cqe_overflow(ctx, cqe->user_data, cqe->res, cqe->flags, ocqe);
>         if (ocqe) {
> -               ocqe->cqe.user_data = user_data;
> -               ocqe->cqe.res = res;
> -               ocqe->cqe.flags = cflags;
> +               ocqe->cqe.user_data = cqe->user_data;
> +               ocqe->cqe.res = cqe->res;
> +               ocqe->cqe.flags = cqe->flags;
>                 if (is_cqe32) {
>                         ocqe->cqe.big_cqe[0] = extra1;
>                         ocqe->cqe.big_cqe[1] = extra2;
> @@ -806,6 +806,9 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
>         return false;
>  }
>
> +#define io_init_cqe(user_data, res, cflags)    \
> +       (struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }

The arguments and result should be parenthesized to prevent unexpected
groupings. Better yet, make this a static inline function.
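
For illustration, the static inline variant being suggested might look
like the below (a sketch only, with the types taken from the callers;
not the committed fix):

	static inline struct io_cqe io_init_cqe(u64 user_data, s32 res,
						u32 cflags)
	{
		/* Designated initializer zeroes any remaining fields */
		return (struct io_cqe) { .user_data = user_data,
					 .res = res, .flags = cflags };
	}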

> +
>  bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
>  {
>         bool filled;
> @@ -814,8 +817,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
>         filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
>         if (unlikely(!filled)) {
>                 struct io_overflow_cqe *ocqe;
> +               struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
>
> -               ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_ATOMIC);
> +               ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_ATOMIC);
>                 filled = io_cqring_add_overflow(ctx, ocqe);
>         }
>         io_cq_unlock_post(ctx);
> @@ -833,8 +837,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
>
>         if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
>                 struct io_overflow_cqe *ocqe;
> +               struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
>
> -               ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_KERNEL);
> +               ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_KERNEL);
>                 spin_lock(&ctx->completion_lock);
>                 io_cqring_add_overflow(ctx, ocqe);
>                 spin_unlock(&ctx->completion_lock);
> @@ -1444,8 +1449,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>                         gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
>                         struct io_overflow_cqe *ocqe;
>
> -                       ocqe = io_alloc_ocqe(ctx, req->cqe.user_data, req->cqe.res,
> -                                            req->cqe.flags, req->big_cqe.extra1,
> +                       ocqe = io_alloc_ocqe(ctx, &req->cqe, req->big_cqe.extra1,
>                                              req->big_cqe.extra2, gfp);

If the req->big_cqe type were named, these 2 arguments could be
combined into just &req->big_cqe.

Best,
Caleb

>                         if (ctx->lockless_cq) {
>                                 spin_lock(&ctx->completion_lock);
> --
> 2.49.0
>


* Re: [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer
  2025-05-16 23:07   ` Caleb Sander Mateos
@ 2025-05-16 23:08     ` Caleb Sander Mateos
  2025-05-16 23:48     ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-05-16 23:08 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

On Fri, May 16, 2025 at 4:07 PM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
> >
> > The number of arguments to io_alloc_ocqe() is a bit unwieldy. Make it
> > take a struct io_cqe pointer rather than three sepearate CQE args. One
>
> typo: "separate"
>
> > path already has that readily available, add an io_init_cqe() helper for
> > the remainding two.
>
> typo: "remaining"
>
> >
> > Signed-off-by: Jens Axboe <axboe@kernel.dk>
> > ---
> >  io_uring/io_uring.c | 24 ++++++++++++++----------
> >  1 file changed, 14 insertions(+), 10 deletions(-)
> >
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index b564a1bdc068..b50c2d434e74 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -724,8 +724,8 @@ static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
> >  }
> >
> >  static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
> > -                                            u64 user_data, s32 res, u32 cflags,
> > -                                            u64 extra1, u64 extra2, gfp_t gfp)
> > +                                            struct io_cqe *cqe, u64 extra1,
> > +                                            u64 extra2, gfp_t gfp)
> >  {
> >         struct io_overflow_cqe *ocqe;
> >         size_t ocq_size = sizeof(struct io_overflow_cqe);
> > @@ -735,11 +735,11 @@ static struct io_overflow_cqe *io_alloc_ocqe(struct io_ring_ctx *ctx,
> >                 ocq_size += sizeof(struct io_uring_cqe);
> >
> >         ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
> > -       trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
> > +       trace_io_uring_cqe_overflow(ctx, cqe->user_data, cqe->res, cqe->flags, ocqe);
> >         if (ocqe) {
> > -               ocqe->cqe.user_data = user_data;
> > -               ocqe->cqe.res = res;
> > -               ocqe->cqe.flags = cflags;
> > +               ocqe->cqe.user_data = cqe->user_data;
> > +               ocqe->cqe.res = cqe->res;
> > +               ocqe->cqe.flags = cqe->flags;
> >                 if (is_cqe32) {
> >                         ocqe->cqe.big_cqe[0] = extra1;
> >                         ocqe->cqe.big_cqe[1] = extra2;
> > @@ -806,6 +806,9 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
> >         return false;
> >  }
> >
> > +#define io_init_cqe(user_data, res, cflags)    \
> > +       (struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }
>
> The arguments and result should be parenthesized to prevent unexpected
> groupings. Better yet, make this a static inline function.
>
> > +
> >  bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
> >  {
> >         bool filled;
> > @@ -814,8 +817,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
> >         filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
> >         if (unlikely(!filled)) {
> >                 struct io_overflow_cqe *ocqe;
> > +               struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
> >
> > -               ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_ATOMIC);
> > +               ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_ATOMIC);
> >                 filled = io_cqring_add_overflow(ctx, ocqe);
> >         }
> >         io_cq_unlock_post(ctx);
> > @@ -833,8 +837,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
> >
> >         if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
> >                 struct io_overflow_cqe *ocqe;
> > +               struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
> >
> > -               ocqe = io_alloc_ocqe(ctx, user_data, res, cflags, 0, 0, GFP_KERNEL);
> > +               ocqe = io_alloc_ocqe(ctx, &cqe, 0, 0, GFP_KERNEL);
> >                 spin_lock(&ctx->completion_lock);
> >                 io_cqring_add_overflow(ctx, ocqe);
> >                 spin_unlock(&ctx->completion_lock);
> > @@ -1444,8 +1449,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> >                         gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
> >                         struct io_overflow_cqe *ocqe;
> >
> > -                       ocqe = io_alloc_ocqe(ctx, req->cqe.user_data, req->cqe.res,
> > -                                            req->cqe.flags, req->big_cqe.extra1,
> > +                       ocqe = io_alloc_ocqe(ctx, &req->cqe, req->big_cqe.extra1,
> >                                              req->big_cqe.extra2, gfp);
>
> If the req->big_cqe type were named, these 2 arguments could be
> combined into just &req->big_cqe.

Oops, I see you do this in the next patch. Looks good.

Best,
Caleb


* Re: [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe()
  2025-05-16 20:05 ` [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe() Jens Axboe
@ 2025-05-16 23:10   ` Caleb Sander Mateos
  0 siblings, 0 replies; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-05-16 23:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> Rather than pass extra1/extra2 separately, just pass in the (now) named
> io_big_cqe struct instead. The callers that don't use/support CQE32 will
> now just pass a single NULL, rather than two separate mystery zero
> values.
>
> Move the clearing of the big_cqe elements into io_alloc_ocqe() as well,
> so it can get moved out of the generic code.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>


* Re: [PATCH 5/5] io_uring: add new helpers for posting overflows
  2025-05-16 20:05 ` [PATCH 5/5] io_uring: add new helpers for posting overflows Jens Axboe
@ 2025-05-16 23:17   ` Caleb Sander Mateos
  2025-05-16 23:49     ` Jens Axboe
  0 siblings, 1 reply; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-05-16 23:17 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> Add two helpers, one for posting overflows for lockless_cq rings, and
> one for non-lockless_cq rings. The former can allocate sanely with
> GFP_KERNEL, but needs to grab the completion lock for posting, while the
> latter must do non-sleeping allocs as it already holds the completion
> lock.
>
> While at it, mark the overflow handling functions as __cold as well, as
> they should not generally be called during normal operations of the
> ring.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  io_uring/io_uring.c | 50 ++++++++++++++++++++++++++-------------------
>  1 file changed, 29 insertions(+), 21 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index c66fc4b7356b..52087b079a0c 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -697,8 +697,8 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
>         }
>  }
>
> -static bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
> -                                  struct io_overflow_cqe *ocqe)
> +static __cold bool io_cqring_add_overflow(struct io_ring_ctx *ctx,
> +                                         struct io_overflow_cqe *ocqe)
>  {
>         lockdep_assert_held(&ctx->completion_lock);
>
> @@ -808,6 +808,27 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
>         return false;
>  }
>
> +static __cold void io_cqe_overflow_lockless(struct io_ring_ctx *ctx,
> +                                           struct io_cqe *cqe,
> +                                           struct io_big_cqe *big_cqe)

Naming nit: "lockless" seems a bit misleading since this does still
take the completion_lock. Maybe name this function "io_cqe_overflow()"
and the other "io_cqe_overflow_locked()"?
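
Under that naming, the signatures would read something like this
(sketch only, mirroring the helpers added in this patch):

	/* allocates with GFP_KERNEL, takes completion_lock itself */
	static __cold void io_cqe_overflow(struct io_ring_ctx *ctx,
					   struct io_cqe *cqe,
					   struct io_big_cqe *big_cqe);

	/* caller holds completion_lock, so the alloc must be GFP_ATOMIC */
	static __cold bool io_cqe_overflow_locked(struct io_ring_ctx *ctx,
						  struct io_cqe *cqe,
						  struct io_big_cqe *big_cqe);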

Otherwise,

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>

> +{
> +       struct io_overflow_cqe *ocqe;
> +
> +       ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_KERNEL);
> +       spin_lock(&ctx->completion_lock);
> +       io_cqring_add_overflow(ctx, ocqe);
> +       spin_unlock(&ctx->completion_lock);
> +}
> +
> +static __cold bool io_cqe_overflow(struct io_ring_ctx *ctx, struct io_cqe *cqe,
> +                                  struct io_big_cqe *big_cqe)
> +{
> +       struct io_overflow_cqe *ocqe;
> +
> +       ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_ATOMIC);
> +       return io_cqring_add_overflow(ctx, ocqe);
> +}
> +
>  #define io_init_cqe(user_data, res, cflags)    \
>         (struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }
>
> @@ -818,11 +839,9 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
>         io_cq_lock(ctx);
>         filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
>         if (unlikely(!filled)) {
> -               struct io_overflow_cqe *ocqe;
>                 struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
>
> -               ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_ATOMIC);
> -               filled = io_cqring_add_overflow(ctx, ocqe);
> +               filled = io_cqe_overflow(ctx, &cqe, NULL);
>         }
>         io_cq_unlock_post(ctx);
>         return filled;
> @@ -838,13 +857,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
>         lockdep_assert(ctx->lockless_cq);
>
>         if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
> -               struct io_overflow_cqe *ocqe;
>                 struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
>
> -               ocqe = io_alloc_ocqe(ctx, &cqe, NULL, GFP_KERNEL);
> -               spin_lock(&ctx->completion_lock);
> -               io_cqring_add_overflow(ctx, ocqe);
> -               spin_unlock(&ctx->completion_lock);
> +               io_cqe_overflow_lockless(ctx, &cqe, NULL);
>         }
>         ctx->submit_state.cq_flush = true;
>  }
> @@ -1448,17 +1463,10 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>                  */
>                 if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
>                     unlikely(!io_fill_cqe_req(ctx, req))) {
> -                       gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
> -                       struct io_overflow_cqe *ocqe;
> -
> -                       ocqe = io_alloc_ocqe(ctx, &req->cqe, &req->big_cqe, gfp);
> -                       if (ctx->lockless_cq) {
> -                               spin_lock(&ctx->completion_lock);
> -                               io_cqring_add_overflow(ctx, ocqe);
> -                               spin_unlock(&ctx->completion_lock);
> -                       } else {
> -                               io_cqring_add_overflow(ctx, ocqe);
> -                       }
> +                       if (ctx->lockless_cq)
> +                               io_cqe_overflow_lockless(ctx, &req->cqe, &req->big_cqe);
> +                       else
> +                               io_cqe_overflow(ctx, &req->cqe, &req->big_cqe);
>                 }
>         }
>         __io_cq_unlock_post(ctx);
> --
> 2.49.0
>


* Re: [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer
  2025-05-16 23:07   ` Caleb Sander Mateos
  2025-05-16 23:08     ` Caleb Sander Mateos
@ 2025-05-16 23:48     ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 23:48 UTC (permalink / raw)
  To: Caleb Sander Mateos; +Cc: io-uring, asml.silence

On 5/16/25 5:07 PM, Caleb Sander Mateos wrote:
> On Fri, May 16, 2025 at 1:10 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> The number of arguments to io_alloc_ocqe() is a bit unwieldy. Make it
>> take a struct io_cqe pointer rather than three sepearate CQE args. One
> 
> typo: "separate"
> 
>> path already has that readily available, add an io_init_cqe() helper for
>> the remainding two.
> 
> typo: "remaining"

Thanks, will fix those.

>> @@ -806,6 +806,9 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
>>         return false;
>>  }
>>
>> +#define io_init_cqe(user_data, res, cflags)    \
>> +       (struct io_cqe) { .user_data = user_data, .res = res, .flags = cflags }
> 
> The arguments and result should be parenthesized to prevent unexpected
> groupings. Better yet, make this a static inline function.

Sure, will do (make it a static inline).

>> @@ -1444,8 +1449,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>>                         gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
>>                         struct io_overflow_cqe *ocqe;
>>
>> -                       ocqe = io_alloc_ocqe(ctx, req->cqe.user_data, req->cqe.res,
>> -                                            req->cqe.flags, req->big_cqe.extra1,
>> +                       ocqe = io_alloc_ocqe(ctx, &req->cqe, req->big_cqe.extra1,
>>                                              req->big_cqe.extra2, gfp);
> 
> If the req->big_cqe type were named, these 2 arguments could be
> combined into just &req->big_cqe.

I see you saw that the next patch does just that, so I'll just ignore
this one.

-- 
Jens Axboe


* Re: [PATCH 5/5] io_uring: add new helpers for posting overflows
  2025-05-16 23:17   ` Caleb Sander Mateos
@ 2025-05-16 23:49     ` Jens Axboe
  0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2025-05-16 23:49 UTC (permalink / raw)
  To: Caleb Sander Mateos; +Cc: io-uring, asml.silence

On 5/16/25 5:17 PM, Caleb Sander Mateos wrote:
>> @@ -808,6 +808,27 @@ static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
>>         return false;
>>  }
>>
>> +static __cold void io_cqe_overflow_lockless(struct io_ring_ctx *ctx,
>> +                                           struct io_cqe *cqe,
>> +                                           struct io_big_cqe *big_cqe)
> 
> Naming nit: "lockless" seems a bit misleading since this does still
> take the completion_lock. Maybe name this function "io_cqe_overflow()"
> and the other "io_cqe_overflow_locked()"?

The only reason I chose "lockless" is to match the ctx member of the
same name. But yes, your suggestion is probably better. I'll mull it
over and change it regardless.

> Otherwise,
> 
> Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>

Thanks!

-- 
Jens Axboe


end of thread, other threads:[~2025-05-16 23:49 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-16 20:05 [PATCHSET v3 0/5] Allow non-atomic allocs for overflows Jens Axboe
2025-05-16 20:05 ` [PATCH 1/5] io_uring: open code io_req_cqe_overflow() Jens Axboe
2025-05-16 20:05 ` [PATCH 2/5] io_uring: split alloc and add of overflow Jens Axboe
2025-05-16 23:00   ` Caleb Sander Mateos
2025-05-16 20:05 ` [PATCH 3/5] io_uring: make io_alloc_ocqe() take a struct io_cqe pointer Jens Axboe
2025-05-16 23:07   ` Caleb Sander Mateos
2025-05-16 23:08     ` Caleb Sander Mateos
2025-05-16 23:48     ` Jens Axboe
2025-05-16 20:05 ` [PATCH 4/5] io_uring: pass in struct io_big_cqe to io_alloc_ocqe() Jens Axboe
2025-05-16 23:10   ` Caleb Sander Mateos
2025-05-16 20:05 ` [PATCH 5/5] io_uring: add new helpers for posting overflows Jens Axboe
2025-05-16 23:17   ` Caleb Sander Mateos
2025-05-16 23:49     ` Jens Axboe
