public inbox for io-uring@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH 0/1] io_uring: mixed sqe support
@ 2025-08-29 19:39 Keith Busch
  2025-08-29 19:39 ` [RFC PATCH liburing 1/2] Add support for IORING_SETUP_SQE_MIXED Keith Busch
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Keith Busch @ 2025-08-29 19:39 UTC (permalink / raw)
  To: axboe, io-uring; +Cc: Keith Busch

From: Keith Busch <kbusch@kernel.org>

The CQ supports mixed-size entries, so why not do the same for SQs?
There are use cases that currently allocate separate rings just to keep
the two entry sizes apart, but we can efficiently handle both in a
single ring.

This RFC takes the last remaining SQE flags bit for the purpose. That
might not be acceptable, but it serves as a proof of concept for a
first look.

Keith Busch (1):
  io_uring: add support for IORING_SETUP_SQE_MIXED

 include/uapi/linux/io_uring.h |  9 +++++++++
 io_uring/fdinfo.c             | 21 ++++++++++-----------
 io_uring/io_uring.c           | 15 ++++++++++++++-
 3 files changed, 33 insertions(+), 12 deletions(-)

-- 
2.47.3


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [RFC PATCH liburing 1/2] Add support for IORING_SETUP_SQE_MIXED
  2025-08-29 19:39 [RFC PATCH 0/1] io_uring: mixed sqe support Keith Busch
@ 2025-08-29 19:39 ` Keith Busch
  2025-08-29 19:39 ` [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED Keith Busch
  2025-08-29 19:39 ` [RFC PATCH liburing 2/2] Add nop testing " Keith Busch
  2 siblings, 0 replies; 6+ messages in thread
From: Keith Busch @ 2025-08-29 19:39 UTC (permalink / raw)
  To: axboe, io-uring; +Cc: Keith Busch

From: Keith Busch <kbusch@kernel.org>

This adds core support for mixed-size SQEs in the same SQ ring. Before
this, SQEs were either 64b in size (the normal size), or 128b if
IORING_SETUP_SQE128 was set at ring initialization. With mixed support,
an SQE may be either 64b or 128b on the same SQ ring. If an SQE is 128b
in size, IOSQE_SQE_128B will be set in its flags. The client may post a
NOP SQE with IOSQE_CQE_SKIP_SUCCESS set, which the kernel should simply
ignore; it is just a pad filler, posted when a 128b SQE would otherwise
wrap the ring.
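
For reference, a minimal usage sketch on a mixed ring (error handling
elided; the NOP prep stands in for any opcode that wants the big SQE):

	struct io_uring_params p = { .flags = IORING_SETUP_SQE_MIXED, };
	struct io_uring ring;
	struct io_uring_sqe *sqe;

	io_uring_queue_init_params(8, &ring, &p);

	/* normal 64b submission, unchanged */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);

	/*
	 * 128b submission: the helper sets IOSQE_SQE_128B and, if the
	 * SQE would wrap the ring, first pads the last slot with a NOP
	 * marked IOSQE_CQE_SKIP_SUCCESS.
	 */
	sqe = _io_uring_get_sqe128_mixed(&ring);
	io_uring_prep_nop(sqe);

	io_uring_submit(&ring);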

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 src/include/liburing.h          | 31 +++++++++++++++++++++++++++++++
 src/include/liburing/io_uring.h |  9 +++++++++
 2 files changed, 40 insertions(+)

diff --git a/src/include/liburing.h b/src/include/liburing.h
index 7ea876e1..dc94ad13 100644
--- a/src/include/liburing.h
+++ b/src/include/liburing.h
@@ -1853,6 +1853,37 @@ IOURINGINLINE struct io_uring_sqe *_io_uring_get_sqe(struct io_uring *ring)
 	return sqe;
 }
 
+IOURINGINLINE struct io_uring_sqe *_io_uring_get_sqe128_mixed(struct io_uring *ring)
+	LIBURING_NOEXCEPT
+{
+	struct io_uring_sq *sq = &ring->sq;
+	unsigned head = io_uring_load_sq_head(ring), tail = sq->sqe_tail;
+	struct io_uring_sqe *sqe;
+
+	if (!(ring->flags & IORING_SETUP_SQE_MIXED))
+		return NULL;
+
+	if ((tail & sq->ring_mask) + 1 == sq->ring_entries) {
+		sqe = _io_uring_get_sqe(ring);
+		if (!sqe)
+			return NULL;
+
+		io_uring_prep_nop(sqe);
+		sqe->flags |= IOSQE_CQE_SKIP_SUCCESS;
+		tail = sq->sqe_tail;
+	}
+
+	if ((tail + 1) - head >= sq->ring_entries)
+		return NULL;
+
+	sqe = &sq->sqes[tail & sq->ring_mask];
+	sq->sqe_tail = tail + 2;
+	io_uring_initialize_sqe(sqe);
+	sqe->flags |= IOSQE_SQE_128B;
+
+	return sqe;
+}
+
 /*
  * Return the appropriate mask for a buffer ring of size 'ring_entries'
  */
diff --git a/src/include/liburing/io_uring.h b/src/include/liburing/io_uring.h
index 643514e5..fd02fa52 100644
--- a/src/include/liburing/io_uring.h
+++ b/src/include/liburing/io_uring.h
@@ -126,6 +126,7 @@ enum io_uring_sqe_flags_bit {
 	IOSQE_ASYNC_BIT,
 	IOSQE_BUFFER_SELECT_BIT,
 	IOSQE_CQE_SKIP_SUCCESS_BIT,
+	IOSQE_SQE_128B_BIT,
 };
 
 /*
@@ -145,6 +146,8 @@ enum io_uring_sqe_flags_bit {
 #define IOSQE_BUFFER_SELECT	(1U << IOSQE_BUFFER_SELECT_BIT)
 /* don't post CQE if request succeeded */
 #define IOSQE_CQE_SKIP_SUCCESS	(1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
+/* this is a 128b/big-sqe posting */
+#define IOSQE_SQE_128B          (1U << IOSQE_SQE_128B_BIT)
 
 /*
  * io_uring_setup() flags
@@ -211,6 +214,12 @@ enum io_uring_sqe_flags_bit {
  */
 #define IORING_SETUP_CQE_MIXED		(1U << 18)
 
+/*
+ *  Allow both 64b and 128b SQEs. If a 128b SQE is posted, it will have
+ *  IOSQE_SQE_128B set in sqe->flags.
+ */
+#define IORING_SETUP_SQE_MIXED		(1U << 19)
+
 enum io_uring_op {
 	IORING_OP_NOP,
 	IORING_OP_READV,
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED
  2025-08-29 19:39 [RFC PATCH 0/1] io_uring: mixed sqe support Keith Busch
  2025-08-29 19:39 ` [RFC PATCH liburing 1/2] Add support for IORING_SETUP_SQE_MIXED Keith Busch
@ 2025-08-29 19:39 ` Keith Busch
  2025-08-29 21:29   ` Caleb Sander Mateos
  2025-08-29 19:39 ` [RFC PATCH liburing 2/2] Add nop testing " Keith Busch
  2 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2025-08-29 19:39 UTC (permalink / raw)
  To: axboe, io-uring; +Cc: Keith Busch

From: Keith Busch <kbusch@kernel.org>

Normal rings support 64b SQEs for posting submissions, while certain
features require the ring to be configured with IORING_SETUP_SQE128, as
they need to convey more information per submission. This, in turn,
makes ALL SQEs 128b in size, which is somewhat wasteful and
inefficient, particularly when only certain SQEs need to be of the
bigger variant.

This adds support for setting up a ring with mixed SQE sizes, using
IORING_SETUP_SQE_MIXED. When set up in this mode, SQEs posted to the
ring may be either 64b or 128b in size. If an SQE is 128b in size, the
IOSQE_SQE_128B flag is set in the SQE flags to indicate that this is
the case. If this flag isn't set, the SQE is the normal 64b variant.

SQEs on these mixed rings may also utilize a NOP with skip success
set. This can happen if the ring is one (small) SQE entry away from
wrapping and an attempt is made to post a 128b SQE. As SQEs must be
contiguous in the SQ ring, a 128b SQE cannot wrap the ring. For this
case, a single NOP SQE is posted with the SKIP flag set. The kernel
should simply ignore those.
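
As a concrete illustration: with sq_entries = 8 and the tail at index
7, a 128b SQE would need slots 7 and 0, which are not adjacent in
memory. The submitter instead fills slot 7 with a NOP marked
IOSQE_CQE_SKIP_SUCCESS, and the 128b SQE occupies slots 0 and 1 once
the tail wraps.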

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 include/uapi/linux/io_uring.h |  9 +++++++++
 io_uring/fdinfo.c             | 21 ++++++++++-----------
 io_uring/io_uring.c           | 15 ++++++++++++++-
 3 files changed, 33 insertions(+), 12 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 04ebff33d0e62..9cef9085f52ee 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -146,6 +146,7 @@ enum io_uring_sqe_flags_bit {
 	IOSQE_ASYNC_BIT,
 	IOSQE_BUFFER_SELECT_BIT,
 	IOSQE_CQE_SKIP_SUCCESS_BIT,
+	IOSQE_SQE_128B_BIT,
 };
 
 /*
@@ -165,6 +166,8 @@ enum io_uring_sqe_flags_bit {
 #define IOSQE_BUFFER_SELECT	(1U << IOSQE_BUFFER_SELECT_BIT)
 /* don't post CQE if request succeeded */
 #define IOSQE_CQE_SKIP_SUCCESS	(1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
+/* this is a 128b/big-sqe posting */
+#define IOSQE_SQE_128B		(1U << IOSQE_SQE_128B_BIT)
 
 /*
  * io_uring_setup() flags
@@ -231,6 +234,12 @@ enum io_uring_sqe_flags_bit {
  */
 #define IORING_SETUP_CQE_MIXED		(1U << 18)
 
+/*
+ * Allow both 64b and 128b SQEs. If a 128b SQE is posted, it will have
+ * IOSQE_SQE_128B set in sqe->flags.
+ */
+#define IORING_SETUP_SQE_MIXED		(1U << 19)
+
 enum io_uring_op {
 	IORING_OP_NOP,
 	IORING_OP_READV,
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index 5c73398387690..4eb6dbb9807be 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -65,15 +65,10 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 	unsigned int sq_tail = READ_ONCE(r->sq.tail);
 	unsigned int cq_head = READ_ONCE(r->cq.head);
 	unsigned int cq_tail = READ_ONCE(r->cq.tail);
-	unsigned int sq_shift = 0;
-	unsigned int sq_entries;
 	int sq_pid = -1, sq_cpu = -1;
 	u64 sq_total_time = 0, sq_work_time = 0;
 	unsigned int i;
 
-	if (ctx->flags & IORING_SETUP_SQE128)
-		sq_shift = 1;
-
 	/*
 	 * we may get imprecise sqe and cqe info if uring is actively running
 	 * since we get cached_sq_head and cached_cq_tail without uring_lock
@@ -89,18 +84,19 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 	seq_printf(m, "CqTail:\t%u\n", cq_tail);
 	seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail));
 	seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head);
-	sq_entries = min(sq_tail - sq_head, ctx->sq_entries);
-	for (i = 0; i < sq_entries; i++) {
-		unsigned int entry = i + sq_head;
+	while (sq_head < sq_tail) {
 		struct io_uring_sqe *sqe;
 		unsigned int sq_idx;
+		bool sqe128 = false;
 
 		if (ctx->flags & IORING_SETUP_NO_SQARRAY)
 			break;
-		sq_idx = READ_ONCE(ctx->sq_array[entry & sq_mask]);
+		sq_idx = READ_ONCE(ctx->sq_array[sq_head & sq_mask]);
 		if (sq_idx > sq_mask)
 			continue;
-		sqe = &ctx->sq_sqes[sq_idx << sq_shift];
+		sqe = &ctx->sq_sqes[sq_idx];
+		if (sqe->flags & IOSQE_SQE_128B || ctx->flags & IORING_SETUP_SQE128)
+			sqe128 = true;
 		seq_printf(m, "%5u: opcode:%s, fd:%d, flags:%x, off:%llu, "
 			      "addr:0x%llx, rw_flags:0x%x, buf_index:%d "
 			      "user_data:%llu",
@@ -108,7 +104,7 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 			   sqe->flags, (unsigned long long) sqe->off,
 			   (unsigned long long) sqe->addr, sqe->rw_flags,
 			   sqe->buf_index, sqe->user_data);
-		if (sq_shift) {
+		if (sqe128) {
 			u64 *sqeb = (void *) (sqe + 1);
 			int size = sizeof(struct io_uring_sqe) / sizeof(u64);
 			int j;
@@ -120,6 +116,9 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 			}
 		}
 		seq_printf(m, "\n");
+		sq_head++;
+		if (sqe128)
+			sq_head++;
 	}
 	seq_printf(m, "CQEs:\t%u\n", cq_tail - cq_head);
 	while (cq_head < cq_tail) {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6c07efac977ce..7788292be8560 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2416,6 +2416,8 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
 	if (ctx->flags & IORING_SETUP_SQE128)
 		head <<= 1;
 	*sqe = &ctx->sq_sqes[head];
+	if (ctx->flags & IORING_SETUP_SQE_MIXED && (*sqe)->flags & IOSQE_SQE_128B)
+		ctx->cached_sq_head++;
 	return true;
 }
 
@@ -2793,6 +2795,10 @@ unsigned long rings_size(unsigned int flags, unsigned int sq_entries,
 		if (cq_entries < 2)
 			return SIZE_MAX;
 	}
+	if (flags & IORING_SETUP_SQE_MIXED) {
+		if (sq_entries < 2)
+			return SIZE_MAX;
+	}
 
 #ifdef CONFIG_SMP
 	off = ALIGN(off, SMP_CACHE_BYTES);
@@ -3724,6 +3730,13 @@ static int io_uring_sanitise_params(struct io_uring_params *p)
 	if ((flags & (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED)) ==
 	    (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED))
 		return -EINVAL;
+	/*
+	 * Nonsensical to ask for SQE128 and mixed SQE support, it's not
+	 * supported to post 64b SQEs on a ring setup with SQE128.
+	 */
+	if ((flags & (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED)) ==
+	    (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED))
+		return -EINVAL;
 
 	return 0;
 }
@@ -3952,7 +3965,7 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 			IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
 			IORING_SETUP_NO_MMAP | IORING_SETUP_REGISTERED_FD_ONLY |
 			IORING_SETUP_NO_SQARRAY | IORING_SETUP_HYBRID_IOPOLL |
-			IORING_SETUP_CQE_MIXED))
+			IORING_SETUP_CQE_MIXED | IORING_SETUP_SQE_MIXED))
 		return -EINVAL;
 
 	return io_uring_create(entries, &p, params);
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [RFC PATCH liburing 2/2] Add nop testing for IORING_SETUP_SQE_MIXED
  2025-08-29 19:39 [RFC PATCH 0/1] io_uring: mixed sqe support Keith Busch
  2025-08-29 19:39 ` [RFC PATCH liburing 1/2] Add support for IORING_SETUP_SQE_MIXED Keith Busch
  2025-08-29 19:39 ` [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED Keith Busch
@ 2025-08-29 19:39 ` Keith Busch
  2 siblings, 0 replies; 6+ messages in thread
From: Keith Busch @ 2025-08-29 19:39 UTC (permalink / raw)
  To: axboe, io-uring; +Cc: Keith Busch

From: Keith Busch <kbusch@kernel.org>

Add NOP testing for mixed-size SQEs.

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 test/Makefile |   1 +
 test/nop128.c | 105 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)
 create mode 100644 test/nop128.c

diff --git a/test/Makefile b/test/Makefile
index 54251bf2..b3a67393 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -150,6 +150,7 @@ test_srcs := \
 	nop.c \
 	nop32.c \
 	nop32-overflow.c \
+	nop128.c \
 	ooo-file-unreg.c \
 	openat2.c \
 	open-close.c \
diff --git a/test/nop128.c b/test/nop128.c
new file mode 100644
index 00000000..f9a2dd93
--- /dev/null
+++ b/test/nop128.c
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Description: run nop tests on a ring with mixed-size SQEs
+ *
+ */
+#include <errno.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+
+#include "liburing.h"
+#include "helpers.h"
+#include "test.h"
+
+static int seq;
+
+static int test_single_nop(struct io_uring *ring, unsigned req_flags, bool sqe128)
+{
+	struct io_uring_cqe *cqe;
+	struct io_uring_sqe *sqe;
+	int ret;
+
+	if (sqe128)
+		sqe = _io_uring_get_sqe128_mixed(ring);
+	else
+		sqe = io_uring_get_sqe(ring);
+	if (!sqe) {
+		fprintf(stderr, "get sqe failed\n");
+		goto err;
+	}
+
+	io_uring_prep_nop(sqe);
+	sqe->user_data = ++seq;
+	sqe->flags |= req_flags;
+
+	ret = io_uring_submit(ring);
+	if (ret <= 0) {
+		fprintf(stderr, "sqe submit failed: %d\n", ret);
+		goto err;
+	}
+
+	ret = io_uring_wait_cqe(ring, &cqe);
+	if (ret < 0) {
+		fprintf(stderr, "wait completion %d\n", ret);
+		goto err;
+	}
+	if (cqe->user_data != seq) {
+		fprintf(stderr, "Unexpected user_data: %ld\n", (long) cqe->user_data);
+		goto err;
+	}
+	io_uring_cqe_seen(ring, cqe);
+	return T_EXIT_PASS;
+err:
+	return T_EXIT_FAIL;
+}
+
+static int test_ring(unsigned flags)
+{
+	struct io_uring ring;
+	struct io_uring_params p = { };
+	int ret, i;
+
+	p.flags = flags;
+	ret = io_uring_queue_init_params(8, &ring, &p);
+	if (ret) {
+		if (ret == -EINVAL)
+			return T_EXIT_SKIP;
+		fprintf(stderr, "ring setup failed: %d\n", ret);
+		return T_EXIT_FAIL;
+	}
+
+	test_single_nop(&ring, 0, 0);
+
+	for (i = 0; i < 16; i++) {
+		ret = test_single_nop(&ring, 0, 1);
+		if (ret) {
+			printf("fail off %d\n", i);
+			fprintf(stderr, "test_single_nop failed\n");
+			goto err;
+		}
+	}
+err:
+	io_uring_queue_exit(&ring);
+	return ret;
+}
+
+int main(int argc, char *argv[])
+{
+	int ret;
+
+	if (argc > 1)
+		return T_EXIT_SKIP;
+
+	ret = test_ring(IORING_SETUP_SQE_MIXED);
+	if (ret == T_EXIT_SKIP) {
+		return T_EXIT_SKIP;
+	} else if (ret != T_EXIT_PASS) {
+		fprintf(stderr, "Mixed ring test failed\n");
+		return ret;
+	}
+
+	return T_EXIT_PASS;
+}
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED
  2025-08-29 19:39 ` [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED Keith Busch
@ 2025-08-29 21:29   ` Caleb Sander Mateos
  2025-09-02 18:03     ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Caleb Sander Mateos @ 2025-08-29 21:29 UTC (permalink / raw)
  To: Keith Busch; +Cc: axboe, io-uring, Keith Busch

On Fri, Aug 29, 2025 at 12:41 PM Keith Busch <kbusch@meta.com> wrote:
>
> From: Keith Busch <kbusch@kernel.org>
>
> Normal rings support 64b SQEs for posting submissions, while certain
> features require the ring to be configured with IORING_SETUP_SQE128, as
> they need to convey more information per submission. This, in turn,
> makes ALL the SQEs be 128b in size. This is somewhat wasteful and
> inefficient, particularly when only certain SQEs need to be of the
> bigger variant.
>
> This adds support for setting up a ring with mixed SQE sizes, using
> IORING_SETUP_SQE_MIXED. When setup in this mode, SQEs posted to the ring
> may be either 64b or 128b in size. If a SQE is 128b in size, then
> IOSQE_SQE_128B flag is set in the SQE flags to indicate that this is the
> case. If this flag isn't set, the SQE is the normal 64b variant.
>
> SQEs on these types of mixed rings may also utilize NOP with skip
> success set.  This can happen if the ring is one (small) SQE entry away
> from wrapping, and an attempt is made to post a 128b SQE. As SQEs must be
> contigious in the SQ ring, a 128b SQE cannot wrap the ring. For this

typo: "contiguous"

> case, a single NOP SQE is posted with the SKIP flag set. The kernel
> should simply ignore those.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
>  include/uapi/linux/io_uring.h |  9 +++++++++
>  io_uring/fdinfo.c             | 21 ++++++++++-----------
>  io_uring/io_uring.c           | 15 ++++++++++++++-
>  3 files changed, 33 insertions(+), 12 deletions(-)
>
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> index 04ebff33d0e62..9cef9085f52ee 100644
> --- a/include/uapi/linux/io_uring.h
> +++ b/include/uapi/linux/io_uring.h
> @@ -146,6 +146,7 @@ enum io_uring_sqe_flags_bit {
>         IOSQE_ASYNC_BIT,
>         IOSQE_BUFFER_SELECT_BIT,
>         IOSQE_CQE_SKIP_SUCCESS_BIT,
> +       IOSQE_SQE_128B_BIT,
>  };
>
>  /*
> @@ -165,6 +166,8 @@ enum io_uring_sqe_flags_bit {
>  #define IOSQE_BUFFER_SELECT    (1U << IOSQE_BUFFER_SELECT_BIT)
>  /* don't post CQE if request succeeded */
>  #define IOSQE_CQE_SKIP_SUCCESS (1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
> +/* this is a 128b/big-sqe posting */
> +#define IOSQE_SQE_128B         (1U << IOSQE_SQE_128B_BIT)
>
>  /*
>   * io_uring_setup() flags
> @@ -231,6 +234,12 @@ enum io_uring_sqe_flags_bit {
>   */
>  #define IORING_SETUP_CQE_MIXED         (1U << 18)
>
> +/*
> + * Allow both 64b and 128b SQEs. If a 128b SQE is posted, it will have
> + * IOSQE_SQE_128B set in sqe->flags.
> + */
> +#define IORING_SETUP_SQE_MIXED         (1U << 19)
> +
>  enum io_uring_op {
>         IORING_OP_NOP,
>         IORING_OP_READV,
> diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
> index 5c73398387690..4eb6dbb9807be 100644
> --- a/io_uring/fdinfo.c
> +++ b/io_uring/fdinfo.c
> @@ -65,15 +65,10 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
>         unsigned int sq_tail = READ_ONCE(r->sq.tail);
>         unsigned int cq_head = READ_ONCE(r->cq.head);
>         unsigned int cq_tail = READ_ONCE(r->cq.tail);
> -       unsigned int sq_shift = 0;
> -       unsigned int sq_entries;
>         int sq_pid = -1, sq_cpu = -1;
>         u64 sq_total_time = 0, sq_work_time = 0;
>         unsigned int i;
>
> -       if (ctx->flags & IORING_SETUP_SQE128)
> -               sq_shift = 1;
> -
>         /*
>          * we may get imprecise sqe and cqe info if uring is actively running
>          * since we get cached_sq_head and cached_cq_tail without uring_lock
> @@ -89,18 +84,19 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
>         seq_printf(m, "CqTail:\t%u\n", cq_tail);
>         seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail));
>         seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head);
> -       sq_entries = min(sq_tail - sq_head, ctx->sq_entries);
> -       for (i = 0; i < sq_entries; i++) {
> -               unsigned int entry = i + sq_head;
> +       while (sq_head < sq_tail) {
>                 struct io_uring_sqe *sqe;
>                 unsigned int sq_idx;
> +               bool sqe128 = false;
>
>                 if (ctx->flags & IORING_SETUP_NO_SQARRAY)
>                         break;
> -               sq_idx = READ_ONCE(ctx->sq_array[entry & sq_mask]);
> +               sq_idx = READ_ONCE(ctx->sq_array[sq_head & sq_mask]);
>                 if (sq_idx > sq_mask)
>                         continue;
> -               sqe = &ctx->sq_sqes[sq_idx << sq_shift];
> +               sqe = &ctx->sq_sqes[sq_idx];

We still need to double the sq_idx in the IORING_SETUP_SQE128 case, right?

> +               if (sqe->flags & IOSQE_SQE_128B || ctx->flags & IORING_SETUP_SQE128)
> +                       sqe128 = true;
>                 seq_printf(m, "%5u: opcode:%s, fd:%d, flags:%x, off:%llu, "
>                               "addr:0x%llx, rw_flags:0x%x, buf_index:%d "
>                               "user_data:%llu",
> @@ -108,7 +104,7 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
>                            sqe->flags, (unsigned long long) sqe->off,
>                            (unsigned long long) sqe->addr, sqe->rw_flags,
>                            sqe->buf_index, sqe->user_data);
> -               if (sq_shift) {
> +               if (sqe128) {
>                         u64 *sqeb = (void *) (sqe + 1);

Needs to check that IOSQE_SQE_128B isn't set on the last entry in the
SQE array to avoid accessing past the end of it?

>                         int size = sizeof(struct io_uring_sqe) / sizeof(u64);
>                         int j;
> @@ -120,6 +116,9 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
>                         }
>                 }
>                 seq_printf(m, "\n");
> +               sq_head++;
> +               if (sqe128)
> +                       sq_head++;
>         }
>         seq_printf(m, "CQEs:\t%u\n", cq_tail - cq_head);
>         while (cq_head < cq_tail) {
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 6c07efac977ce..7788292be8560 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -2416,6 +2416,8 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
>         if (ctx->flags & IORING_SETUP_SQE128)
>                 head <<= 1;
>         *sqe = &ctx->sq_sqes[head];
> +       if (ctx->flags & IORING_SETUP_SQE_MIXED && (*sqe)->flags & IOSQE_SQE_128B)

Use READ_ONCE() to read userspace-mapped memory?

Do we also need to check that this isn't the last entry in the SQEs
array? Otherwise, it seems like buggy/malicious userspace can cause
the kernel to read past the end of the array.
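
A minimal sketch of one possible shape for that, reusing only the
fields already visible in this hunk (illustrative, not a tested patch):

	if (ctx->flags & IORING_SETUP_SQE_MIXED &&
	    READ_ONCE((*sqe)->flags) & IOSQE_SQE_128B) {
		/* a 128b SQE cannot start in the ring's last slot */
		if (unlikely(head == ctx->sq_entries - 1))
			return false;
		ctx->cached_sq_head++;
	}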

> +               ctx->cached_sq_head++;

Probably worth documenting that when using the SQ indirection array
(i.e. not IORING_SETUP_NO_SQARRAY), the index into the indirection
array is expected to increment twice for a 128-byte SQE even though it
still only uses a single unsigned entry in this array. I can see how
this behavior eases the liburing implementation of this (so the
indirection array is always an identity mapping to the SQE array), but
it might be surprising for applications that reuse SQEs and only
modify the indirection array that they have to skip an extra entry in
the indirection array.

io_get_sequence() seems to have some funky logic that assumes
cached_sq_head counts full I/Os rather than SQEs, so this might break
it.

io_sqring_entries() also computes sq.tail - cached_sq_head, which I
don't think works in the presence of IOSQE_SQE_128B. io_submit_sqes()
probably needs to compare cached_sq_head to the (cached) value of
sq.tail to tell when all the SQEs have been consumed.
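
A sketch of the submit-loop shape that suggests (again illustrative):

	unsigned tail = READ_ONCE(ctx->rings->sq.tail);

	while (ctx->cached_sq_head != tail) {
		const struct io_uring_sqe *sqe;

		/* advances cached_sq_head by 1, or 2 for IOSQE_SQE_128B */
		if (!io_get_sqe(ctx, &sqe))
			break;
		/* ... prep and issue sqe ... */
	}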

>         return true;
>  }
>
> @@ -2793,6 +2795,10 @@ unsigned long rings_size(unsigned int flags, unsigned int sq_entries,
>                 if (cq_entries < 2)
>                         return SIZE_MAX;
>         }
> +       if (flags & IORING_SETUP_SQE_MIXED) {
> +               if (sq_entries < 2)
> +                       return SIZE_MAX;
> +       }
>
>  #ifdef CONFIG_SMP
>         off = ALIGN(off, SMP_CACHE_BYTES);
> @@ -3724,6 +3730,13 @@ static int io_uring_sanitise_params(struct io_uring_params *p)
>         if ((flags & (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED)) ==
>             (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED))
>                 return -EINVAL;
> +       /*
> +        * Nonsensical to ask for SQE128 and mixed SQE support, it's not
> +        * supported to post 64b SQEs on a ring setup with SQE128.
> +        */
> +       if ((flags & (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED)) ==
> +           (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED))
> +               return -EINVAL;
>
>         return 0;
>  }
> @@ -3952,7 +3965,7 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
>                         IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
>                         IORING_SETUP_NO_MMAP | IORING_SETUP_REGISTERED_FD_ONLY |
>                         IORING_SETUP_NO_SQARRAY | IORING_SETUP_HYBRID_IOPOLL |
> -                       IORING_SETUP_CQE_MIXED))
> +                       IORING_SETUP_CQE_MIXED | IORING_SETUP_SQE_MIXED))

There are a few other users of IORING_SETUP_SQE128 that likely need to
be made aware of IOSQE_SQE_128B. For one, uring_sqe_size(), which is
used to determine how large the SQE payload is when copying it for a
uring_cmd that goes async. And the logic for setting IO_URING_F_SQE128
in io_uring_cmd() also needs to be adjusted.
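
For uring_sqe_size(), one possible adjustment is to size by the
individual SQE as well as the ring flags (sketch; the added sqe
parameter is illustrative):

	static inline size_t uring_sqe_size(struct io_ring_ctx *ctx,
					    const struct io_uring_sqe *sqe)
	{
		if ((ctx->flags & IORING_SETUP_SQE128) ||
		    (sqe->flags & IOSQE_SQE_128B))
			return 2 * sizeof(struct io_uring_sqe);
		return sizeof(struct io_uring_sqe);
	}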

Best,
Caleb

>                 return -EINVAL;
>
>         return io_uring_create(entries, &p, params);
> --
> 2.47.3
>
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED
  2025-08-29 21:29   ` Caleb Sander Mateos
@ 2025-09-02 18:03     ` Keith Busch
  0 siblings, 0 replies; 6+ messages in thread
From: Keith Busch @ 2025-09-02 18:03 UTC (permalink / raw)
  To: Caleb Sander Mateos; +Cc: Keith Busch, axboe, io-uring

On Fri, Aug 29, 2025 at 02:29:39PM -0700, Caleb Sander Mateos wrote:
> 
> There are a few other users of IORING_SETUP_SQE128 that likely need to
> be made aware of IOSQE_SQE_128B. For one, uring_sqe_size(), which is
> used to determine how large the SQE payload is when copying it for a
> uring_cmd that goes async. And the logic for setting IO_URING_F_SQE128
> in io_uring_cmd() also needs to be adjusted.

Yeah, you're right. I did the liburing side first, and was a bit excited
to see the nop test was working correctly so quickly that I didn't
finish all the pieces. I'll get all that worked out on the next version.
Thanks for the comments!

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2025-09-02 18:03 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-29 19:39 [RFC PATCH 0/1] io_uring: mixed sqe support Keith Busch
2025-08-29 19:39 ` [RFC PATCH liburing 1/2] Add support for IORING_SETUP_SQE_MIXED Keith Busch
2025-08-29 19:39 ` [RFC PATCH 1/1] io_uring: add support for IORING_SETUP_SQE_MIXED Keith Busch
2025-08-29 21:29   ` Caleb Sander Mateos
2025-09-02 18:03     ` Keith Busch
2025-08-29 19:39 ` [RFC PATCH liburing 2/2] Add nop testing " Keith Busch

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox