public inbox for io-uring@vger.kernel.org
 help / color / mirror / Atom feed
* io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
@ 2026-03-09 21:20 Tom Ryan
  2026-03-09 21:29 ` Keith Busch
  0 siblings, 1 reply; 8+ messages in thread
From: Tom Ryan @ 2026-03-09 21:20 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Greg KH


[-- Attachment #1.1: Type: text/plain, Size: 38 bytes --]

Hi All,

Patch attached.

Thanks,
Tom

[-- Attachment #1.2: Type: text/html, Size: 162 bytes --]

[-- Attachment #2: 0001-io_uring-validate-physical-SQE-index-for-SQE_MIXED-128b-ops.patch --]
[-- Type: application/octet-stream, Size: 1869 bytes --]

From 4ba67b00d176e9f0ddff8fc80d5c28051d580f8b Mon Sep 17 00:00:00 2001
From: Tom Ryan <ryan36005@gmail.com>
Date: Mon, 9 Mar 2026 09:14:59 -0700
Subject: [PATCH] io_uring: validate physical SQE index for SQE_MIXED 128-byte
 ops

When IORING_SETUP_SQE_MIXED is used with sq_array (the default, without
IORING_SETUP_NO_SQARRAY), the boundary check for 128-byte SQE operations
in io_init_req() validates the logical SQ head position but not the
physical index obtained from sq_array.

Since sq_array allows user-controlled remapping of logical to physical
SQE indices, an unprivileged user can set sq_array[N] = sq_entries - 1,
placing a 128-byte operation at the last physical SQE slot. The
subsequent 128-byte memcpy in io_uring_cmd_sqe_copy() then reads 64
bytes past the end of the SQE array.

Fix this by checking that the physical SQE index (derived from the sqe
pointer) has room for the full 128-byte read, i.e., is not the last
entry in the array.

Fixes: 1cba30bf9fdd ("io_uring: add support for IORING_SETUP_SQE_MIXED")
Signed-off-by: Tom Ryan <ryan36005@gmail.com>
---
 io_uring/io_uring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index aa9570316..2fa72d5e5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
 		    !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
 			return io_init_fail_req(req, -EINVAL);
+		/* Validate physical SQE index has room for 128-byte read */
+		if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
+			return io_init_fail_req(req, -EINVAL);
 		/*
 		 * A 128b operation on a mixed SQ uses two entries, so we have
 		 * to increment the head and cached refs, and decrement what's
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
  2026-03-09 21:20 io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass Tom Ryan
@ 2026-03-09 21:29 ` Keith Busch
  2026-03-09 21:45   ` Caleb Sander Mateos
  0 siblings, 1 reply; 8+ messages in thread
From: Keith Busch @ 2026-03-09 21:29 UTC (permalink / raw)
  To: Tom Ryan; +Cc: io-uring, Jens Axboe, Greg KH

On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> Patch attached.

You can just submit the patch as text in the mail message.

> @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
>  		if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
>  		    !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
>  			return io_init_fail_req(req, -EINVAL);
> +		/* Validate physical SQE index has room for 128-byte read */
> +		if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> +			return io_init_fail_req(req, -EINVAL);

Isn't this new check redundant with the "left < 2" check preceding it?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
  2026-03-09 21:29 ` Keith Busch
@ 2026-03-09 21:45   ` Caleb Sander Mateos
  2026-03-09 21:54     ` Keith Busch
  0 siblings, 1 reply; 8+ messages in thread
From: Caleb Sander Mateos @ 2026-03-09 21:45 UTC (permalink / raw)
  To: Keith Busch; +Cc: Tom Ryan, io-uring, Jens Axboe, Greg KH

On Mon, Mar 9, 2026 at 2:34 PM Keith Busch <kbusch@kernel.org> wrote:
>
> On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> > Patch attached.
>
> You can just submit the patch as text in the mail message.
>
> > @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> >               if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
> >                   !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
> >                       return io_init_fail_req(req, -EINVAL);
> > +             /* Validate physical SQE index has room for 128-byte read */
> > +             if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> > +                     return io_init_fail_req(req, -EINVAL);
>
> Isn't this new check redundant with the "left < 2" check preceding it?

I think it's orthogonal to *left < 2. The number of SQEs remaining to
submit is unrelated to the index of each SQE. It is, however,
redundant with !(ctx->cached_sq_head & (ctx->sq_entries - 1)), but
only in the IORING_SETUP_NO_SQARRAY case. For
non-IORING_SETUP_NO_SQARRAY rings, the SQ indirection array entry can
point to the last entry of the SQE array, causing the big SQE to
extend past the end. Probably, this added condition can replace
!(ctx->cached_sq_head & (ctx->sq_entries - 1)). That checks whether
this is the last entry *in the SQ indirection array*, but it should be
checking the SQE array.

Best,
Caleb

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
  2026-03-09 21:45   ` Caleb Sander Mateos
@ 2026-03-09 21:54     ` Keith Busch
  2026-03-10  5:20       ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops Tom Ryan
  0 siblings, 1 reply; 8+ messages in thread
From: Keith Busch @ 2026-03-09 21:54 UTC (permalink / raw)
  To: Caleb Sander Mateos; +Cc: Tom Ryan, io-uring, Jens Axboe, Greg KH

On Mon, Mar 09, 2026 at 02:45:59PM -0700, Caleb Sander Mateos wrote:
> On Mon, Mar 9, 2026 at 2:34 PM Keith Busch <kbusch@kernel.org> wrote:
> >
> > On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> > > Patch attached.
> >
> > You can just submit the patch as text in the mail message.
> >
> > > @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > >               if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
> > >                   !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
> > >                       return io_init_fail_req(req, -EINVAL);
> > > +             /* Validate physical SQE index has room for 128-byte read */
> > > +             if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> > > +                     return io_init_fail_req(req, -EINVAL);
> >
> > Isn't this new check redundant with the "left < 2" check preceding it?
> 
> I think it's orthogonal to *left < 2. The number of SQEs remaining to
> submit is unrelated to the index of each SQE. It is, however,
> redundant with !(ctx->cached_sq_head & (ctx->sq_entries - 1)), but
> only in the IORING_SETUP_NO_SQARRAY case. For
> non-IORING_SETUP_NO_SQARRAY rings, the SQ indirection array entry can
> point to the last entry of the SQE array, causing the big SQE to
> extend past the end. Probably, this added condition can replace
> !(ctx->cached_sq_head & (ctx->sq_entries - 1)). That checks whether
> this is the last entry *in the SQ indirection array*, but it should be
> checking the SQE array.

Oh, right. The *left < 2 check was to confirm we have contiguous entries
for a big sqe, but you could index to an unaligned end with the sq_array.

Folding this into the previous 'if' sounds good. And please consider an
addition to liburing tests.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops
  2026-03-09 21:54     ` Keith Busch
@ 2026-03-10  5:20       ` Tom Ryan
  2026-03-10  5:20         ` [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for " Tom Ryan
  2026-03-10 14:44         ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED " Jens Axboe
  0 siblings, 2 replies; 8+ messages in thread
From: Tom Ryan @ 2026-03-10  5:20 UTC (permalink / raw)
  To: io-uring; +Cc: axboe, gregkh, kbusch, csander, Tom Ryan

When IORING_SETUP_SQE_MIXED is used without IORING_SETUP_NO_SQARRAY,
the boundary check for 128-byte SQE operations in io_init_req()
validated the logical SQ head position rather than the physical SQE
index.

The existing check:

  !(ctx->cached_sq_head & (ctx->sq_entries - 1))

ensures the logical position isn't at the end of the ring, which is
correct for NO_SQARRAY rings where physical == logical. However, when
sq_array is present, an unprivileged user can remap any logical
position to an arbitrary physical index via sq_array. Setting
sq_array[N] = sq_entries - 1 places a 128-byte operation at the last
physical SQE slot, causing the 128-byte memcpy in
io_uring_cmd_sqe_copy() to read 64 bytes past the end of the SQE
array.

Replace the cached_sq_head alignment check with a direct validation
of the physical SQE index, which correctly handles both sq_array and
NO_SQARRAY cases.

Fixes: 1cba30bf9fdd ("io_uring: add support for IORING_SETUP_SQE_MIXED")
Signed-off-by: Tom Ryan <ryan36005@gmail.com>
---
v1 -> v2:
 - Replace the cached_sq_head alignment check rather than adding a
   separate check, per Caleb Sander Mateos' observation that the new
   physical index validation subsumes the old logical check for both
   sq_array and NO_SQARRAY cases
 - Fold into existing conditional per Keith Busch
 - liburing test sent separately

 io_uring/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index aa9570316..d9a307384 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1745,7 +1745,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		 * well as 2 contiguous entries.
 		 */
 		if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
-		    !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
+		    (unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
 			return io_init_fail_req(req, -EINVAL);
 		/*
 		 * A 128b operation on a mixed SQ uses two entries, so we have
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for 128-byte ops
  2026-03-10  5:20       ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops Tom Ryan
@ 2026-03-10  5:20         ` Tom Ryan
  2026-03-10 13:01           ` Jens Axboe
  2026-03-10 14:44         ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED " Jens Axboe
  1 sibling, 1 reply; 8+ messages in thread
From: Tom Ryan @ 2026-03-10  5:20 UTC (permalink / raw)
  To: io-uring; +Cc: axboe, gregkh, kbusch, csander, Tom Ryan

Add a test for the kernel fix that replaces the cached_sq_head alignment
check with physical SQE index validation in io_init_req() for SQE_MIXED
128-byte operations.

test_valid_position: verifies that a NOP128 at a valid physical slot
(identity-mapped via sq_array) succeeds.

test_oob_boundary: verifies that a NOP128 remapped via sq_array to the
last physical SQE slot is rejected with -EINVAL, preventing a 64-byte
OOB read past the SQE array.

Signed-off-by: Tom Ryan <ryan36005@gmail.com>
---
 test/Makefile             |   1 +
 test/sqe-mixed-boundary.c | 182 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 183 insertions(+)
 create mode 100644 test/sqe-mixed-boundary.c

diff --git a/test/Makefile b/test/Makefile
index 7b94a1f..a10d44c 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -253,6 +253,7 @@ test_srcs := \
 	sq-poll-share.c \
 	sqpoll-sleep.c \
 	sq-space_left.c \
+	sqe-mixed-boundary.c \
 	sqe-mixed-nop.c \
 	sqe-mixed-bad-wrap.c \
 	sqe-mixed-uring_cmd.c \
diff --git a/test/sqe-mixed-boundary.c b/test/sqe-mixed-boundary.c
new file mode 100644
index 0000000..f2e6b47
--- /dev/null
+++ b/test/sqe-mixed-boundary.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Description: test SQE_MIXED physical SQE boundary validation with sq_array
+ *
+ * Verify that 128-byte operations are correctly rejected when sq_array
+ * remaps them to the last physical SQE slot, preventing a 64-byte OOB
+ * read past the SQE array.
+ */
+#include <stdio.h>
+#include <string.h>
+
+#include "liburing.h"
+#include "helpers.h"
+#include "test.h"
+
+#define NENTRIES	64
+
+/*
+ * Positive test: NOP128 at a valid physical position should succeed.
+ */
+static int test_valid_position(void)
+{
+	struct io_uring ring;
+	struct io_uring_cqe *cqe;
+	struct io_uring_sqe *sqe;
+	int ret;
+
+	ret = io_uring_queue_init(NENTRIES, &ring, IORING_SETUP_SQE_MIXED);
+	if (ret) {
+		if (ret == -EINVAL)
+			return T_EXIT_SKIP;
+		fprintf(stderr, "ring init: %d\n", ret);
+		return T_EXIT_FAIL;
+	}
+
+	sqe = io_uring_get_sqe(&ring);
+	io_uring_prep_nop(sqe);
+	sqe->user_data = 1;
+
+	sqe = io_uring_get_sqe128(&ring);
+	if (!sqe) {
+		fprintf(stderr, "get_sqe128 failed\n");
+		goto fail;
+	}
+	io_uring_prep_nop128(sqe);
+	sqe->user_data = 2;
+
+	ret = io_uring_submit(&ring);
+	if (ret < 0) {
+		fprintf(stderr, "submit: %d\n", ret);
+		goto fail;
+	}
+
+	ret = io_uring_wait_cqe(&ring, &cqe);
+	if (ret) {
+		fprintf(stderr, "wait_cqe: %d\n", ret);
+		goto fail;
+	}
+	io_uring_cqe_seen(&ring, cqe);
+
+	ret = io_uring_wait_cqe(&ring, &cqe);
+	if (ret) {
+		fprintf(stderr, "wait_cqe: %d\n", ret);
+		goto fail;
+	}
+	if (cqe->user_data == 2 && cqe->res != 0) {
+		fprintf(stderr, "NOP128 at valid position failed: %d\n",
+			cqe->res);
+		io_uring_cqe_seen(&ring, cqe);
+		goto fail;
+	}
+	io_uring_cqe_seen(&ring, cqe);
+
+	io_uring_queue_exit(&ring);
+	return T_EXIT_PASS;
+fail:
+	io_uring_queue_exit(&ring);
+	return T_EXIT_FAIL;
+}
+
+/*
+ * Negative test: NOP128 at the last physical SQE slot via sq_array remap
+ * must be rejected. Without the kernel fix, this triggers a 64-byte OOB
+ * read in io_uring_cmd_sqe_copy().
+ */
+static int test_oob_boundary(void)
+{
+	struct io_uring ring;
+	struct io_uring_cqe *cqe;
+	struct io_uring_sqe *sqe;
+	unsigned mask;
+	int ret, i, found;
+
+	ret = io_uring_queue_init(NENTRIES, &ring, IORING_SETUP_SQE_MIXED);
+	if (ret) {
+		if (ret == -EINVAL)
+			return T_EXIT_SKIP;
+		fprintf(stderr, "ring init: %d\n", ret);
+		return T_EXIT_FAIL;
+	}
+
+	mask = *ring.sq.kring_entries - 1;
+
+	/* Advance internal tail: NOP (1) + NOP128 (2) = 3 slots */
+	sqe = io_uring_get_sqe(&ring);
+	io_uring_prep_nop(sqe);
+	sqe->user_data = 1;
+
+	sqe = io_uring_get_sqe128(&ring);
+	if (!sqe) {
+		fprintf(stderr, "get_sqe128 failed\n");
+		goto fail;
+	}
+
+	/*
+	 * Override: remap logical position 1 to last physical slot.
+	 * Prep NOP128 there instead of the position get_sqe128 returned.
+	 */
+	ring.sq.array[1] = mask;
+	memset(&ring.sq.sqes[mask], 0, sizeof(struct io_uring_sqe));
+	io_uring_prep_nop128(&ring.sq.sqes[mask]);
+	ring.sq.sqes[mask].user_data = 2;
+
+	ret = io_uring_submit(&ring);
+	if (ret < 0) {
+		fprintf(stderr, "submit: %d\n", ret);
+		goto fail;
+	}
+
+	found = 0;
+	for (i = 0; i < 3; i++) {
+		ret = io_uring_wait_cqe(&ring, &cqe);
+		if (ret)
+			break;
+		if (cqe->user_data == 2) {
+			if (cqe->res != -EINVAL) {
+				fprintf(stderr,
+					"NOP128 at last slot: expected -EINVAL, got %d\n",
+					cqe->res);
+				io_uring_cqe_seen(&ring, cqe);
+				goto fail;
+			}
+			found = 1;
+		}
+		io_uring_cqe_seen(&ring, cqe);
+	}
+
+	if (!found) {
+		fprintf(stderr, "no CQE for NOP128 boundary test\n");
+		goto fail;
+	}
+
+	io_uring_queue_exit(&ring);
+	return T_EXIT_PASS;
+fail:
+	io_uring_queue_exit(&ring);
+	return T_EXIT_FAIL;
+}
+
+int main(int argc, char *argv[])
+{
+	int ret;
+
+	if (argc > 1)
+		return T_EXIT_SKIP;
+
+	ret = test_valid_position();
+	if (ret == T_EXIT_SKIP)
+		return T_EXIT_SKIP;
+	if (ret) {
+		fprintf(stderr, "test_valid_position failed\n");
+		return T_EXIT_FAIL;
+	}
+
+	ret = test_oob_boundary();
+	if (ret) {
+		fprintf(stderr, "test_oob_boundary failed\n");
+		return ret;
+	}
+
+	return T_EXIT_PASS;
+}
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for 128-byte ops
  2026-03-10  5:20         ` [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for " Tom Ryan
@ 2026-03-10 13:01           ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2026-03-10 13:01 UTC (permalink / raw)
  To: Tom Ryan, io-uring; +Cc: gregkh, kbusch, csander

On 3/9/26 11:20 PM, Tom Ryan wrote:
> +/*
> + * Negative test: NOP128 at the last physical SQE slot via sq_array remap
> + * must be rejected. Without the kernel fix, this triggers a 64-byte OOB
> + * read in io_uring_cmd_sqe_copy().
> + */
> +static int test_oob_boundary(void)
> +{
> +	struct io_uring ring;
> +	struct io_uring_cqe *cqe;
> +	struct io_uring_sqe *sqe;
> +	unsigned mask;
> +	int ret, i, found;
> +
> +	ret = io_uring_queue_init(NENTRIES, &ring, IORING_SETUP_SQE_MIXED);
> +	if (ret) {
> +		if (ret == -EINVAL)
> +			return T_EXIT_SKIP;
> +		fprintf(stderr, "ring init: %d\n", ret);
> +		return T_EXIT_FAIL;
> +	}

I don't think this will work, because this function requires the SQE
redirection array, and liburing will wrap the above in SETUP_NO_SQARRAY.
Is this an LLM-written test case, or a conversion of a raw use case? Did
you actually try running the test case?

You can certainly make it work; you'd have to use
__io_uring_queue_init_params() to set up the ring without
IORING_SETUP_NO_SQARRAY.
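
Something along these lines (untested fragment; assumes liburing's
__io_uring_queue_init_params() variant that takes the trailing buf/buf_size
arguments, and would replace the io_uring_queue_init() call in the test):

```c
/*
 * Set up the ring directly so liburing does not add
 * IORING_SETUP_NO_SQARRAY, leaving the sq_array indirection in place
 * for the test to remap.
 */
struct io_uring ring;
struct io_uring_params p = { };
int ret;

p.flags = IORING_SETUP_SQE_MIXED;
ret = __io_uring_queue_init_params(NENTRIES, &ring, &p, NULL, 0);
if (ret) {
	if (ret == -EINVAL)
		return T_EXIT_SKIP;	/* kernel lacks SQE_MIXED */
	fprintf(stderr, "ring init: %d\n", ret);
	return T_EXIT_FAIL;
}
```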

> +	found = 0;
> +	for (i = 0; i < 3; i++) {
> +		ret = io_uring_wait_cqe(&ring, &cqe);
> +		if (ret)
> +			break;
> +		if (cqe->user_data == 2) {
> +			if (cqe->res != -EINVAL) {
> +				fprintf(stderr,
> +					"NOP128 at last slot: expected -EINVAL, got %d\n",
> +					cqe->res);
> +				io_uring_cqe_seen(&ring, cqe);
> +				goto fail;
> +			}
> +			found = 1;
> +		}
> +		io_uring_cqe_seen(&ring, cqe);
> +	}

This one puzzles me too - you submit 2 SQEs, yet you wait for 3. This
will just sit forever until killed by the test suite timeout.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops
  2026-03-10  5:20       ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops Tom Ryan
  2026-03-10  5:20         ` [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for " Tom Ryan
@ 2026-03-10 14:44         ` Jens Axboe
  1 sibling, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2026-03-10 14:44 UTC (permalink / raw)
  To: io-uring, Tom Ryan; +Cc: gregkh, kbusch, csander


On Mon, 09 Mar 2026 22:20:02 -0700, Tom Ryan wrote:
> When IORING_SETUP_SQE_MIXED is used without IORING_SETUP_NO_SQARRAY,
> the boundary check for 128-byte SQE operations in io_init_req()
> validated the logical SQ head position rather than the physical SQE
> index.
> 
> The existing check:
> 
> [...]

Applied, thanks!

[1/1] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops
      commit: c76e0f1d77f87e258193c2628253782d5ff414c7

Best regards,
-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-03-10 14:44 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-09 21:20 io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass Tom Ryan
2026-03-09 21:29 ` Keith Busch
2026-03-09 21:45   ` Caleb Sander Mateos
2026-03-09 21:54     ` Keith Busch
2026-03-10  5:20       ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED 128-byte ops Tom Ryan
2026-03-10  5:20         ` [PATCH liburing] test/sqe-mixed-boundary: validate physical SQE index for " Tom Ryan
2026-03-10 13:01           ` Jens Axboe
2026-03-10 14:44         ` [PATCH v2] io_uring: fix physical SQE bounds check for SQE_MIXED " Jens Axboe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox