* io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
@ 2026-03-09 21:20 Tom Ryan
2026-03-09 21:29 ` Keith Busch
0 siblings, 1 reply; 4+ messages in thread
From: Tom Ryan @ 2026-03-09 21:20 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, Greg KH
[-- Attachment #1.1: Type: text/plain, Size: 38 bytes --]
Hi All,
Patch attached.
Thanks,
Tom
[-- Attachment #2: 0001-io_uring-validate-physical-SQE-index-for-SQE_MIXED-128b-ops.patch --]
[-- Type: application/octet-stream, Size: 1869 bytes --]
From 4ba67b00d176e9f0ddff8fc80d5c28051d580f8b Mon Sep 17 00:00:00 2001
From: Tom Ryan <ryan36005@gmail.com>
Date: Mon, 9 Mar 2026 09:14:59 -0700
Subject: [PATCH] io_uring: validate physical SQE index for SQE_MIXED 128-byte
ops
When IORING_SETUP_SQE_MIXED is used with sq_array (the default, without
IORING_SETUP_NO_SQARRAY), the boundary check for 128-byte SQE operations
in io_init_req() validates the logical SQ head position but not the
physical index obtained from sq_array.
Since sq_array allows user-controlled remapping of logical to physical
SQE indices, an unprivileged user can set sq_array[N] = sq_entries - 1,
placing a 128-byte operation at the last physical SQE slot. The
subsequent 128-byte memcpy in io_uring_cmd_sqe_copy() then reads 64
bytes past the end of the SQE array.
Fix this by checking that the physical SQE index (derived from the sqe
pointer) has room for the full 128-byte read, i.e., is not the last
entry in the array.
Fixes: 1cba30bf9fdd ("io_uring: add support for IORING_SETUP_SQE_MIXED")
Signed-off-by: Tom Ryan <ryan36005@gmail.com>
---
io_uring/io_uring.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index aa9570316..2fa72d5e5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
!(ctx->cached_sq_head & (ctx->sq_entries - 1)))
return io_init_fail_req(req, -EINVAL);
+ /* Validate physical SQE index has room for 128-byte read */
+ if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
+ return io_init_fail_req(req, -EINVAL);
/*
* A 128b operation on a mixed SQ uses two entries, so we have
* to increment the head and cached refs, and decrement what's
--
2.50.1 (Apple Git-155)
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
2026-03-09 21:20 io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass Tom Ryan
@ 2026-03-09 21:29 ` Keith Busch
2026-03-09 21:45 ` Caleb Sander Mateos
0 siblings, 1 reply; 4+ messages in thread
From: Keith Busch @ 2026-03-09 21:29 UTC (permalink / raw)
To: Tom Ryan; +Cc: io-uring, Jens Axboe, Greg KH
On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> Patch attached.
You can just submit the patch as text in the mail message.
> @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
> !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
> return io_init_fail_req(req, -EINVAL);
> + /* Validate physical SQE index has room for 128-byte read */
> + if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> + return io_init_fail_req(req, -EINVAL);
Isn't this new check redundant with the "left < 2" check preceding it?
* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
2026-03-09 21:29 ` Keith Busch
@ 2026-03-09 21:45 ` Caleb Sander Mateos
2026-03-09 21:54 ` Keith Busch
0 siblings, 1 reply; 4+ messages in thread
From: Caleb Sander Mateos @ 2026-03-09 21:45 UTC (permalink / raw)
To: Keith Busch; +Cc: Tom Ryan, io-uring, Jens Axboe, Greg KH
On Mon, Mar 9, 2026 at 2:34 PM Keith Busch <kbusch@kernel.org> wrote:
>
> On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> > Patch attached.
>
> You can just submit the patch as text in the mail message.
>
> > @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
> > !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
> > return io_init_fail_req(req, -EINVAL);
> > + /* Validate physical SQE index has room for 128-byte read */
> > + if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> > + return io_init_fail_req(req, -EINVAL);
>
> Isn't this new check redundant with the "left < 2" check preceding it?
I think it's orthogonal to *left < 2. How many SQEs are remaining to
submit is unrelated to the index of each SQE. It is, however,
redundant with !(ctx->cached_sq_head & (ctx->sq_entries - 1)), but
only in the IORING_SETUP_NO_SQARRAY case. For
non-IORING_SETUP_NO_SQARRAY rings, the SQ indirection array entry can
point to the last entry of the SQE array, causing the big SQE to
extend past the end. Probably, this added condition can replace
!(ctx->cached_sq_head & (ctx->sq_entries - 1)). That checks whether
this is the last entry *in the SQ indirection array*, but it should be
checking the SQE array.
Best,
Caleb
* Re: io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
2026-03-09 21:45 ` Caleb Sander Mateos
@ 2026-03-09 21:54 ` Keith Busch
0 siblings, 0 replies; 4+ messages in thread
From: Keith Busch @ 2026-03-09 21:54 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Tom Ryan, io-uring, Jens Axboe, Greg KH
On Mon, Mar 09, 2026 at 02:45:59PM -0700, Caleb Sander Mateos wrote:
> On Mon, Mar 9, 2026 at 2:34 PM Keith Busch <kbusch@kernel.org> wrote:
> >
> > On Mon, Mar 09, 2026 at 02:20:38PM -0700, Tom Ryan wrote:
> > > Patch attached.
> >
> > You can just submit the patch as text in the mail message.
> >
> > > @@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
> > > !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
> > > return io_init_fail_req(req, -EINVAL);
> > > + /* Validate physical SQE index has room for 128-byte read */
> > > + if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
> > > + return io_init_fail_req(req, -EINVAL);
> >
> > Isn't this new check redundant with the "left < 2" check preceding it?
>
> I think it's orthogonal to *left < 2. How many SQEs are remaining to
> submit is unrelated to the index of each SQE. It is, however,
> redundant with !(ctx->cached_sq_head & (ctx->sq_entries - 1)), but
> only in the IORING_SETUP_NO_SQARRAY case. For
> non-IORING_SETUP_NO_SQARRAY rings, the SQ indirection array entry can
> point to the last entry of the SQE array, causing the big SQE to
> extend past the end. Probably, this added condition can replace
> !(ctx->cached_sq_head & (ctx->sq_entries - 1)). That checks whether
> this is the last entry *in the SQ indirection array*, but it should be
> checking the SQE array.
Oh, right. The left < 2 was to confirm we have contiguous entries for a
big sqe, but you could index to an unaligned end with the sqarray.
Folding this into the previous 'if' sounds good. And please consider an
addition to liburing tests.