public inbox for io-uring@vger.kernel.org
 help / color / mirror / Atom feed
* io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass
@ 2026-03-09 21:20 Tom Ryan
  2026-03-09 21:29 ` Keith Busch
  0 siblings, 1 reply; 4+ messages in thread
From: Tom Ryan @ 2026-03-09 21:20 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Greg KH


[-- Attachment #1.1: Type: text/plain, Size: 38 bytes --]

Hi All,

Patch attached.

Thanks,
Tom

[-- Attachment #1.2: Type: text/html, Size: 162 bytes --]

[-- Attachment #2: 0001-io_uring-validate-physical-SQE-index-for-SQE_MIXED-128b-ops.patch --]
[-- Type: application/octet-stream, Size: 1869 bytes --]

From 4ba67b00d176e9f0ddff8fc80d5c28051d580f8b Mon Sep 17 00:00:00 2001
From: Tom Ryan <ryan36005@gmail.com>
Date: Mon, 9 Mar 2026 09:14:59 -0700
Subject: [PATCH] io_uring: validate physical SQE index for SQE_MIXED 128-byte
 ops

When IORING_SETUP_SQE_MIXED is used with sq_array (the default, without
IORING_SETUP_NO_SQARRAY), the boundary check for 128-byte SQE operations
in io_init_req() validates the logical SQ head position but not the
physical index obtained from sq_array.

Because sq_array is mapped writable into userspace, the logical-to-
physical SQE mapping is entirely user controlled: an unprivileged user
can set sq_array[N] = sq_entries - 1, placing a 128-byte operation at
the last physical SQE slot. The subsequent 128-byte memcpy in
io_uring_cmd_sqe_copy() then reads 64 bytes past the end of the SQE
array.

Fix this by checking that the physical SQE index (derived from the sqe
pointer) has room for the full 128-byte read, i.e., is not the last
entry in the array.

Fixes: 1cba30bf9fdd ("io_uring: add support for IORING_SETUP_SQE_MIXED")
Signed-off-by: Tom Ryan <ryan36005@gmail.com>
---
 io_uring/io_uring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index aa9570316..2fa72d5e5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1747,6 +1747,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||
 		    !(ctx->cached_sq_head & (ctx->sq_entries - 1)))
 			return io_init_fail_req(req, -EINVAL);
+		/* The physical index comes from sq_array; ensure room for a 128b read */
+		if ((unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)
+			return io_init_fail_req(req, -EINVAL);
 		/*
 		 * A 128b operation on a mixed SQ uses two entries, so we have
 		 * to increment the head and cached refs, and decrement what's
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2026-03-09 21:54 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-09 21:20 io_uring: OOB read in SQE_MIXED mode via sq_array physical index bypass Tom Ryan
2026-03-09 21:29 ` Keith Busch
2026-03-09 21:45   ` Caleb Sander Mateos
2026-03-09 21:54     ` Keith Busch

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox