* [PATCHSET 0/6] Various bug fixes
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring
Hi,
A random bag of fixes and cleanups. In detail:
- Patch 1, defensive cleanup for a patch merged in this merge window.
The path isn't reachable, but it's confusing and should get cleaned up.
- Patch 2, spectre masking for file updates.
- Patch 3, defensive cleanup for the imu cache, using kvfree()
consistently. Idea being that it'd be easy to mess this up in the
future if caching changes.
- Patch 4, more defensive cleanup, hardening to ensure that only
values >= 0 are passed in as bytes consumed on the kbuf path.
- Patch 5, actual fix for futex, where multiple partial wakeups would
end up waking the same queue multiple times, rather than moving on
to the next one.
- Patch 6, actual fix for ring resizing with CQE32/SQE128 and pending
entries in the SQ or CQ rings.
io_uring/alloc_cache.h | 2 +-
io_uring/futex.c | 4 +++-
io_uring/io_uring.c | 3 ++-
io_uring/register.c | 36 ++++++++++++++++++++++++++++--------
io_uring/rsrc.c | 5 ++++-
io_uring/rsrc.h | 9 +++++++--
io_uring/rw.c | 4 ++--
7 files changed, 47 insertions(+), 16 deletions(-)
--
Jens Axboe
* [PATCH 1/6] io_uring: fix spurious fput in registered ring path
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Fix an issue with io_uring_ctx_get_file() not gating fput() on whether
the file descriptor is a registered/direct one.
Fixes: c5e9f6a96bf7 ("io_uring: unify getting ctx from passed in file descriptor")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/io_uring.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index dd6326dc5f88..4ed998d60c09 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2575,7 +2575,8 @@ struct file *io_uring_ctx_get_file(unsigned int fd, bool registered)
return ERR_PTR(-EBADF);
if (io_is_uring_fops(file))
return file;
- fput(file);
+ if (!registered)
+ fput(file);
return ERR_PTR(-EOPNOTSUPP);
}
--
2.53.0
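As an illustration of the reference-counting rule the fix enforces, here is a userspace toy model (all names here are hypothetical, not kernel API): a normal fd lookup via fget() takes a file reference that the error path must drop, while a registered/direct ring fd lookup does not, so an unconditional fput() on the error path would drop a reference that was never taken.

```c
#include <assert.h>

/* Toy model: refs stands in for the file's reference count. */
struct file { int refs; };

static void fget_model(struct file *f) { f->refs++; }
static void fput_model(struct file *f) { f->refs--; }

/* Models io_uring_ctx_get_file() after the fix.
 * Returns 0 on success (caller keeps any ref taken), -1 on error. */
static int ctx_get_file_model(struct file *f, int is_uring, int registered)
{
	if (!registered)
		fget_model(f);		/* the fget() lookup took a reference */
	if (is_uring)
		return 0;		/* success: ref (if any) passes to caller */
	if (!registered)		/* the fix: only drop what we took */
		fput_model(f);
	return -1;			/* models ERR_PTR(-EOPNOTSUPP) */
}
```

With the fix, the registered-fd error path leaves the refcount untouched; before it, the unconditional fput() would have underflowed the count.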
* [PATCH 2/6] io_uring/rsrc: unify nospec indexing for direct descriptors
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
For file updates, the node reset isn't capping the value via
array_index_nospec() like the other paths do. Ensure it's all sane and
have the update path do the proper capping as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/rsrc.c | 3 +++
io_uring/rsrc.h | 9 +++++++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index fd36e0e319a2..c042054c3b5f 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -238,6 +238,9 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
continue;
i = up->offset + done;
+ if (i >= ctx->file_table.data.nr)
+ break;
+ i = array_index_nospec(i, ctx->file_table.data.nr);
if (io_reset_rsrc_node(ctx, &ctx->file_table.data, i))
io_file_bitmap_clear(&ctx->file_table, i);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index cff0f8834c35..44e3386f7c1c 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -109,10 +109,15 @@ static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node
}
static inline bool io_reset_rsrc_node(struct io_ring_ctx *ctx,
- struct io_rsrc_data *data, int index)
+ struct io_rsrc_data *data,
+ unsigned int index)
{
- struct io_rsrc_node *node = data->nodes[index];
+ struct io_rsrc_node *node;
+ if (index >= data->nr)
+ return false;
+ index = array_index_nospec(index, data->nr);
+ node = data->nodes[index];
if (!node)
return false;
io_put_rsrc_node(ctx, node);
--
2.53.0
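The array_index_nospec() pattern can be sketched in userspace. This is a simplified model of the branchless mask from the kernel's generic include/linux/nospec.h, not the kernel implementation itself (architectures may use their own barrier sequences); it relies on arithmetic right shift of a negative long, as the kernel's generic version does.

```c
#include <assert.h>
#include <limits.h>

/* Compute an all-ones mask when index < size and all-zeroes otherwise,
 * without a conditional branch the CPU could mispredict, then AND the
 * index with the mask. */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
	/* index < size: neither OR operand has the top bit set, the
	 * negation is negative, and the shift smears the sign bit to
	 * all ones.  index >= size: size - 1 - index wraps and sets
	 * the top bit, giving a zero mask. */
	unsigned long mask = (unsigned long)
		(~(long)(index | (size - 1UL - index)) >>
		 (sizeof(long) * CHAR_BIT - 1));

	return index & mask;
}
```

The patch keeps the ordering used elsewhere in io_uring: reject out-of-range indices first (`if (i >= nr) break;`), then clamp the surviving index, so a mispredicted bounds check cannot feed an attacker-controlled index into a speculative array load.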
* [PATCH 3/6] io_uring/rsrc: use kvfree() for the imu cache
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Currently anything that requires kvmalloc_flex() for allocations will
not get re-cached, and hence the cache freeing path is correct in that
it always uses kfree() to free the allocated memory. But this seems a
bit fragile, as it's something that could get mixed up should that
situation change, so switch io_free_imu() and io_alloc_cache_free() to
use kvfree() as the destructor.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/alloc_cache.h | 2 +-
io_uring/rsrc.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 45fcd8b3b824..962b6e2d04cc 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -64,7 +64,7 @@ static inline void *io_cache_alloc(struct io_alloc_cache *cache, gfp_t gfp)
static inline void io_cache_free(struct io_alloc_cache *cache, void *obj)
{
if (!io_alloc_cache_put(cache, obj))
- kfree(obj);
+ kvfree(obj);
}
#endif
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index c042054c3b5f..650303626be6 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -168,7 +168,7 @@ bool io_rsrc_cache_init(struct io_ring_ctx *ctx)
void io_rsrc_cache_free(struct io_ring_ctx *ctx)
{
io_alloc_cache_free(&ctx->node_cache, kfree);
- io_alloc_cache_free(&ctx->imu_cache, kfree);
+ io_alloc_cache_free(&ctx->imu_cache, kvfree);
}
static void io_clear_table_tags(struct io_rsrc_data *data)
--
2.53.0
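A userspace toy model of the alloc-cache shape involved (hypothetical names, plain malloc/free standing in for the kernel allocators): freeing an object either parks it back in a small cache array or hands it to a destructor callback. The patch's point is that the destructor must be the most general free for anything that could reach the cache; in the kernel that is kvfree(), which handles both kmalloc()ed and vmalloc()ed memory.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define CACHE_MAX 2

struct alloc_cache {
	void *entries[CACHE_MAX];
	unsigned int nr;
};

static int freed;			/* counts destructor invocations */

static void counting_free(void *obj)	/* stands in for kvfree() */
{
	freed++;
	free(obj);
}

/* Try to park the object in the cache; fails when the cache is full. */
static bool cache_put(struct alloc_cache *cache, void *obj)
{
	if (cache->nr >= CACHE_MAX)
		return false;
	cache->entries[cache->nr++] = obj;
	return true;
}

/* Models io_cache_free(): recycle if possible, else destroy. */
static void cache_free(struct alloc_cache *cache, void *obj,
		       void (*dtor)(void *))
{
	if (!cache_put(cache, obj))
		dtor(obj);
}
```

With kvfree() as the destructor, the free path stays correct even if a kvmalloc()ed object is ever routed through the cache in the future.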
* [PATCH 4/6] io_uring/rw: add defensive hardening for negative kbuf lengths
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
No real bug here, just being a bit defensive in ensuring that whatever
gets passed into io_put_kbuf() is always >= 0 and not some random error
value.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/rw.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 20654deff84d..e729e0e7657e 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -580,7 +580,7 @@ void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
io_req_io_end(req);
if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
- req->cqe.flags |= io_put_kbuf(req, req->cqe.res, NULL);
+ req->cqe.flags |= io_put_kbuf(req, max(req->cqe.res, 0), NULL);
io_req_rw_cleanup(req, 0);
io_req_task_complete(tw_req, tw);
@@ -1379,7 +1379,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
list_del(&req->iopoll_node);
wq_list_add_tail(&req->comp_list, &ctx->submit_state.compl_reqs);
nr_events++;
- req->cqe.flags = io_put_kbuf(req, req->cqe.res, NULL);
+ req->cqe.flags = io_put_kbuf(req, max(req->cqe.res, 0), NULL);
if (!io_is_uring_cmd(req))
io_req_rw_cleanup(req, 0);
}
--
2.53.0
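The hazard being defended against: req->cqe.res is a signed completion result that can hold a negative errno (e.g. -EFAULT), while the buffer-put path wants a consumed-byte count, and a negative value implicitly converted to an unsigned length becomes a huge number. A minimal sketch of the clamp (kbuf_len is a hypothetical helper, not kernel API):

```c
#include <assert.h>

/* Clamp a signed completion result to a usable byte count,
 * mirroring the max(req->cqe.res, 0) from the patch. */
static unsigned int kbuf_len(int res)
{
	return res > 0 ? (unsigned int)res : 0;
}
```

As the review notes, the same effect could be had with an early return inside io_put_kbuf() itself; clamping at the call sites keeps the helper's signature unchanged.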
* [PATCH 5/6] io_uring/futex: ensure partial wakes are appropriately dequeued
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
If a FUTEX_WAITV vectored operation is only partially woken, we
should call __futex_wake_mark() on the queue to account for that.
If not, then a later wakeup will wake the same entry, rather than
the next one in line.
Fixes: 8f350194d5cfd ("io_uring: add support for vectored futex waits")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/futex.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/io_uring/futex.c b/io_uring/futex.c
index fd503c24b428..9cc1788ef4c6 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -159,8 +159,10 @@ static void io_futex_wakev_fn(struct wake_q_head *wake_q, struct futex_q *q)
struct io_kiocb *req = q->wake_data;
struct io_futexv_data *ifd = req->async_data;
- if (!io_futexv_claim(ifd))
+ if (!io_futexv_claim(ifd)) {
+ __futex_wake_mark(q);
return;
+ }
if (unlikely(!__futex_wake_mark(q)))
return;
--
2.53.0
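A toy model of the fix's semantics (hypothetical names, not the kernel types): one vectored wait has several queue entries, but only one wakeup may "claim" the request. Even when the claim is lost, the queue entry must still be marked woken/dequeued, otherwise a later wakeup on the same futex hits the same stale entry again instead of the next waiter in line.

```c
#include <assert.h>

struct waitv { int claimed; };			/* one FUTEX_WAITV request */
struct qent  { struct waitv *owner; int woken; }; /* one queued futex */

/* Only the first entry of a vector to fire gets to complete it. */
static int claim(struct waitv *v)
{
	if (v->claimed)
		return 0;	/* another entry of this vector already won */
	v->claimed = 1;
	return 1;
}

/* Models io_futex_wakev_fn() after the fix. */
static void wake_fn(struct qent *q)
{
	if (!claim(q->owner)) {
		q->woken = 1;	/* the fix: dequeue even on a lost claim */
		return;
	}
	q->woken = 1;		/* models __futex_wake_mark() */
	/* ... complete the io_uring request here ... */
}
```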
* [PATCH 6/6] io_uring/register: fix ring resizing with mixed/large SQEs/CQEs
From: Jens Axboe @ 2026-04-21 13:51 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, stable
Ring resizing only properly handles "normal" sized SQEs and CQEs if
there are pending entries around a resize. That normally should not be
the case, but the code is supposed to handle it regardless.

For the mixed SQE/CQE case, the current copying works fine, as the
entries are indexed the same way and each half is just copied
separately. But for rings set up with fixed large SQEs or CQEs
(SQE128/CQE32), the iteration and copy need to take the doubled entry
size into account.
Cc: stable@kernel.org
Fixes: 79cfe9e59c2a ("io_uring/register: add IORING_REGISTER_RESIZE_RINGS")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
io_uring/register.c | 36 ++++++++++++++++++++++++++++--------
1 file changed, 28 insertions(+), 8 deletions(-)
diff --git a/io_uring/register.c b/io_uring/register.c
index 24e593332d1a..dce5e2f9cf77 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -599,10 +599,20 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
if (tail - old_head > p->sq_entries)
goto overflow;
for (i = old_head; i < tail; i++) {
- unsigned src_head = i & (ctx->sq_entries - 1);
- unsigned dst_head = i & (p->sq_entries - 1);
-
- n.sq_sqes[dst_head] = o.sq_sqes[src_head];
+ unsigned index, dst_mask, src_mask;
+ size_t sq_size;
+
+ index = i;
+ sq_size = sizeof(struct io_uring_sqe);
+ src_mask = ctx->sq_entries - 1;
+ dst_mask = p->sq_entries - 1;
+ if (ctx->flags & IORING_SETUP_SQE128) {
+ index <<= 1;
+ sq_size <<= 1;
+ src_mask = (ctx->sq_entries << 1) - 1;
+ dst_mask = (p->sq_entries << 1) - 1;
+ }
+ memcpy(&n.sq_sqes[index & dst_mask], &o.sq_sqes[index & src_mask], sq_size);
}
WRITE_ONCE(n.rings->sq.head, old_head);
WRITE_ONCE(n.rings->sq.tail, tail);
@@ -619,10 +629,20 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
goto out;
}
for (i = old_head; i < tail; i++) {
- unsigned src_head = i & (ctx->cq_entries - 1);
- unsigned dst_head = i & (p->cq_entries - 1);
-
- n.rings->cqes[dst_head] = o.rings->cqes[src_head];
+ unsigned index, dst_mask, src_mask;
+ size_t cq_size;
+
+ index = i;
+ cq_size = sizeof(struct io_uring_cqe);
+ src_mask = ctx->cq_entries - 1;
+ dst_mask = p->cq_entries - 1;
+ if (ctx->flags & IORING_SETUP_CQE32) {
+ index <<= 1;
+ cq_size <<= 1;
+ src_mask = (ctx->cq_entries << 1) - 1;
+ dst_mask = (p->cq_entries << 1) - 1;
+ }
+ memcpy(&n.rings->cqes[index & dst_mask], &o.rings->cqes[index & src_mask], cq_size);
}
WRITE_ONCE(n.rings->cq.head, old_head);
WRITE_ONCE(n.rings->cq.tail, tail);
--
2.53.0
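A byte-level sketch of the copy loop (not the kernel code; one ring slot is one byte here to keep the model small): ring sizes are powers of two and indexing wraps with a mask. With SQE128/CQE32-style large entries, each logical entry occupies two slots, so both the index and the wrap masks are doubled, exactly as in the patch.

```c
#include <assert.h>
#include <string.h>

/* Copy pending entries [head, tail) from a power-of-two src ring to a
 * power-of-two dst ring.  When big is set, logical entry i starts at
 * slot 2*i and the masks cover 2 * entries slots. */
static void copy_pending(const unsigned char *src, unsigned int src_entries,
			 unsigned char *dst, unsigned int dst_entries,
			 unsigned int head, unsigned int tail, int big)
{
	for (unsigned int i = head; i < tail; i++) {
		unsigned int index = i;
		unsigned int src_mask = src_entries - 1;
		unsigned int dst_mask = dst_entries - 1;
		size_t esize = 1;

		if (big) {
			index <<= 1;
			esize <<= 1;
			src_mask = (src_entries << 1) - 1;
			dst_mask = (dst_entries << 1) - 1;
		}
		memcpy(&dst[index & dst_mask], &src[index & src_mask], esize);
	}
}
```

An entry that has wrapped in the smaller old ring lands at its correct, unwrapped position in the larger new ring, because the masking is done per-ring on the doubled index.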
* Re: [PATCH 1/6] io_uring: fix spurious fput in registered ring path
From: Gabriel Krisman Bertazi @ 2026-04-21 17:05 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring
Jens Axboe <axboe@kernel.dk> writes:
> Fix an issue with io_uring_ctx_get_file() not gating fput() on whether
> the file descriptor is a registered/direct one.
>
> Fixes: c5e9f6a96bf7 ("io_uring: unify getting ctx from passed in file descriptor")
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Oh.
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
--
Gabriel Krisman Bertazi
* Re: [PATCH 2/6] io_uring/rsrc: unify nospec indexing for direct descriptors
From: Gabriel Krisman Bertazi @ 2026-04-21 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring
Jens Axboe <axboe@kernel.dk> writes:
> For file updates, the node reset isn't capping the value via
> array_index_nospec() like the other paths do. Ensure it's all sane and
> have the update path do the proper capping as well.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
> ---
> io_uring/rsrc.c | 3 +++
> io_uring/rsrc.h | 9 +++++++--
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index fd36e0e319a2..c042054c3b5f 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -238,6 +238,9 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
> continue;
>
> i = up->offset + done;
> + if (i >= ctx->file_table.data.nr)
> + break;
> + i = array_index_nospec(i, ctx->file_table.data.nr);
> if (io_reset_rsrc_node(ctx, &ctx->file_table.data, i))
> io_file_bitmap_clear(&ctx->file_table, i);
>
> diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
> index cff0f8834c35..44e3386f7c1c 100644
> --- a/io_uring/rsrc.h
> +++ b/io_uring/rsrc.h
> @@ -109,10 +109,15 @@ static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node
> }
>
> static inline bool io_reset_rsrc_node(struct io_ring_ctx *ctx,
> - struct io_rsrc_data *data, int index)
> + struct io_rsrc_data *data,
> + unsigned int index)
> {
> - struct io_rsrc_node *node = data->nodes[index];
> + struct io_rsrc_node *node;
>
> + if (index >= data->nr)
> + return false;
> + index = array_index_nospec(index, data->nr);
> + node = data->nodes[index];
> if (!node)
> return false;
> io_put_rsrc_node(ctx, node);
--
Gabriel Krisman Bertazi
* Re: [PATCH 4/6] io_uring/rw: add defensive hardening for negative kbuf lengths
From: Gabriel Krisman Bertazi @ 2026-04-21 17:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring
Jens Axboe <axboe@kernel.dk> writes:
> No real bug here, just being a bit defensive in ensuring that whatever
> gets passed into io_put_kbuf() is always >= 0 and not some random error
> value.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
> io_uring/rw.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index 20654deff84d..e729e0e7657e 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -580,7 +580,7 @@ void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
> io_req_io_end(req);
>
> if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
> - req->cqe.flags |= io_put_kbuf(req, req->cqe.res, NULL);
> + req->cqe.flags |= io_put_kbuf(req, max(req->cqe.res, 0), NULL);
>
> io_req_rw_cleanup(req, 0);
> io_req_task_complete(tw_req, tw);
> @@ -1379,7 +1379,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
> list_del(&req->iopoll_node);
> wq_list_add_tail(&req->comp_list, &ctx->submit_state.compl_reqs);
> nr_events++;
> - req->cqe.flags = io_put_kbuf(req, req->cqe.res, NULL);
> + req->cqe.flags = io_put_kbuf(req, max(req->cqe.res, 0), NULL);
> if (!io_is_uring_cmd(req))
> io_req_rw_cleanup(req, 0);
> }
This would be much more readable rolled into an early return inside
io_put_kbuf(), but clearly:
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
--
Gabriel Krisman Bertazi
* Re: [PATCH 5/6] io_uring/futex: ensure partial wakes are appropriately dequeued
From: Gabriel Krisman Bertazi @ 2026-04-21 17:11 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring
Jens Axboe <axboe@kernel.dk> writes:
> If a FUTEX_WAITV vectored operation is only partially woken, we
> should call __futex_wake_mark() on the queue to account for that.
> If not, then a later wakeup will wake the same entry, rather than
> the next one in line.
>
> Fixes: 8f350194d5cfd ("io_uring: add support for vectored futex waits")
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
> ---
> io_uring/futex.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/io_uring/futex.c b/io_uring/futex.c
> index fd503c24b428..9cc1788ef4c6 100644
> --- a/io_uring/futex.c
> +++ b/io_uring/futex.c
> @@ -159,8 +159,10 @@ static void io_futex_wakev_fn(struct wake_q_head *wake_q, struct futex_q *q)
> struct io_kiocb *req = q->wake_data;
> struct io_futexv_data *ifd = req->async_data;
>
> - if (!io_futexv_claim(ifd))
> + if (!io_futexv_claim(ifd)) {
> + __futex_wake_mark(q);
> return;
> + }
> if (unlikely(!__futex_wake_mark(q)))
> return;
--
Gabriel Krisman Bertazi
* Re: [PATCH 6/6] io_uring/register: fix ring resizing with mixed/large SQEs/CQEs
From: Gabriel Krisman Bertazi @ 2026-04-21 17:12 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, stable
Jens Axboe <axboe@kernel.dk> writes:
> Ring resizing only properly handles "normal" sized SQEs and CQEs if
> there are pending entries around a resize. That normally should not be
> the case, but the code is supposed to handle it regardless.
>
> For the mixed SQE/CQE case, the current copying works fine, as the
> entries are indexed the same way and each half is just copied
> separately. But for rings set up with fixed large SQEs or CQEs
> (SQE128/CQE32), the iteration and copy need to take the doubled entry
> size into account.
>
> Cc: stable@kernel.org
> Fixes: 79cfe9e59c2a ("io_uring/register: add IORING_REGISTER_RESIZE_RINGS")
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
> io_uring/register.c | 36 ++++++++++++++++++++++++++++--------
> 1 file changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/io_uring/register.c b/io_uring/register.c
> index 24e593332d1a..dce5e2f9cf77 100644
> --- a/io_uring/register.c
> +++ b/io_uring/register.c
> @@ -599,10 +599,20 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
> if (tail - old_head > p->sq_entries)
> goto overflow;
> for (i = old_head; i < tail; i++) {
> - unsigned src_head = i & (ctx->sq_entries - 1);
> - unsigned dst_head = i & (p->sq_entries - 1);
> -
> - n.sq_sqes[dst_head] = o.sq_sqes[src_head];
> + unsigned index, dst_mask, src_mask;
> + size_t sq_size;
> +
> + index = i;
> + sq_size = sizeof(struct io_uring_sqe);
> + src_mask = ctx->sq_entries - 1;
> + dst_mask = p->sq_entries - 1;
> + if (ctx->flags & IORING_SETUP_SQE128) {
> + index <<= 1;
> + sq_size <<= 1;
> + src_mask = (ctx->sq_entries << 1) - 1;
> + dst_mask = (p->sq_entries << 1) - 1;
> + }
> + memcpy(&n.sq_sqes[index & dst_mask], &o.sq_sqes[index & src_mask], sq_size);
> }
> WRITE_ONCE(n.rings->sq.head, old_head);
> WRITE_ONCE(n.rings->sq.tail, tail);
> @@ -619,10 +629,20 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
> goto out;
> }
> for (i = old_head; i < tail; i++) {
> - unsigned src_head = i & (ctx->cq_entries - 1);
> - unsigned dst_head = i & (p->cq_entries - 1);
> -
> - n.rings->cqes[dst_head] = o.rings->cqes[src_head];
> + unsigned index, dst_mask, src_mask;
> + size_t cq_size;
> +
> + index = i;
> + cq_size = sizeof(struct io_uring_cqe);
> + src_mask = ctx->cq_entries - 1;
> + dst_mask = p->cq_entries - 1;
> + if (ctx->flags & IORING_SETUP_CQE32) {
> + index <<= 1;
> + cq_size <<= 1;
> + src_mask = (ctx->cq_entries << 1) - 1;
> + dst_mask = (p->cq_entries << 1) - 1;
> + }
> + memcpy(&n.rings->cqes[index & dst_mask], &o.rings->cqes[index & src_mask], cq_size);
> }
> WRITE_ONCE(n.rings->cq.head, old_head);
> WRITE_ONCE(n.rings->cq.tail, tail);
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
--
Gabriel Krisman Bertazi