* [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
@ 2025-09-03 3:26 Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h Caleb Sander Mateos
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:26 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, linux-kernel, Caleb Sander Mateos
As far as I can tell, setting IORING_SETUP_SINGLE_ISSUER when creating
an io_uring doesn't actually enable any additional optimizations (aside
from being a requirement for IORING_SETUP_DEFER_TASKRUN). This series
leverages IORING_SETUP_SINGLE_ISSUER's guarantee that only one task
submits SQEs to skip taking the uring_lock mutex in the submission and
task work paths.
First, we need to close a hole in the IORING_SETUP_SINGLE_ISSUER checks
where IORING_REGISTER_CLONE_BUFFERS only checks whether the thread is
allowed to access one of the two io_urings. It assumes the uring_lock
will prevent concurrent access to the other io_uring, but this will no
longer be the case after the optimization to skip taking uring_lock.
We also need to remove the unused filetable.h #include from io_uring.h
to avoid an #include cycle.
Caleb Sander Mateos (4):
io_uring: don't include filetable.h in io_uring.h
io_uring/rsrc: respect submitter_task in io_register_clone_buffers()
io_uring: factor out uring_lock helpers
io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
io_uring/cancel.c | 1 +
io_uring/fdinfo.c | 2 +-
io_uring/filetable.c | 3 ++-
io_uring/io_uring.c | 58 +++++++++++++++++++++++++++-----------------
io_uring/io_uring.h | 43 ++++++++++++++++++++++++++------
io_uring/kbuf.c | 6 ++---
io_uring/net.c | 1 +
io_uring/notif.c | 5 ++--
io_uring/notif.h | 3 ++-
io_uring/openclose.c | 1 +
io_uring/poll.c | 2 +-
io_uring/register.c | 1 +
io_uring/rsrc.c | 10 +++++++-
io_uring/rsrc.h | 3 ++-
io_uring/rw.c | 3 ++-
io_uring/splice.c | 1 +
io_uring/waitid.c | 2 +-
17 files changed, 102 insertions(+), 43 deletions(-)
--
2.45.2
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
@ 2025-09-03 3:26 ` Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers() Caleb Sander Mateos
` (3 subsequent siblings)
4 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:26 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, linux-kernel, Caleb Sander Mateos
io_uring/io_uring.h doesn't use anything declared in
io_uring/filetable.h, so drop the unnecessary #include. Add filetable.h
includes in .c files previously relying on the transitive include from
io_uring.h.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/cancel.c | 1 +
io_uring/fdinfo.c | 2 +-
io_uring/io_uring.c | 1 +
io_uring/io_uring.h | 1 -
io_uring/net.c | 1 +
io_uring/openclose.c | 1 +
io_uring/register.c | 1 +
io_uring/rsrc.c | 1 +
io_uring/rw.c | 1 +
io_uring/splice.c | 1 +
10 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/io_uring/cancel.c b/io_uring/cancel.c
index 6d57602304df..64b51e82baa2 100644
--- a/io_uring/cancel.c
+++ b/io_uring/cancel.c
@@ -9,10 +9,11 @@
#include <linux/nospec.h>
#include <linux/io_uring.h>
#include <uapi/linux/io_uring.h>
+#include "filetable.h"
#include "io_uring.h"
#include "tctx.h"
#include "poll.h"
#include "timeout.h"
#include "waitid.h"
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index 5c7339838769..ff3364531c77 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -7,11 +7,11 @@
#include <linux/seq_file.h>
#include <linux/io_uring.h>
#include <uapi/linux/io_uring.h>
-#include "io_uring.h"
+#include "filetable.h"
#include "sqpoll.h"
#include "fdinfo.h"
#include "cancel.h"
#include "rsrc.h"
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 545a7d5eefec..9c1190b19adf 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -77,10 +77,11 @@
#include <uapi/linux/io_uring.h>
#include "io-wq.h"
+#include "filetable.h"
#include "io_uring.h"
#include "opdef.h"
#include "refs.h"
#include "tctx.h"
#include "register.h"
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index fa8a66b34d4e..d62b7d9fafed 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -9,11 +9,10 @@
#include <linux/io_uring_types.h>
#include <uapi/linux/eventpoll.h>
#include "alloc_cache.h"
#include "io-wq.h"
#include "slist.h"
-#include "filetable.h"
#include "opdef.h"
#ifndef CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>
#endif
diff --git a/io_uring/net.c b/io_uring/net.c
index d2ca49ceb79d..cf4bf4a2264b 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -8,10 +8,11 @@
#include <net/compat.h>
#include <linux/io_uring.h>
#include <uapi/linux/io_uring.h>
+#include "filetable.h"
#include "io_uring.h"
#include "kbuf.h"
#include "alloc_cache.h"
#include "net.h"
#include "notif.h"
diff --git a/io_uring/openclose.c b/io_uring/openclose.c
index d70700e5cef8..bfeb91b31bba 100644
--- a/io_uring/openclose.c
+++ b/io_uring/openclose.c
@@ -12,10 +12,11 @@
#include <uapi/linux/io_uring.h>
#include "../fs/internal.h"
+#include "filetable.h"
#include "io_uring.h"
#include "rsrc.h"
#include "openclose.h"
struct io_open {
diff --git a/io_uring/register.c b/io_uring/register.c
index aa5f56ad8358..5e493917a1a8 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -16,10 +16,11 @@
#include <linux/nospec.h>
#include <linux/compat.h>
#include <linux/io_uring.h>
#include <linux/io_uring_types.h>
+#include "filetable.h"
#include "io_uring.h"
#include "opdef.h"
#include "tctx.h"
#include "rsrc.h"
#include "sqpoll.h"
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index f75f5e43fa4a..2d15b8785a95 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -11,10 +11,11 @@
#include <linux/io_uring.h>
#include <linux/io_uring/cmd.h>
#include <uapi/linux/io_uring.h>
+#include "filetable.h"
#include "io_uring.h"
#include "openclose.h"
#include "rsrc.h"
#include "memmap.h"
#include "register.h"
diff --git a/io_uring/rw.c b/io_uring/rw.c
index dcde5bb7421a..ab6b4afccec3 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -13,10 +13,11 @@
#include <linux/io_uring/cmd.h>
#include <linux/indirect_call_wrapper.h>
#include <uapi/linux/io_uring.h>
+#include "filetable.h"
#include "io_uring.h"
#include "opdef.h"
#include "kbuf.h"
#include "alloc_cache.h"
#include "rsrc.h"
diff --git a/io_uring/splice.c b/io_uring/splice.c
index 35ce4e60b495..e81ebbb91925 100644
--- a/io_uring/splice.c
+++ b/io_uring/splice.c
@@ -9,10 +9,11 @@
#include <linux/io_uring.h>
#include <linux/splice.h>
#include <uapi/linux/io_uring.h>
+#include "filetable.h"
#include "io_uring.h"
#include "splice.h"
struct io_splice {
struct file *file_out;
--
2.45.2
* [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers()
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h Caleb Sander Mateos
@ 2025-09-03 3:26 ` Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 3/4] io_uring: factor out uring_lock helpers Caleb Sander Mateos
` (2 subsequent siblings)
4 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:26 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, linux-kernel, Caleb Sander Mateos
An io_ring_ctx enabled with IORING_SETUP_SINGLE_ISSUER permits only a
single task to submit to the ctx. Although the documentation only
mentions this restriction applying to io_uring_enter() syscalls,
commit d7cce96c449e ("io_uring: limit registration w/ SINGLE_ISSUER")
extends it to io_uring_register(). Ensuring only one task interacts
with the io_ring_ctx will be important to allow this task to avoid
taking the uring_lock.
There is, however, one gap in these checks: io_register_clone_buffers()
may take the uring_lock on a second (source) io_ring_ctx, but
__io_uring_register() only checks the current thread against the
*destination* io_ring_ctx's submitter_task. Fail the
IORING_REGISTER_CLONE_BUFFERS with -EEXIST if the source io_ring_ctx has
a registered submitter_task other than the current task.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/rsrc.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 2d15b8785a95..1e5b7833076a 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1298,14 +1298,21 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
src_ctx = file->private_data;
if (src_ctx != ctx) {
mutex_unlock(&ctx->uring_lock);
lock_two_rings(ctx, src_ctx);
+
+ if (src_ctx->submitter_task &&
+ src_ctx->submitter_task != current) {
+ ret = -EEXIST;
+ goto out;
+ }
}
ret = io_clone_buffers(ctx, src_ctx, &buf);
+out:
if (src_ctx != ctx)
mutex_unlock(&src_ctx->uring_lock);
fput(file);
return ret;
--
2.45.2
* [PATCH 3/4] io_uring: factor out uring_lock helpers
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers() Caleb Sander Mateos
@ 2025-09-03 3:26 ` Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
2025-09-03 21:55 ` [syzbot ci] " syzbot ci
4 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:26 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, linux-kernel, Caleb Sander Mateos
A subsequent commit will skip acquiring the io_ring_ctx uring_lock in
io_uring_enter() and io_handle_tw_list() for IORING_SETUP_SINGLE_ISSUER.
Prepare for this change by factoring out the uring_lock accesses under
these functions into helper functions:
- io_ring_ctx_lock() for mutex_lock(&ctx->uring_lock)
- io_ring_ctx_unlock() for mutex_unlock(&ctx->uring_lock)
- io_ring_ctx_assert_locked() for lockdep_assert_held(&ctx->uring_lock)
For now, the helpers unconditionally call the mutex functions. But a
subsequent commit will condition them on !IORING_SETUP_SINGLE_ISSUER.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/filetable.c | 3 ++-
io_uring/io_uring.c | 51 ++++++++++++++++++++++++++------------------
io_uring/io_uring.h | 28 ++++++++++++++++++------
io_uring/kbuf.c | 6 +++---
io_uring/notif.c | 5 +++--
io_uring/notif.h | 3 ++-
io_uring/poll.c | 2 +-
io_uring/rsrc.c | 2 +-
io_uring/rsrc.h | 3 ++-
io_uring/rw.c | 2 +-
io_uring/waitid.c | 2 +-
11 files changed, 67 insertions(+), 40 deletions(-)
diff --git a/io_uring/filetable.c b/io_uring/filetable.c
index a21660e3145a..aae283e77856 100644
--- a/io_uring/filetable.c
+++ b/io_uring/filetable.c
@@ -55,14 +55,15 @@ void io_free_file_tables(struct io_ring_ctx *ctx, struct io_file_table *table)
table->bitmap = NULL;
}
static int io_install_fixed_file(struct io_ring_ctx *ctx, struct file *file,
u32 slot_index)
- __must_hold(&req->ctx->uring_lock)
{
struct io_rsrc_node *node;
+ io_ring_ctx_assert_locked(ctx);
+
if (io_is_uring_fops(file))
return -EBADF;
if (!ctx->file_table.data.nr)
return -ENXIO;
if (slot_index >= ctx->file_table.data.nr)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9c1190b19adf..7f19b6da5d3d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -554,11 +554,11 @@ static unsigned io_linked_nr(struct io_kiocb *req)
static __cold noinline void io_queue_deferred(struct io_ring_ctx *ctx)
{
bool drain_seen = false, first = true;
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
__io_req_caches_free(ctx);
while (!list_empty(&ctx->defer_list)) {
struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
struct io_defer_entry, list);
@@ -925,11 +925,11 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
* Must be called from inline task_work so we now a flush will happen later,
* and obviously with ctx->uring_lock held (tw always has that).
*/
void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
{
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
lockdep_assert(ctx->lockless_cq);
if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
struct io_cqe cqe = io_init_cqe(user_data, res, cflags);
@@ -954,11 +954,11 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
*/
if (!wq_list_empty(&ctx->submit_state.compl_reqs))
__io_submit_flush_completions(ctx);
lockdep_assert(!io_wq_current_is_worker());
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
if (!ctx->lockless_cq) {
spin_lock(&ctx->completion_lock);
posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
spin_unlock(&ctx->completion_lock);
@@ -978,11 +978,11 @@ bool io_req_post_cqe32(struct io_kiocb *req, struct io_uring_cqe cqe[2])
{
struct io_ring_ctx *ctx = req->ctx;
bool posted;
lockdep_assert(!io_wq_current_is_worker());
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
cqe[0].user_data = req->cqe.user_data;
if (!ctx->lockless_cq) {
spin_lock(&ctx->completion_lock);
posted = io_fill_cqe_aux32(ctx, cqe);
@@ -1032,15 +1032,14 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
*/
req_ref_put(req);
}
void io_req_defer_failed(struct io_kiocb *req, s32 res)
- __must_hold(&ctx->uring_lock)
{
const struct io_cold_def *def = &io_cold_defs[req->opcode];
- lockdep_assert_held(&req->ctx->uring_lock);
+ io_ring_ctx_assert_locked(req->ctx);
req_set_fail(req);
io_req_set_res(req, res, io_put_kbuf(req, res, NULL));
if (def->fail)
def->fail(req);
@@ -1052,16 +1051,17 @@ void io_req_defer_failed(struct io_kiocb *req, s32 res)
* handlers and io_issue_sqe() are done with it, e.g. inline completion path.
* Because of that, io_alloc_req() should be called only under ->uring_lock
* and with extra caution to not get a request that is still worked on.
*/
__cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
- __must_hold(&ctx->uring_lock)
{
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO;
void *reqs[IO_REQ_ALLOC_BATCH];
int ret;
+ io_ring_ctx_assert_locked(ctx);
+
ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);
/*
* Bulk alloc is all-or-nothing. If we fail to get a batch,
* retry single alloc to be on the safe side.
@@ -1126,11 +1126,11 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, io_tw_token_t tw)
return;
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
io_submit_flush_completions(ctx);
- mutex_unlock(&ctx->uring_lock);
+ io_ring_ctx_unlock(ctx);
percpu_ref_put(&ctx->refs);
}
/*
* Run queued task_work, returning the number of entries processed in *count.
@@ -1150,11 +1150,11 @@ struct llist_node *io_handle_tw_list(struct llist_node *node,
io_task_work.node);
if (req->ctx != ctx) {
ctx_flush_and_put(ctx, ts);
ctx = req->ctx;
- mutex_lock(&ctx->uring_lock);
+ io_ring_ctx_lock(ctx);
percpu_ref_get(&ctx->refs);
}
INDIRECT_CALL_2(req->io_task_work.func,
io_poll_task_func, io_req_rw_complete,
req, ts);
@@ -1502,12 +1502,13 @@ static inline void io_req_put_rsrc_nodes(struct io_kiocb *req)
io_put_rsrc_node(req->ctx, req->buf_node);
}
static void io_free_batch_list(struct io_ring_ctx *ctx,
struct io_wq_work_node *node)
- __must_hold(&ctx->uring_lock)
{
+ io_ring_ctx_assert_locked(ctx);
+
do {
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
@@ -1543,15 +1544,16 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
io_req_add_to_cache(req, ctx);
} while (node);
}
void __io_submit_flush_completions(struct io_ring_ctx *ctx)
- __must_hold(&ctx->uring_lock)
{
struct io_submit_state *state = &ctx->submit_state;
struct io_wq_work_node *node;
+ io_ring_ctx_assert_locked(ctx);
+
__io_cq_lock(ctx);
__wq_list_for_each(node, &state->compl_reqs) {
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
@@ -1767,16 +1769,17 @@ io_req_flags_t io_file_get_flags(struct file *file)
res |= REQ_F_SUPPORT_NOWAIT;
return res;
}
static __cold void io_drain_req(struct io_kiocb *req)
- __must_hold(&ctx->uring_lock)
{
struct io_ring_ctx *ctx = req->ctx;
bool drain = req->flags & IOSQE_IO_DRAIN;
struct io_defer_entry *de;
+ io_ring_ctx_assert_locked(ctx);
+
de = kmalloc(sizeof(*de), GFP_KERNEL_ACCOUNT);
if (!de) {
io_req_defer_failed(req, -ENOMEM);
return;
}
@@ -2043,12 +2046,13 @@ static int io_req_sqe_copy(struct io_kiocb *req, unsigned int issue_flags)
def->sqe_copy(req);
return 0;
}
static void io_queue_async(struct io_kiocb *req, unsigned int issue_flags, int ret)
- __must_hold(&req->ctx->uring_lock)
{
+ io_ring_ctx_assert_locked(req->ctx);
+
if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
fail:
io_req_defer_failed(req, ret);
return;
}
@@ -2068,16 +2072,17 @@ static void io_queue_async(struct io_kiocb *req, unsigned int issue_flags, int r
break;
}
}
static inline void io_queue_sqe(struct io_kiocb *req, unsigned int extra_flags)
- __must_hold(&req->ctx->uring_lock)
{
unsigned int issue_flags = IO_URING_F_NONBLOCK |
IO_URING_F_COMPLETE_DEFER | extra_flags;
int ret;
+ io_ring_ctx_assert_locked(req->ctx);
+
ret = io_issue_sqe(req, issue_flags);
/*
* We async punt it if the file wasn't marked NOWAIT, or if the file
* doesn't support non-blocking read/write attempts
@@ -2085,12 +2090,13 @@ static inline void io_queue_sqe(struct io_kiocb *req, unsigned int extra_flags)
if (unlikely(ret))
io_queue_async(req, issue_flags, ret);
}
static void io_queue_sqe_fallback(struct io_kiocb *req)
- __must_hold(&req->ctx->uring_lock)
{
+ io_ring_ctx_assert_locked(req->ctx);
+
if (unlikely(req->flags & REQ_F_FAIL)) {
/*
* We don't submit, fail them all, for that replace hardlinks
* with normal links. Extra REQ_F_LINK is tolerated.
*/
@@ -2155,17 +2161,18 @@ static __cold int io_init_fail_req(struct io_kiocb *req, int err)
return err;
}
static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
const struct io_uring_sqe *sqe)
- __must_hold(&ctx->uring_lock)
{
const struct io_issue_def *def;
unsigned int sqe_flags;
int personality;
u8 opcode;
+ io_ring_ctx_assert_locked(ctx);
+
req->ctx = ctx;
req->opcode = opcode = READ_ONCE(sqe->opcode);
/* same numerical values with corresponding REQ_F_*, safe to copy */
sqe_flags = READ_ONCE(sqe->flags);
req->flags = (__force io_req_flags_t) sqe_flags;
@@ -2290,15 +2297,16 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
return 0;
}
static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
const struct io_uring_sqe *sqe)
- __must_hold(&ctx->uring_lock)
{
struct io_submit_link *link = &ctx->submit_state.link;
int ret;
+ io_ring_ctx_assert_locked(ctx);
+
ret = io_init_req(ctx, req, sqe);
if (unlikely(ret))
return io_submit_fail_init(sqe, req, ret);
trace_io_uring_submit_req(req);
@@ -2419,16 +2427,17 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
*sqe = &ctx->sq_sqes[head];
return true;
}
int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
- __must_hold(&ctx->uring_lock)
{
unsigned int entries = io_sqring_entries(ctx);
unsigned int left;
int ret;
+ io_ring_ctx_assert_locked(ctx);
+
if (unlikely(!entries))
return 0;
/* make sure SQ entry isn't read before tail */
ret = left = min(nr, entries);
io_get_task_refs(left);
@@ -3518,14 +3527,14 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
} else if (to_submit) {
ret = io_uring_add_tctx_node(ctx);
if (unlikely(ret))
goto out;
- mutex_lock(&ctx->uring_lock);
+ io_ring_ctx_lock(ctx);
ret = io_submit_sqes(ctx, to_submit);
if (ret != to_submit) {
- mutex_unlock(&ctx->uring_lock);
+ io_ring_ctx_unlock(ctx);
goto out;
}
if (flags & IORING_ENTER_GETEVENTS) {
if (ctx->syscall_iopoll)
goto iopoll_locked;
@@ -3534,11 +3543,11 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
* it should handle ownership problems if any.
*/
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
(void)io_run_local_work_locked(ctx, min_complete);
}
- mutex_unlock(&ctx->uring_lock);
+ io_ring_ctx_unlock(ctx);
}
if (flags & IORING_ENTER_GETEVENTS) {
int ret2;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index d62b7d9fafed..a0580a1bf6b5 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -119,20 +119,35 @@ bool __io_alloc_req_refill(struct io_ring_ctx *ctx);
bool io_match_task_safe(struct io_kiocb *head, struct io_uring_task *tctx,
bool cancel_all);
void io_activate_pollwq(struct io_ring_ctx *ctx);
+static inline void io_ring_ctx_lock(struct io_ring_ctx *ctx)
+{
+ mutex_lock(&ctx->uring_lock);
+}
+
+static inline void io_ring_ctx_unlock(struct io_ring_ctx *ctx)
+{
+ mutex_unlock(&ctx->uring_lock);
+}
+
+static inline void io_ring_ctx_assert_locked(const struct io_ring_ctx *ctx)
+{
+ lockdep_assert_held(&ctx->uring_lock);
+}
+
static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx)
{
#if defined(CONFIG_PROVE_LOCKING)
lockdep_assert(in_task());
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
if (ctx->flags & IORING_SETUP_IOPOLL) {
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
} else if (!ctx->task_complete) {
lockdep_assert_held(&ctx->completion_lock);
} else if (ctx->submitter_task) {
/*
* ->submitter_task may be NULL and we can still post a CQE,
@@ -300,11 +315,11 @@ static inline void io_put_file(struct io_kiocb *req)
}
static inline void io_ring_submit_unlock(struct io_ring_ctx *ctx,
unsigned issue_flags)
{
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
if (unlikely(issue_flags & IO_URING_F_UNLOCKED))
mutex_unlock(&ctx->uring_lock);
}
static inline void io_ring_submit_lock(struct io_ring_ctx *ctx,
@@ -316,11 +331,11 @@ static inline void io_ring_submit_lock(struct io_ring_ctx *ctx,
* The only exception is when we've detached the request and issue it
* from an async worker thread, grab the lock for that case.
*/
if (unlikely(issue_flags & IO_URING_F_UNLOCKED))
mutex_lock(&ctx->uring_lock);
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
}
static inline void io_commit_cqring(struct io_ring_ctx *ctx)
{
/* order cqe stores with ring update */
@@ -428,24 +443,23 @@ static inline bool io_task_work_pending(struct io_ring_ctx *ctx)
return task_work_pending(current) || io_local_work_pending(ctx);
}
static inline void io_tw_lock(struct io_ring_ctx *ctx, io_tw_token_t tw)
{
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
}
/*
* Don't complete immediately but use deferred completion infrastructure.
* Protected by ->uring_lock and can only be used either with
* IO_URING_F_COMPLETE_DEFER or inside a tw handler holding the mutex.
*/
static inline void io_req_complete_defer(struct io_kiocb *req)
- __must_hold(&req->ctx->uring_lock)
{
struct io_submit_state *state = &req->ctx->submit_state;
- lockdep_assert_held(&req->ctx->uring_lock);
+ io_ring_ctx_assert_locked(req->ctx);
wq_list_add_tail(&req->comp_list, &state->compl_reqs);
}
static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 3e9aab21af9d..ea6f3588d875 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -68,11 +68,11 @@ bool io_kbuf_commit(struct io_kiocb *req,
}
static inline struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx *ctx,
unsigned int bgid)
{
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
return xa_load(&ctx->io_bl_xa, bgid);
}
static int io_buffer_add_list(struct io_ring_ctx *ctx,
@@ -337,11 +337,11 @@ int io_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
{
struct io_ring_ctx *ctx = req->ctx;
struct io_buffer_list *bl;
int ret;
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
bl = io_buffer_get_list(ctx, arg->buf_group);
if (unlikely(!bl))
return -ENOENT;
@@ -393,11 +393,11 @@ static int io_remove_buffers_legacy(struct io_ring_ctx *ctx,
{
unsigned long i = 0;
struct io_buffer *nxt;
/* protects io_buffers_cache */
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
WARN_ON_ONCE(bl->flags & IOBL_BUF_RING);
for (i = 0; i < nbufs && !list_empty(&bl->buf_list); i++) {
nxt = list_first_entry(&bl->buf_list, struct io_buffer, list);
list_del(&nxt->list);
diff --git a/io_uring/notif.c b/io_uring/notif.c
index 8c92e9cde2c6..9dd248fcb213 100644
--- a/io_uring/notif.c
+++ b/io_uring/notif.c
@@ -14,11 +14,11 @@ static const struct ubuf_info_ops io_ubuf_ops;
static void io_notif_tw_complete(struct io_kiocb *notif, io_tw_token_t tw)
{
struct io_notif_data *nd = io_notif_to_data(notif);
struct io_ring_ctx *ctx = notif->ctx;
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
do {
notif = cmd_to_io_kiocb(nd);
if (WARN_ON_ONCE(ctx != notif->ctx))
@@ -108,15 +108,16 @@ static const struct ubuf_info_ops io_ubuf_ops = {
.complete = io_tx_ubuf_complete,
.link_skb = io_link_skb,
};
struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx)
- __must_hold(&ctx->uring_lock)
{
struct io_kiocb *notif;
struct io_notif_data *nd;
+ io_ring_ctx_assert_locked(ctx);
+
if (unlikely(!io_alloc_req(ctx, &notif)))
return NULL;
notif->ctx = ctx;
notif->opcode = IORING_OP_NOP;
notif->flags = 0;
diff --git a/io_uring/notif.h b/io_uring/notif.h
index f3589cfef4a9..c33c9a1179c9 100644
--- a/io_uring/notif.h
+++ b/io_uring/notif.h
@@ -31,14 +31,15 @@ static inline struct io_notif_data *io_notif_to_data(struct io_kiocb *notif)
{
return io_kiocb_to_cmd(notif, struct io_notif_data);
}
static inline void io_notif_flush(struct io_kiocb *notif)
- __must_hold(&notif->ctx->uring_lock)
{
struct io_notif_data *nd = io_notif_to_data(notif);
+ io_ring_ctx_assert_locked(notif->ctx);
+
io_tx_ubuf_complete(NULL, &nd->uarg, true);
}
static inline int io_notif_account_mem(struct io_kiocb *notif, unsigned len)
{
diff --git a/io_uring/poll.c b/io_uring/poll.c
index ea75c5cd81a0..ba71403c8fd8 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -121,11 +121,11 @@ static struct io_poll *io_poll_get_single(struct io_kiocb *req)
static void io_poll_req_insert(struct io_kiocb *req)
{
struct io_hash_table *table = &req->ctx->cancel_table;
u32 index = hash_long(req->cqe.user_data, table->hash_bits);
- lockdep_assert_held(&req->ctx->uring_lock);
+ io_ring_ctx_assert_locked(req->ctx);
hlist_add_head(&req->hash_node, &table->hbs[index].list);
}
static void io_init_poll_iocb(struct io_poll *poll, __poll_t events)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 1e5b7833076a..1c1753de7340 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -347,11 +347,11 @@ static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
struct io_uring_rsrc_update2 *up,
unsigned nr_args)
{
__u32 tmp;
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
if (check_add_overflow(up->offset, nr_args, &tmp))
return -EOVERFLOW;
switch (type) {
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index a3ca6ba66596..d537a3b895d6 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -2,10 +2,11 @@
#ifndef IOU_RSRC_H
#define IOU_RSRC_H
#include <linux/io_uring_types.h>
#include <linux/lockdep.h>
+#include "io_uring.h"
#define IO_VEC_CACHE_SOFT_CAP 256
enum {
IORING_RSRC_FILE = 0,
@@ -97,11 +98,11 @@ static inline struct io_rsrc_node *io_rsrc_node_lookup(struct io_rsrc_data *data
return NULL;
}
static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
{
- lockdep_assert_held(&ctx->uring_lock);
+ io_ring_ctx_assert_locked(ctx);
if (!--node->refs)
io_free_rsrc_node(ctx, node);
}
static inline bool io_reset_rsrc_node(struct io_ring_ctx *ctx,
diff --git a/io_uring/rw.c b/io_uring/rw.c
index ab6b4afccec3..f00e02a02dc7 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -461,11 +461,11 @@ int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
return 0;
}
void io_readv_writev_cleanup(struct io_kiocb *req)
{
- lockdep_assert_held(&req->ctx->uring_lock);
+ io_ring_ctx_assert_locked(req->ctx);
io_rw_recycle(req, 0);
}
static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
{
diff --git a/io_uring/waitid.c b/io_uring/waitid.c
index 26c118f3918d..f7a5054d4d81 100644
--- a/io_uring/waitid.c
+++ b/io_uring/waitid.c
@@ -114,11 +114,11 @@ static void io_waitid_complete(struct io_kiocb *req, int ret)
struct io_waitid *iw = io_kiocb_to_cmd(req, struct io_waitid);
/* anyone completing better be holding a reference */
WARN_ON_ONCE(!(atomic_read(&iw->refs) & IO_WAITID_REF_MASK));
- lockdep_assert_held(&req->ctx->uring_lock);
+ io_ring_ctx_assert_locked(req->ctx);
hlist_del_init(&req->hash_node);
ret = io_waitid_finish(req, ret);
if (ret < 0)
--
2.45.2
* [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
` (2 preceding siblings ...)
2025-09-03 3:26 ` [PATCH 3/4] io_uring: factor out uring_lock helpers Caleb Sander Mateos
@ 2025-09-03 3:26 ` Caleb Sander Mateos
2025-09-03 21:55 ` [syzbot ci] " syzbot ci
4 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:26 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring, linux-kernel, Caleb Sander Mateos
io_ring_ctx's mutex uring_lock can be quite expensive in high-IOPS
workloads. Even when only one thread pinned to a single CPU is accessing
the io_ring_ctx, the atomic CAS required to lock and unlock the mutex is
a very hot instruction. The mutex's primary purpose is to prevent
concurrent io_uring system calls on the same io_ring_ctx. However, there
is already a flag IORING_SETUP_SINGLE_ISSUER that promises only one
task will make io_uring_enter() and io_uring_register() system calls on
the io_ring_ctx once it's enabled.
So if the io_ring_ctx is setup with IORING_SETUP_SINGLE_ISSUER, skip the
uring_lock mutex_lock() and mutex_unlock() for the io_uring_enter()
submission as well as for io_handle_tw_list(). io_uring_enter()
submission calls __io_uring_add_tctx_node_from_submit() to verify the
current task matches submitter_task for IORING_SETUP_SINGLE_ISSUER. And
task work can only be scheduled on tasks that submit io_uring requests,
so io_handle_tw_list() will also only be called on submitter_task.
There is a goto from the io_uring_enter() submission to the middle of
the IOPOLL block which assumed the uring_lock would already be held.
This is no longer the case for IORING_SETUP_SINGLE_ISSUER, so goto the
preceding mutex_lock() in that case.
It may be possible to avoid taking uring_lock in other places too for
IORING_SETUP_SINGLE_ISSUER, but these two cover the primary hot paths.
The uring_lock in io_uring_register() is necessary at least before the
io_uring is enabled because submitter_task isn't set yet. uring_lock is
also used to synchronize IOPOLL on submitting tasks with io_uring worker
tasks, so it's still needed there. But in principle, it should be
possible to remove the mutex entirely for IORING_SETUP_SINGLE_ISSUER by
running any code needing exclusive access to the io_ring_ctx in task
work context on submitter_task.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/io_uring.c | 6 +++++-
io_uring/io_uring.h | 14 ++++++++++++++
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7f19b6da5d3d..5793f6122159 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3534,12 +3534,15 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
if (ret != to_submit) {
io_ring_ctx_unlock(ctx);
goto out;
}
if (flags & IORING_ENTER_GETEVENTS) {
- if (ctx->syscall_iopoll)
+ if (ctx->syscall_iopoll) {
+ if (ctx->flags & IORING_SETUP_SINGLE_ISSUER)
+ goto iopoll;
goto iopoll_locked;
+ }
/*
* Ignore errors, we'll soon call io_cqring_wait() and
* it should handle ownership problems if any.
*/
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
@@ -3556,10 +3559,11 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
* We disallow the app entering submit/complete with
* polling, but we still need to lock the ring to
* prevent racing with polled issue that got punted to
* a workqueue.
*/
+iopoll:
mutex_lock(&ctx->uring_lock);
iopoll_locked:
ret2 = io_validate_ext_arg(ctx, flags, argp, argsz);
if (likely(!ret2))
ret2 = io_iopoll_check(ctx, min_complete);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index a0580a1bf6b5..7296b12b0897 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -121,20 +121,34 @@ bool io_match_task_safe(struct io_kiocb *head, struct io_uring_task *tctx,
void io_activate_pollwq(struct io_ring_ctx *ctx);
static inline void io_ring_ctx_lock(struct io_ring_ctx *ctx)
{
+ if (ctx->flags & IORING_SETUP_SINGLE_ISSUER) {
+ WARN_ON_ONCE(current != ctx->submitter_task);
+ return;
+ }
+
mutex_lock(&ctx->uring_lock);
}
static inline void io_ring_ctx_unlock(struct io_ring_ctx *ctx)
{
+ if (ctx->flags & IORING_SETUP_SINGLE_ISSUER) {
+ WARN_ON_ONCE(current != ctx->submitter_task);
+ return;
+ }
+
mutex_unlock(&ctx->uring_lock);
}
static inline void io_ring_ctx_assert_locked(const struct io_ring_ctx *ctx)
{
+ if (ctx->flags & IORING_SETUP_SINGLE_ISSUER &&
+ current == ctx->submitter_task)
+ return;
+
lockdep_assert_held(&ctx->uring_lock);
}
static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx)
{
--
2.45.2
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
` (3 preceding siblings ...)
2025-09-03 3:26 ` [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
@ 2025-09-03 21:55 ` syzbot ci
2025-09-03 23:29 ` Jens Axboe
4 siblings, 1 reply; 17+ messages in thread
From: syzbot ci @ 2025-09-03 21:55 UTC (permalink / raw)
To: axboe, csander, io-uring, linux-kernel; +Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v1] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
https://lore.kernel.org/all/20250903032656.2012337-1-csander@purestorage.com
* [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h
* [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers()
* [PATCH 3/4] io_uring: factor out uring_lock helpers
* [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
and found the following issue:
WARNING in io_handle_tw_list
Full report is available here:
https://ci.syzbot.org/series/54ae0eae-5e47-4cfe-9ae7-9eaaf959b5ae
***
WARNING in io_handle_tw_list
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: 5d50cf9f7cf20a17ac469c20a2e07c29c1f6aab7
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/1de646dd-4ee2-418d-9c62-617d88ed4fd2/config
syz repro: https://ci.syzbot.org/findings/e229a878-375f-4286-89fe-b6724c23addd/syz_repro
------------[ cut here ]------------
WARNING: io_uring/io_uring.h:127 at io_ring_ctx_lock io_uring/io_uring.h:127 [inline], CPU#1: iou-sqp-6294/6297
WARNING: io_uring/io_uring.h:127 at io_handle_tw_list+0x234/0x2e0 io_uring/io_uring.c:1155, CPU#1: iou-sqp-6294/6297
Modules linked in:
CPU: 1 UID: 0 PID: 6297 Comm: iou-sqp-6294 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:127 [inline]
RIP: 0010:io_handle_tw_list+0x234/0x2e0 io_uring/io_uring.c:1155
Code: 00 00 48 c7 c7 e0 90 02 8c be 8e 04 00 00 31 d2 e8 01 e5 d2 fc 2e 2e 2e 31 c0 45 31 e4 4d 85 ff 75 89 eb 7c e8 ad fb 00 fd 90 <0f> 0b 90 e9 cf fe ff ff 89 e9 80 e1 07 80 c1 03 38 c1 0f 8c 22 ff
RSP: 0018:ffffc900032cf938 EFLAGS: 00010293
RAX: ffffffff84bfcba3 RBX: dffffc0000000000 RCX: ffff888107f61cc0
RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
RBP: ffff8881119a8008 R08: ffff888110bb69c7 R09: 1ffff11022176d38
R10: dffffc0000000000 R11: ffffed1022176d39 R12: ffff8881119a8000
R13: ffff888108441e90 R14: ffff888107f61cc0 R15: 0000000000000000
FS: 00007f81f25716c0(0000) GS:ffff8881a39f5000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b31b63fff CR3: 000000010f24c000 CR4: 00000000000006f0
Call Trace:
<TASK>
tctx_task_work_run+0x99/0x370 io_uring/io_uring.c:1223
io_sq_tw io_uring/sqpoll.c:244 [inline]
io_sq_thread+0xed1/0x1e50 io_uring/sqpoll.c:327
ret_from_fork+0x47f/0x820 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-03 21:55 ` [syzbot ci] " syzbot ci
@ 2025-09-03 23:29 ` Jens Axboe
2025-09-04 14:52 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2025-09-03 23:29 UTC (permalink / raw)
To: syzbot ci, csander, io-uring, linux-kernel; +Cc: syzbot, syzkaller-bugs
On 9/3/25 3:55 PM, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v1] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
> https://lore.kernel.org/all/20250903032656.2012337-1-csander@purestorage.com
> * [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h
> * [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers()
> * [PATCH 3/4] io_uring: factor out uring_lock helpers
> * [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
>
> and found the following issue:
> WARNING in io_handle_tw_list
>
> Full report is available here:
> https://ci.syzbot.org/series/54ae0eae-5e47-4cfe-9ae7-9eaaf959b5ae
>
> ***
>
> WARNING in io_handle_tw_list
>
> tree: linux-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
> base: 5d50cf9f7cf20a17ac469c20a2e07c29c1f6aab7
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/1de646dd-4ee2-418d-9c62-617d88ed4fd2/config
> syz repro: https://ci.syzbot.org/findings/e229a878-375f-4286-89fe-b6724c23addd/syz_repro
>
> ------------[ cut here ]------------
> WARNING: io_uring/io_uring.h:127 at io_ring_ctx_lock io_uring/io_uring.h:127 [inline], CPU#1: iou-sqp-6294/6297
> WARNING: io_uring/io_uring.h:127 at io_handle_tw_list+0x234/0x2e0 io_uring/io_uring.c:1155, CPU#1: iou-sqp-6294/6297
> Modules linked in:
> CPU: 1 UID: 0 PID: 6297 Comm: iou-sqp-6294 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:127 [inline]
> RIP: 0010:io_handle_tw_list+0x234/0x2e0 io_uring/io_uring.c:1155
> Code: 00 00 48 c7 c7 e0 90 02 8c be 8e 04 00 00 31 d2 e8 01 e5 d2 fc 2e 2e 2e 31 c0 45 31 e4 4d 85 ff 75 89 eb 7c e8 ad fb 00 fd 90 <0f> 0b 90 e9 cf fe ff ff 89 e9 80 e1 07 80 c1 03 38 c1 0f 8c 22 ff
> RSP: 0018:ffffc900032cf938 EFLAGS: 00010293
> RAX: ffffffff84bfcba3 RBX: dffffc0000000000 RCX: ffff888107f61cc0
> RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
> RBP: ffff8881119a8008 R08: ffff888110bb69c7 R09: 1ffff11022176d38
> R10: dffffc0000000000 R11: ffffed1022176d39 R12: ffff8881119a8000
> R13: ffff888108441e90 R14: ffff888107f61cc0 R15: 0000000000000000
> FS: 00007f81f25716c0(0000) GS:ffff8881a39f5000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000001b31b63fff CR3: 000000010f24c000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> tctx_task_work_run+0x99/0x370 io_uring/io_uring.c:1223
> io_sq_tw io_uring/sqpoll.c:244 [inline]
> io_sq_thread+0xed1/0x1e50 io_uring/sqpoll.c:327
> ret_from_fork+0x47f/0x820 arch/x86/kernel/process.c:148
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
> </TASK>
Probably the sanest thing to do here is to clear
IORING_SETUP_SINGLE_ISSUER if it's set with IORING_SETUP_SQPOLL. If we
allow it, it'll be impossible to uphold the locking criteria on both the
issue and register side.
--
Jens Axboe
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-03 23:29 ` Jens Axboe
@ 2025-09-04 14:52 ` Caleb Sander Mateos
2025-09-04 16:46 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-04 14:52 UTC (permalink / raw)
To: Jens Axboe; +Cc: syzbot ci, io-uring, linux-kernel, syzbot, syzkaller-bugs
On Wed, Sep 3, 2025 at 4:30 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 9/3/25 3:55 PM, syzbot ci wrote:
> > [ syzbot report snipped ]
>
> Probably the sanest thing to do here is to clear
> IORING_SETUP_SINGLE_ISSUER if it's set with IORING_SETUP_SQPOLL. If we
> allow it, it'll be impossible to uphold the locking criteria on both the
> issue and register side.
Yup, I was thinking the same thing. Thanks for taking a look.
Best,
Caleb
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-04 14:52 ` Caleb Sander Mateos
@ 2025-09-04 16:46 ` Caleb Sander Mateos
2025-09-04 16:50 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-04 16:46 UTC (permalink / raw)
To: Jens Axboe; +Cc: syzbot ci, io-uring, linux-kernel, syzbot, syzkaller-bugs
On Thu, Sep 4, 2025 at 7:52 AM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> On Wed, Sep 3, 2025 at 4:30 PM Jens Axboe <axboe@kernel.dk> wrote:
> >
> > On 9/3/25 3:55 PM, syzbot ci wrote:
> > > [ syzbot report snipped ]
> >
> > Probably the sanest thing to do here is to clear
> > IORING_SETUP_SINGLE_ISSUER if it's set with IORING_SETUP_SQPOLL. If we
> > allow it, it'll be impossible to uphold the locking criteria on both the
> > issue and register side.
>
> Yup, I was thinking the same thing. Thanks for taking a look.
On further thought, IORING_SETUP_SQPOLL actually does guarantee a
single issuer. io_uring_enter() already avoids taking the uring_lock
in the IORING_SETUP_SQPOLL case because it doesn't issue any SQEs
itself. Only the SQ thread does that, so it *is* the single issuer.
The assertions I added in io_ring_ctx_lock()/io_ring_ctx_unlock() are
just unnecessarily strict. They should expect current ==
ctx->sq_data->thread in the IORING_SETUP_SQPOLL case.
Best,
Caleb
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-04 16:46 ` Caleb Sander Mateos
@ 2025-09-04 16:50 ` Caleb Sander Mateos
2025-09-04 23:25 ` Jens Axboe
0 siblings, 1 reply; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-09-04 16:50 UTC (permalink / raw)
To: Jens Axboe; +Cc: syzbot ci, io-uring, linux-kernel, syzbot, syzkaller-bugs
On Thu, Sep 4, 2025 at 9:46 AM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> On Thu, Sep 4, 2025 at 7:52 AM Caleb Sander Mateos
> <csander@purestorage.com> wrote:
> >
> > On Wed, Sep 3, 2025 at 4:30 PM Jens Axboe <axboe@kernel.dk> wrote:
> > >
> > > On 9/3/25 3:55 PM, syzbot ci wrote:
> > > > [ syzbot report snipped ]
> > >
> > > Probably the sanest thing to do here is to clear
> > > IORING_SETUP_SINGLE_ISSUER if it's set with IORING_SETUP_SQPOLL. If we
> > > allow it, it'll be impossible to uphold the locking criteria on both the
> > > issue and register side.
> >
> > Yup, I was thinking the same thing. Thanks for taking a look.
>
> On further thought, IORING_SETUP_SQPOLL actually does guarantee a
> single issuer. io_uring_enter() already avoids taking the uring_lock
> in the IORING_SETUP_SQPOLL case because it doesn't issue any SQEs
> itself. Only the SQ thread does that, so it *is* the single issuer.
> The assertions I added in io_ring_ctx_lock()/io_ring_ctx_unlock() is
> just unnecessarily strict. It should expect current ==
> ctx->sq_data->thread in the IORING_SETUP_SQPOLL case.
Oh, but you are totally correct about needing the mutex to synchronize
between issue on the SQ thread and io_uring_register() on other
threads. Yeah, I don't see an easy way to avoid taking the mutex on
the SQ thread unless we disallowed io_uring_register() completely.
Clearing IORING_SETUP_SINGLE_ISSUER seems like the best option for
now.
Best,
Caleb
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-09-04 16:50 ` Caleb Sander Mateos
@ 2025-09-04 23:25 ` Jens Axboe
0 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2025-09-04 23:25 UTC (permalink / raw)
To: Caleb Sander Mateos
Cc: syzbot ci, io-uring, linux-kernel, syzbot, syzkaller-bugs
On 9/4/25 10:50 AM, Caleb Sander Mateos wrote:
> On Thu, Sep 4, 2025 at 9:46 AM Caleb Sander Mateos
> <csander@purestorage.com> wrote:
>>
>> On Thu, Sep 4, 2025 at 7:52 AM Caleb Sander Mateos
>> <csander@purestorage.com> wrote:
>>>
>>> On Wed, Sep 3, 2025 at 4:30 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>
>>>> On 9/3/25 3:55 PM, syzbot ci wrote:
>>>>> [ syzbot report snipped ]
>>>>
>>>> Probably the sanest thing to do here is to clear
>>>> IORING_SETUP_SINGLE_ISSUER if it's set with IORING_SETUP_SQPOLL. If we
>>>> allow it, it'll be impossible to uphold the locking criteria on both the
>>>> issue and register side.
>>>
>>> Yup, I was thinking the same thing. Thanks for taking a look.
>>
>> On further thought, IORING_SETUP_SQPOLL actually does guarantee a
>> single issuer. io_uring_enter() already avoids taking the uring_lock
>> in the IORING_SETUP_SQPOLL case because it doesn't issue any SQEs
>> itself. Only the SQ thread does that, so it *is* the single issuer.
>> The assertions I added in io_ring_ctx_lock()/io_ring_ctx_unlock() is
>> just unnecessarily strict. It should expect current ==
>> ctx->sq_data->thread in the IORING_SETUP_SQPOLL case.
>
> Oh, but you are totally correct about needing the mutex to synchronize
> between issue on the SQ thread and io_uring_register() on other
> threads. Yeah, I don't see an easy way to avoid taking the mutex on
> the SQ thread unless we disallowed io_uring_register() completely.
> Clearing IORING_SETUP_SINGLE_ISSUER seems like the best option for
> now.
Right - I don't disagree that SQPOLL is the very definition of "single
issuer", but it'll still have to contend with the creating task doing
other operations that would need mutual exclusion. I don't think
clearing SINGLE_ISSUER on SQPOLL is a big deal, it's not like it's
worse off than before. It's just not getting the same optimizations that
the !SQPOLL single issuer path would get.
--
Jens Axboe
^ permalink raw reply [flat|nested] 17+ messages in thread
* [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-11-25 23:39 [PATCH v3 0/4] " Caleb Sander Mateos
@ 2025-11-26 8:15 ` syzbot ci
2025-11-26 17:30 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: syzbot ci @ 2025-11-26 8:15 UTC (permalink / raw)
To: axboe, csander, io-uring, linux-kernel; +Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v3] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
https://lore.kernel.org/all/20251125233928.3962947-1-csander@purestorage.com
* [PATCH v3 1/4] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
* [PATCH v3 2/4] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
* [PATCH v3 3/4] io_uring: factor out uring_lock helpers
* [PATCH v3 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
and found the following issues:
* SYZFAIL: failed to recv rpc
* WARNING in io_ring_ctx_wait_and_kill
* WARNING in io_uring_alloc_task_context
* WARNING: suspicious RCU usage in io_eventfd_unregister
Full report is available here:
https://ci.syzbot.org/series/dde98852-0135-44b2-bbef-9ff9d772f924
***
SYZFAIL: failed to recv rpc
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
C repro: https://ci.syzbot.org/findings/19ae4090-3486-4e2a-973e-dcb6ec3ba0d1/c_repro
syz repro: https://ci.syzbot.org/findings/19ae4090-3486-4e2a-973e-dcb6ec3ba0d1/syz_repro
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
***
WARNING in io_ring_ctx_wait_and_kill
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
C repro: https://ci.syzbot.org/findings/f5ff9320-bf6f-40b4-a6b3-eee18fa83053/c_repro
syz repro: https://ci.syzbot.org/findings/f5ff9320-bf6f-40b4-a6b3-eee18fa83053/syz_repro
------------[ cut here ]------------
WARNING: io_uring/io_uring.h:266 at io_ring_ctx_lock io_uring/io_uring.h:266 [inline], CPU#0: syz.0.17/5967
WARNING: io_uring/io_uring.h:266 at io_ring_ctx_wait_and_kill+0x35f/0x490 io_uring/io_uring.c:3119, CPU#0: syz.0.17/5967
Modules linked in:
CPU: 0 UID: 0 PID: 5967 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:266 [inline]
RIP: 0010:io_ring_ctx_wait_and_kill+0x35f/0x490 io_uring/io_uring.c:3119
Code: 4e 11 48 3b 84 24 20 01 00 00 0f 85 1e 01 00 00 48 8d 65 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 92 fa 96 00 90 <0f> 0b 90 e9 be fd ff ff 48 8d 7c 24 40 ba 70 00 00 00 31 f6 e8 08
RSP: 0018:ffffc90004117b80 EFLAGS: 00010293
RAX: ffffffff812ac5ee RBX: ffff88810d784000 RCX: ffff888104363a80
RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
RBP: ffffc90004117d00 R08: ffffc90004117c7f R09: 0000000000000000
R10: ffffc90004117c40 R11: fffff52000822f90 R12: 1ffff92000822f74
R13: dffffc0000000000 R14: ffffc90004117c70 R15: 0000000000000000
FS: 000055558ddb3500(0000) GS:ffff88818e88a000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f07135e7dac CR3: 00000001728f4000 CR4: 00000000000006f0
Call Trace:
<TASK>
io_uring_create+0x6b6/0x940 io_uring/io_uring.c:3738
io_uring_setup io_uring/io_uring.c:3764 [inline]
__do_sys_io_uring_setup io_uring/io_uring.c:3798 [inline]
__se_sys_io_uring_setup+0x235/0x240 io_uring/io_uring.c:3789
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f071338f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff80b05b58 EFLAGS: 00000246 ORIG_RAX: 00000000000001a9
RAX: ffffffffffffffda RBX: 00007f07135e5fa0 RCX: 00007f071338f749
RDX: 0000000000000000 RSI: 0000200000000040 RDI: 0000000000000024
RBP: 00007f0713413f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f07135e5fa0 R14: 00007f07135e5fa0 R15: 0000000000000002
</TASK>
***
WARNING in io_uring_alloc_task_context
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
C repro: https://ci.syzbot.org/findings/7aa56677-dbe1-4fdc-bbc4-cc701c10fa7e/c_repro
syz repro: https://ci.syzbot.org/findings/7aa56677-dbe1-4fdc-bbc4-cc701c10fa7e/syz_repro
------------[ cut here ]------------
WARNING: io_uring/io_uring.h:266 at io_ring_ctx_lock io_uring/io_uring.h:266 [inline], CPU#0: syz.0.17/5982
WARNING: io_uring/io_uring.h:266 at io_init_wq_offload io_uring/tctx.c:23 [inline], CPU#0: syz.0.17/5982
WARNING: io_uring/io_uring.h:266 at io_uring_alloc_task_context+0x677/0x8c0 io_uring/tctx.c:86, CPU#0: syz.0.17/5982
Modules linked in:
CPU: 0 UID: 0 PID: 5982 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:266 [inline]
RIP: 0010:io_init_wq_offload io_uring/tctx.c:23 [inline]
RIP: 0010:io_uring_alloc_task_context+0x677/0x8c0 io_uring/tctx.c:86
Code: d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 3d ad 96 00 bb f4 ff ff ff eb ab e8 31 ad 96 00 eb 9c e8 2a ad 96 00 90 <0f> 0b 90 e9 12 fb ff ff 4c 8d 64 24 60 4c 8d b4 24 f0 00 00 00 ba
RSP: 0018:ffffc90003dcf9c0 EFLAGS: 00010293
RAX: ffffffff812b1356 RBX: 0000000000000000 RCX: ffff8881777957c0
RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
RBP: ffffc90003dcfb50 R08: ffffffff8f7de377 R09: 1ffffffff1efbc6e
R10: dffffc0000000000 R11: fffffbfff1efbc6f R12: ffff8881052bf000
R13: ffff888104bf2000 R14: 0000000000001000 R15: 1ffff1102097e400
FS: 00005555613bd500(0000) GS:ffff88818e88a000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7773fe7dac CR3: 000000016cd1c000 CR4: 00000000000006f0
Call Trace:
<TASK>
__io_uring_add_tctx_node+0x455/0x710 io_uring/tctx.c:112
io_uring_create+0x559/0x940 io_uring/io_uring.c:3719
io_uring_setup io_uring/io_uring.c:3764 [inline]
__do_sys_io_uring_setup io_uring/io_uring.c:3798 [inline]
__se_sys_io_uring_setup+0x235/0x240 io_uring/io_uring.c:3789
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7773d8f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe094f0b68 EFLAGS: 00000246 ORIG_RAX: 00000000000001a9
RAX: ffffffffffffffda RBX: 00007f7773fe5fa0 RCX: 00007f7773d8f749
RDX: 0000000000000000 RSI: 0000200000000780 RDI: 0000000000000f08
RBP: 00007f7773e13f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7773fe5fa0 R14: 00007f7773fe5fa0 R15: 0000000000000002
</TASK>
***
WARNING: suspicious RCU usage in io_eventfd_unregister
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
C repro: https://ci.syzbot.org/findings/84c08f15-f4f9-4123-b889-1d8d19f3e0b1/c_repro
syz repro: https://ci.syzbot.org/findings/84c08f15-f4f9-4123-b889-1d8d19f3e0b1/syz_repro
=============================
WARNING: suspicious RCU usage
syzkaller #0 Not tainted
-----------------------------
io_uring/eventfd.c:160 suspicious rcu_dereference_protected() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
2 locks held by kworker/u10:12/3941:
#0: ffff888168f41148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x841/0x15a0 kernel/workqueue.c:3236
#1: ffffc90021f3fb80 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x868/0x15a0 kernel/workqueue.c:3237
stack backtrace:
CPU: 1 UID: 0 PID: 3941 Comm: kworker/u10:12 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: iou_exit io_ring_exit_work
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
lockdep_rcu_suspicious+0x140/0x1d0 kernel/locking/lockdep.c:6876
io_eventfd_unregister+0x18b/0x1c0 io_uring/eventfd.c:159
io_ring_ctx_free+0x18a/0x820 io_uring/io_uring.c:2882
io_ring_exit_work+0xe71/0x1030 io_uring/io_uring.c:3110
process_one_work+0x93a/0x15a0 kernel/workqueue.c:3261
process_scheduled_works kernel/workqueue.c:3344 [inline]
worker_thread+0x9b0/0xee0 kernel/workqueue.c:3425
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-11-26 8:15 ` [syzbot ci] " syzbot ci
@ 2025-11-26 17:30 ` Caleb Sander Mateos
0 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-11-26 17:30 UTC (permalink / raw)
To: syzbot ci; +Cc: axboe, io-uring, linux-kernel, syzbot, syzkaller-bugs
On Wed, Nov 26, 2025 at 12:15 AM syzbot ci
<syzbot+ci500177af251d1ddc@syzkaller.appspotmail.com> wrote:
>
> syzbot ci has tested the following series
>
> [v3] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
> https://lore.kernel.org/all/20251125233928.3962947-1-csander@purestorage.com
> * [PATCH v3 1/4] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
> * [PATCH v3 2/4] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
> * [PATCH v3 3/4] io_uring: factor out uring_lock helpers
> * [PATCH v3 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
>
> and found the following issues:
> * SYZFAIL: failed to recv rpc
Looks like this might be a side effect of the "WARNING: suspicious RCU
usage in io_eventfd_unregister" report.
> * WARNING in io_ring_ctx_wait_and_kill
Looks like io_ring_ctx_wait_and_kill() can be called on an
IORING_SETUP_SINGLE_ISSUER io_ring_ctx before submitter_task has been
set, either because io_uring_create() errors out or because an
IORING_SETUP_R_DISABLED io_ring_ctx is never enabled. I can relax this
WARN_ON_ONCE() condition.
> * WARNING in io_uring_alloc_task_context
Similar issue: __io_uring_add_tctx_node() is always called in
io_uring_create(), where submitter_task won't exist yet for
IORING_SETUP_SINGLE_ISSUER and IORING_SETUP_R_DISABLED.
> * WARNING: suspicious RCU usage in io_eventfd_unregister
Missed that io_eventfd_unregister() is also called from
io_ring_ctx_free(), not just __io_uring_register(). So we can't assert
that the uring_lock mutex is held.
Thanks, syzbot!
>
> Full report is available here:
> https://ci.syzbot.org/series/dde98852-0135-44b2-bbef-9ff9d772f924
>
> ***
>
> SYZFAIL: failed to recv rpc
>
> tree: linux-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
> base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
> C repro: https://ci.syzbot.org/findings/19ae4090-3486-4e2a-973e-dcb6ec3ba0d1/c_repro
> syz repro: https://ci.syzbot.org/findings/19ae4090-3486-4e2a-973e-dcb6ec3ba0d1/syz_repro
>
> SYZFAIL: failed to recv rpc
> fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
>
>
> ***
>
> WARNING in io_ring_ctx_wait_and_kill
>
> tree: linux-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
> base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
> C repro: https://ci.syzbot.org/findings/f5ff9320-bf6f-40b4-a6b3-eee18fa83053/c_repro
> syz repro: https://ci.syzbot.org/findings/f5ff9320-bf6f-40b4-a6b3-eee18fa83053/syz_repro
>
> ------------[ cut here ]------------
> WARNING: io_uring/io_uring.h:266 at io_ring_ctx_lock io_uring/io_uring.h:266 [inline], CPU#0: syz.0.17/5967
> WARNING: io_uring/io_uring.h:266 at io_ring_ctx_wait_and_kill+0x35f/0x490 io_uring/io_uring.c:3119, CPU#0: syz.0.17/5967
> Modules linked in:
> CPU: 0 UID: 0 PID: 5967 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:266 [inline]
> RIP: 0010:io_ring_ctx_wait_and_kill+0x35f/0x490 io_uring/io_uring.c:3119
> Code: 4e 11 48 3b 84 24 20 01 00 00 0f 85 1e 01 00 00 48 8d 65 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 92 fa 96 00 90 <0f> 0b 90 e9 be fd ff ff 48 8d 7c 24 40 ba 70 00 00 00 31 f6 e8 08
> RSP: 0018:ffffc90004117b80 EFLAGS: 00010293
> RAX: ffffffff812ac5ee RBX: ffff88810d784000 RCX: ffff888104363a80
> RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
> RBP: ffffc90004117d00 R08: ffffc90004117c7f R09: 0000000000000000
> R10: ffffc90004117c40 R11: fffff52000822f90 R12: 1ffff92000822f74
> R13: dffffc0000000000 R14: ffffc90004117c70 R15: 0000000000000000
> FS: 000055558ddb3500(0000) GS:ffff88818e88a000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f07135e7dac CR3: 00000001728f4000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> io_uring_create+0x6b6/0x940 io_uring/io_uring.c:3738
> io_uring_setup io_uring/io_uring.c:3764 [inline]
> __do_sys_io_uring_setup io_uring/io_uring.c:3798 [inline]
> __se_sys_io_uring_setup+0x235/0x240 io_uring/io_uring.c:3789
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f071338f749
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007fff80b05b58 EFLAGS: 00000246 ORIG_RAX: 00000000000001a9
> RAX: ffffffffffffffda RBX: 00007f07135e5fa0 RCX: 00007f071338f749
> RDX: 0000000000000000 RSI: 0000200000000040 RDI: 0000000000000024
> RBP: 00007f0713413f91 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007f07135e5fa0 R14: 00007f07135e5fa0 R15: 0000000000000002
> </TASK>
>
>
> ***
>
> WARNING in io_uring_alloc_task_context
>
> tree: linux-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
> base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
> C repro: https://ci.syzbot.org/findings/7aa56677-dbe1-4fdc-bbc4-cc701c10fa7e/c_repro
> syz repro: https://ci.syzbot.org/findings/7aa56677-dbe1-4fdc-bbc4-cc701c10fa7e/syz_repro
>
> ------------[ cut here ]------------
> WARNING: io_uring/io_uring.h:266 at io_ring_ctx_lock io_uring/io_uring.h:266 [inline], CPU#0: syz.0.17/5982
> WARNING: io_uring/io_uring.h:266 at io_init_wq_offload io_uring/tctx.c:23 [inline], CPU#0: syz.0.17/5982
> WARNING: io_uring/io_uring.h:266 at io_uring_alloc_task_context+0x677/0x8c0 io_uring/tctx.c:86, CPU#0: syz.0.17/5982
> Modules linked in:
> CPU: 0 UID: 0 PID: 5982 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:io_ring_ctx_lock io_uring/io_uring.h:266 [inline]
> RIP: 0010:io_init_wq_offload io_uring/tctx.c:23 [inline]
> RIP: 0010:io_uring_alloc_task_context+0x677/0x8c0 io_uring/tctx.c:86
> Code: d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 3d ad 96 00 bb f4 ff ff ff eb ab e8 31 ad 96 00 eb 9c e8 2a ad 96 00 90 <0f> 0b 90 e9 12 fb ff ff 4c 8d 64 24 60 4c 8d b4 24 f0 00 00 00 ba
> RSP: 0018:ffffc90003dcf9c0 EFLAGS: 00010293
> RAX: ffffffff812b1356 RBX: 0000000000000000 RCX: ffff8881777957c0
> RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000000000
> RBP: ffffc90003dcfb50 R08: ffffffff8f7de377 R09: 1ffffffff1efbc6e
> R10: dffffc0000000000 R11: fffffbfff1efbc6f R12: ffff8881052bf000
> R13: ffff888104bf2000 R14: 0000000000001000 R15: 1ffff1102097e400
> FS: 00005555613bd500(0000) GS:ffff88818e88a000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f7773fe7dac CR3: 000000016cd1c000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> __io_uring_add_tctx_node+0x455/0x710 io_uring/tctx.c:112
> io_uring_create+0x559/0x940 io_uring/io_uring.c:3719
> io_uring_setup io_uring/io_uring.c:3764 [inline]
> __do_sys_io_uring_setup io_uring/io_uring.c:3798 [inline]
> __se_sys_io_uring_setup+0x235/0x240 io_uring/io_uring.c:3789
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f7773d8f749
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007ffe094f0b68 EFLAGS: 00000246 ORIG_RAX: 00000000000001a9
> RAX: ffffffffffffffda RBX: 00007f7773fe5fa0 RCX: 00007f7773d8f749
> RDX: 0000000000000000 RSI: 0000200000000780 RDI: 0000000000000f08
> RBP: 00007f7773e13f91 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007f7773fe5fa0 R14: 00007f7773fe5fa0 R15: 0000000000000002
> </TASK>
>
>
> ***
>
> WARNING: suspicious RCU usage in io_eventfd_unregister
>
> tree: linux-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
> base: 92fd6e84175befa1775e5c0ab682938eca27c0b2
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/9d67ded7-d9a8-41e3-8b58-51340991cf96/config
> C repro: https://ci.syzbot.org/findings/84c08f15-f4f9-4123-b889-1d8d19f3e0b1/c_repro
> syz repro: https://ci.syzbot.org/findings/84c08f15-f4f9-4123-b889-1d8d19f3e0b1/syz_repro
>
> =============================
> WARNING: suspicious RCU usage
> syzkaller #0 Not tainted
> -----------------------------
> io_uring/eventfd.c:160 suspicious rcu_dereference_protected() usage!
>
> other info that might help us debug this:
>
>
> rcu_scheduler_active = 2, debug_locks = 1
> 2 locks held by kworker/u10:12/3941:
> #0: ffff888168f41148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x841/0x15a0 kernel/workqueue.c:3236
> #1: ffffc90021f3fb80 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x868/0x15a0 kernel/workqueue.c:3237
>
> stack backtrace:
> CPU: 1 UID: 0 PID: 3941 Comm: kworker/u10:12 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> Workqueue: iou_exit io_ring_exit_work
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> lockdep_rcu_suspicious+0x140/0x1d0 kernel/locking/lockdep.c:6876
> io_eventfd_unregister+0x18b/0x1c0 io_uring/eventfd.c:159
> io_ring_ctx_free+0x18a/0x820 io_uring/io_uring.c:2882
> io_ring_exit_work+0xe71/0x1030 io_uring/io_uring.c:3110
> process_one_work+0x93a/0x15a0 kernel/workqueue.c:3261
> process_scheduled_works kernel/workqueue.c:3344 [inline]
> worker_thread+0x9b0/0xee0 kernel/workqueue.c:3425
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
>
>
> ***
>
> If these findings have caused you to resend the series or submit a
> separate fix, please add the following tag to your commit message:
> Tested-by: syzbot@syzkaller.appspotmail.com
>
> ---
> This report is generated by a bot. It may contain errors.
> syzbot ci engineers can be reached at syzkaller@googlegroups.com.
* [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-12-15 20:09 [PATCH v5 0/6] " Caleb Sander Mateos
@ 2025-12-16 5:21 ` syzbot ci
2025-12-18 1:24 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: syzbot ci @ 2025-12-16 5:21 UTC (permalink / raw)
To: axboe, csander, io-uring, joannelkoong, linux-kernel, oliver.sang,
syzbot
Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v5] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
https://lore.kernel.org/all/20251215200909.3505001-1-csander@purestorage.com
* [PATCH v5 1/6] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
* [PATCH v5 2/6] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
* [PATCH v5 3/6] io_uring: ensure io_uring_create() initializes submitter_task
* [PATCH v5 4/6] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
* [PATCH v5 5/6] io_uring: factor out uring_lock helpers
* [PATCH v5 6/6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
and found the following issue:
KASAN: slab-use-after-free Read in task_work_add
Full report is available here:
https://ci.syzbot.org/series/bce89909-ebf2-45f6-be49-bbd46e33e966
***
KASAN: slab-use-after-free Read in task_work_add
tree: torvalds
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux
base: d358e5254674b70f34c847715ca509e46eb81e6f
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/db5ac991-f49c-460f-80e4-2a33be76fe7c/config
syz repro: https://ci.syzbot.org/findings/ddbf1feb-6618-4c0f-9a16-15b856f20d71/syz_repro
==================================================================
BUG: KASAN: slab-use-after-free in task_work_add+0xd7/0x440 kernel/task_work.c:73
Read of size 8 at addr ffff88816a8826f8 by task kworker/u9:2/54
CPU: 0 UID: 0 PID: 54 Comm: kworker/u9:2 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: iou_exit io_ring_exit_work
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x240 mm/kasan/report.c:482
kasan_report+0x118/0x150 mm/kasan/report.c:595
task_work_add+0xd7/0x440 kernel/task_work.c:73
io_ring_ctx_lock_nested io_uring/io_uring.h:271 [inline]
io_ring_ctx_lock io_uring/io_uring.h:282 [inline]
io_req_caches_free+0x342/0x3e0 io_uring/io_uring.c:2869
io_ring_ctx_free+0x56a/0x8e0 io_uring/io_uring.c:2908
io_ring_exit_work+0xff9/0x1220 io_uring/io_uring.c:3113
process_one_work kernel/workqueue.c:3257 [inline]
process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
Allocated by task 7671:
kasan_save_stack mm/kasan/common.c:56 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
unpoison_slab_object mm/kasan/common.c:339 [inline]
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:365
kasan_slab_alloc include/linux/kasan.h:252 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_node_noprof+0x43c/0x720 mm/slub.c:5315
alloc_task_struct_node kernel/fork.c:184 [inline]
dup_task_struct+0x57/0x9a0 kernel/fork.c:915
copy_process+0x4ea/0x3950 kernel/fork.c:2052
kernel_clone+0x21e/0x820 kernel/fork.c:2651
__do_sys_clone3 kernel/fork.c:2953 [inline]
__se_sys_clone3+0x256/0x2d0 kernel/fork.c:2932
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 6024:
kasan_save_stack mm/kasan/common.c:56 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
poison_slab_object mm/kasan/common.c:252 [inline]
__kasan_slab_free+0x5c/0x80 mm/kasan/common.c:284
kasan_slab_free include/linux/kasan.h:234 [inline]
slab_free_hook mm/slub.c:2540 [inline]
slab_free mm/slub.c:6668 [inline]
kmem_cache_free+0x197/0x620 mm/slub.c:6779
rcu_do_batch kernel/rcu/tree.c:2605 [inline]
rcu_core+0xd70/0x1870 kernel/rcu/tree.c:2857
handle_softirqs+0x27d/0x850 kernel/softirq.c:622
__do_softirq kernel/softirq.c:656 [inline]
invoke_softirq kernel/softirq.c:496 [inline]
__irq_exit_rcu+0xca/0x1f0 kernel/softirq.c:723
irq_exit_rcu+0x9/0x30 kernel/softirq.c:739
instr_sysvec_call_function_single arch/x86/kernel/smp.c:266 [inline]
sysvec_call_function_single+0xa3/0xc0 arch/x86/kernel/smp.c:266
asm_sysvec_call_function_single+0x1a/0x20 arch/x86/include/asm/idtentry.h:704
Last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:56
kasan_record_aux_stack+0xbd/0xd0 mm/kasan/generic.c:556
__call_rcu_common kernel/rcu/tree.c:3119 [inline]
call_rcu+0x157/0x9c0 kernel/rcu/tree.c:3239
rcu_do_batch kernel/rcu/tree.c:2605 [inline]
rcu_core+0xd70/0x1870 kernel/rcu/tree.c:2857
handle_softirqs+0x27d/0x850 kernel/softirq.c:622
run_ksoftirqd+0x9b/0x100 kernel/softirq.c:1063
smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Second to last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:56
kasan_record_aux_stack+0xbd/0xd0 mm/kasan/generic.c:556
__call_rcu_common kernel/rcu/tree.c:3119 [inline]
call_rcu+0x157/0x9c0 kernel/rcu/tree.c:3239
context_switch kernel/sched/core.c:5259 [inline]
__schedule+0x14c4/0x5000 kernel/sched/core.c:6863
preempt_schedule_irq+0xb5/0x150 kernel/sched/core.c:7190
irqentry_exit+0x5d8/0x660 kernel/entry/common.c:216
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
The buggy address belongs to the object at ffff88816a881d40
which belongs to the cache task_struct of size 7232
The buggy address is located 2488 bytes inside of
freed 7232-byte region [ffff88816a881d40, ffff88816a883980)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x16a880
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff8881726b0441
anon flags: 0x57ff00000000040(head|node=1|zone=2|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 057ff00000000040 ffff88816040a500 0000000000000000 0000000000000001
raw: 0000000000000000 0000000080040004 00000000f5000000 ffff8881726b0441
head: 057ff00000000040 ffff88816040a500 0000000000000000 0000000000000001
head: 0000000000000000 0000000080040004 00000000f5000000 ffff8881726b0441
head: 057ff00000000003 ffffea0005aa2001 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 7291, tgid 7291 (syz.2.649), ts 88142964676, free_ts 88127352940
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x234/0x290 mm/page_alloc.c:1846
prep_new_page mm/page_alloc.c:1854 [inline]
get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3915
__alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
alloc_slab_page mm/slub.c:3075 [inline]
allocate_slab+0x86/0x3b0 mm/slub.c:3248
new_slab mm/slub.c:3302 [inline]
___slab_alloc+0xf2b/0x1960 mm/slub.c:4656
__slab_alloc+0x65/0x100 mm/slub.c:4779
__slab_alloc_node mm/slub.c:4855 [inline]
slab_alloc_node mm/slub.c:5251 [inline]
kmem_cache_alloc_node_noprof+0x4ce/0x720 mm/slub.c:5315
alloc_task_struct_node kernel/fork.c:184 [inline]
dup_task_struct+0x57/0x9a0 kernel/fork.c:915
copy_process+0x4ea/0x3950 kernel/fork.c:2052
kernel_clone+0x21e/0x820 kernel/fork.c:2651
__do_sys_clone3 kernel/fork.c:2953 [inline]
__se_sys_clone3+0x256/0x2d0 kernel/fork.c:2932
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 5275 tgid 5275 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1395 [inline]
__free_frozen_pages+0xbc8/0xd30 mm/page_alloc.c:2943
__slab_free+0x21b/0x2a0 mm/slub.c:6004
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x97/0x100 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:349
kasan_slab_alloc include/linux/kasan.h:252 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
getname_flags+0xb8/0x540 fs/namei.c:146
getname include/linux/fs.h:2498 [inline]
do_sys_openat2+0xbc/0x200 fs/open.c:1426
do_sys_open fs/open.c:1436 [inline]
__do_sys_openat fs/open.c:1452 [inline]
__se_sys_openat fs/open.c:1447 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1447
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Memory state around the buggy address:
ffff88816a882580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88816a882600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88816a882680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88816a882700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88816a882780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-12-16 5:21 ` [syzbot ci] " syzbot ci
@ 2025-12-18 1:24 ` Caleb Sander Mateos
0 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-12-18 1:24 UTC (permalink / raw)
To: syzbot ci
Cc: axboe, io-uring, joannelkoong, linux-kernel, oliver.sang, syzbot,
syzbot, syzkaller-bugs
On Mon, Dec 15, 2025 at 9:21 PM syzbot ci
<syzbot+ci3ff889516a0b26a2@syzkaller.appspotmail.com> wrote:
>
> syzbot ci has tested the following series
>
> [v5] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
> https://lore.kernel.org/all/20251215200909.3505001-1-csander@purestorage.com
> * [PATCH v5 1/6] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
> * [PATCH v5 2/6] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
> * [PATCH v5 3/6] io_uring: ensure io_uring_create() initializes submitter_task
> * [PATCH v5 4/6] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
> * [PATCH v5 5/6] io_uring: factor out uring_lock helpers
> * [PATCH v5 6/6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
>
> and found the following issue:
> KASAN: slab-use-after-free Read in task_work_add
>
> Full report is available here:
> https://ci.syzbot.org/series/bce89909-ebf2-45f6-be49-bbd46e33e966
>
> ***
>
> KASAN: slab-use-after-free Read in task_work_add
>
> tree: torvalds
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux
> base: d358e5254674b70f34c847715ca509e46eb81e6f
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/db5ac991-f49c-460f-80e4-2a33be76fe7c/config
> syz repro: https://ci.syzbot.org/findings/ddbf1feb-6618-4c0f-9a16-15b856f20d71/syz_repro
>
> ==================================================================
> BUG: KASAN: slab-use-after-free in task_work_add+0xd7/0x440 kernel/task_work.c:73
> Read of size 8 at addr ffff88816a8826f8 by task kworker/u9:2/54
>
> CPU: 0 UID: 0 PID: 54 Comm: kworker/u9:2 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> Workqueue: iou_exit io_ring_exit_work
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> print_address_description mm/kasan/report.c:378 [inline]
> print_report+0xca/0x240 mm/kasan/report.c:482
> kasan_report+0x118/0x150 mm/kasan/report.c:595
> task_work_add+0xd7/0x440 kernel/task_work.c:73
> io_ring_ctx_lock_nested io_uring/io_uring.h:271 [inline]
> io_ring_ctx_lock io_uring/io_uring.h:282 [inline]
> io_req_caches_free+0x342/0x3e0 io_uring/io_uring.c:2869
> io_ring_ctx_free+0x56a/0x8e0 io_uring/io_uring.c:2908
The call to io_req_caches_free() comes after the
put_task_struct(ctx->submitter_task) call in io_ring_ctx_free(), so I
guess the task_struct may have already been freed when
io_ring_ctx_lock() is called. Should be simple enough to fix by just
moving the put_task_struct() call to the end of io_ring_ctx_free().
Looking at this made me realize one other small bug: it's incorrect to
assume that the uring lock has been acquired successfully whenever
task_work_add() fails because the submitter_task has exited. Even though
submitter_task will no longer be using the uring lock, other tasks
could. So this path needs to acquire the uring_lock mutex, similar to
the IORING_SETUP_SINGLE_ISSUER && IORING_SETUP_R_DISABLED case.
Thanks,
Caleb
> io_ring_exit_work+0xff9/0x1220 io_uring/io_uring.c:3113
> process_one_work kernel/workqueue.c:3257 [inline]
> process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
> worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
>
> Allocated by task 7671:
> kasan_save_stack mm/kasan/common.c:56 [inline]
> kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
> unpoison_slab_object mm/kasan/common.c:339 [inline]
> __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:365
> kasan_slab_alloc include/linux/kasan.h:252 [inline]
> slab_post_alloc_hook mm/slub.c:4953 [inline]
> slab_alloc_node mm/slub.c:5263 [inline]
> kmem_cache_alloc_node_noprof+0x43c/0x720 mm/slub.c:5315
> alloc_task_struct_node kernel/fork.c:184 [inline]
> dup_task_struct+0x57/0x9a0 kernel/fork.c:915
> copy_process+0x4ea/0x3950 kernel/fork.c:2052
> kernel_clone+0x21e/0x820 kernel/fork.c:2651
> __do_sys_clone3 kernel/fork.c:2953 [inline]
> __se_sys_clone3+0x256/0x2d0 kernel/fork.c:2932
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> Freed by task 6024:
> kasan_save_stack mm/kasan/common.c:56 [inline]
> kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
> kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
> poison_slab_object mm/kasan/common.c:252 [inline]
> __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:284
> kasan_slab_free include/linux/kasan.h:234 [inline]
> slab_free_hook mm/slub.c:2540 [inline]
> slab_free mm/slub.c:6668 [inline]
> kmem_cache_free+0x197/0x620 mm/slub.c:6779
> rcu_do_batch kernel/rcu/tree.c:2605 [inline]
> rcu_core+0xd70/0x1870 kernel/rcu/tree.c:2857
> handle_softirqs+0x27d/0x850 kernel/softirq.c:622
> __do_softirq kernel/softirq.c:656 [inline]
> invoke_softirq kernel/softirq.c:496 [inline]
> __irq_exit_rcu+0xca/0x1f0 kernel/softirq.c:723
> irq_exit_rcu+0x9/0x30 kernel/softirq.c:739
> instr_sysvec_call_function_single arch/x86/kernel/smp.c:266 [inline]
> sysvec_call_function_single+0xa3/0xc0 arch/x86/kernel/smp.c:266
> asm_sysvec_call_function_single+0x1a/0x20 arch/x86/include/asm/idtentry.h:704
>
> Last potentially related work creation:
> kasan_save_stack+0x3e/0x60 mm/kasan/common.c:56
> kasan_record_aux_stack+0xbd/0xd0 mm/kasan/generic.c:556
> __call_rcu_common kernel/rcu/tree.c:3119 [inline]
> call_rcu+0x157/0x9c0 kernel/rcu/tree.c:3239
> rcu_do_batch kernel/rcu/tree.c:2605 [inline]
> rcu_core+0xd70/0x1870 kernel/rcu/tree.c:2857
> handle_softirqs+0x27d/0x850 kernel/softirq.c:622
> run_ksoftirqd+0x9b/0x100 kernel/softirq.c:1063
> smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
>
> Second to last potentially related work creation:
> kasan_save_stack+0x3e/0x60 mm/kasan/common.c:56
> kasan_record_aux_stack+0xbd/0xd0 mm/kasan/generic.c:556
> __call_rcu_common kernel/rcu/tree.c:3119 [inline]
> call_rcu+0x157/0x9c0 kernel/rcu/tree.c:3239
> context_switch kernel/sched/core.c:5259 [inline]
> __schedule+0x14c4/0x5000 kernel/sched/core.c:6863
> preempt_schedule_irq+0xb5/0x150 kernel/sched/core.c:7190
> irqentry_exit+0x5d8/0x660 kernel/entry/common.c:216
> asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
>
> The buggy address belongs to the object at ffff88816a881d40
> which belongs to the cache task_struct of size 7232
> The buggy address is located 2488 bytes inside of
> freed 7232-byte region [ffff88816a881d40, ffff88816a883980)
>
> The buggy address belongs to the physical page:
> page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x16a880
> head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> memcg:ffff8881726b0441
> anon flags: 0x57ff00000000040(head|node=1|zone=2|lastcpupid=0x7ff)
> page_type: f5(slab)
> raw: 057ff00000000040 ffff88816040a500 0000000000000000 0000000000000001
> raw: 0000000000000000 0000000080040004 00000000f5000000 ffff8881726b0441
> head: 057ff00000000040 ffff88816040a500 0000000000000000 0000000000000001
> head: 0000000000000000 0000000080040004 00000000f5000000 ffff8881726b0441
> head: 057ff00000000003 ffffea0005aa2001 00000000ffffffff 00000000ffffffff
> head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
> page dumped because: kasan: bad access detected
> page_owner tracks the page as allocated
> page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 7291, tgid 7291 (syz.2.649), ts 88142964676, free_ts 88127352940
> set_page_owner include/linux/page_owner.h:32 [inline]
> post_alloc_hook+0x234/0x290 mm/page_alloc.c:1846
> prep_new_page mm/page_alloc.c:1854 [inline]
> get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3915
> __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
> alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
> alloc_slab_page mm/slub.c:3075 [inline]
> allocate_slab+0x86/0x3b0 mm/slub.c:3248
> new_slab mm/slub.c:3302 [inline]
> ___slab_alloc+0xf2b/0x1960 mm/slub.c:4656
> __slab_alloc+0x65/0x100 mm/slub.c:4779
> __slab_alloc_node mm/slub.c:4855 [inline]
> slab_alloc_node mm/slub.c:5251 [inline]
> kmem_cache_alloc_node_noprof+0x4ce/0x720 mm/slub.c:5315
> alloc_task_struct_node kernel/fork.c:184 [inline]
> dup_task_struct+0x57/0x9a0 kernel/fork.c:915
> copy_process+0x4ea/0x3950 kernel/fork.c:2052
> kernel_clone+0x21e/0x820 kernel/fork.c:2651
> __do_sys_clone3 kernel/fork.c:2953 [inline]
> __se_sys_clone3+0x256/0x2d0 kernel/fork.c:2932
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> page last free pid 5275 tgid 5275 stack trace:
> reset_page_owner include/linux/page_owner.h:25 [inline]
> free_pages_prepare mm/page_alloc.c:1395 [inline]
> __free_frozen_pages+0xbc8/0xd30 mm/page_alloc.c:2943
> __slab_free+0x21b/0x2a0 mm/slub.c:6004
> qlink_free mm/kasan/quarantine.c:163 [inline]
> qlist_free_all+0x97/0x100 mm/kasan/quarantine.c:179
> kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
> __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:349
> kasan_slab_alloc include/linux/kasan.h:252 [inline]
> slab_post_alloc_hook mm/slub.c:4953 [inline]
> slab_alloc_node mm/slub.c:5263 [inline]
> kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
> getname_flags+0xb8/0x540 fs/namei.c:146
> getname include/linux/fs.h:2498 [inline]
> do_sys_openat2+0xbc/0x200 fs/open.c:1426
> do_sys_open fs/open.c:1436 [inline]
> __do_sys_openat fs/open.c:1452 [inline]
> __se_sys_openat fs/open.c:1447 [inline]
> __x64_sys_openat+0x138/0x170 fs/open.c:1447
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> Memory state around the buggy address:
> ffff88816a882580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ffff88816a882600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> >ffff88816a882680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ^
> ffff88816a882700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ffff88816a882780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ==================================================================
>
>
> ***
>
> If these findings have caused you to resend the series or submit a
> separate fix, please add the following tag to your commit message:
> Tested-by: syzbot@syzkaller.appspotmail.com
>
> ---
> This report is generated by a bot. It may contain errors.
> syzbot ci engineers can be reached at syzkaller@googlegroups.com.
^ permalink raw reply [flat|nested] 17+ messages in thread
* [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-12-18 2:44 [PATCH v6 0/6] " Caleb Sander Mateos
@ 2025-12-18 8:01 ` syzbot ci
2025-12-22 20:19 ` Caleb Sander Mateos
0 siblings, 1 reply; 17+ messages in thread
From: syzbot ci @ 2025-12-18 8:01 UTC (permalink / raw)
To: axboe, csander, io-uring, joannelkoong, linux-kernel, oliver.sang,
syzbot
Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
https://lore.kernel.org/all/20251218024459.1083572-1-csander@purestorage.com
* [PATCH v6 1/6] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
* [PATCH v6 2/6] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
* [PATCH v6 3/6] io_uring: ensure submitter_task is valid for io_ring_ctx's lifetime
* [PATCH v6 4/6] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
* [PATCH v6 5/6] io_uring: factor out uring_lock helpers
* [PATCH v6 6/6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
and found the following issue:
INFO: task hung in io_wq_put_and_exit
Full report is available here:
https://ci.syzbot.org/series/21eac721-670b-4f34-9696-66f9b28233ac
***
INFO: task hung in io_wq_put_and_exit
tree: torvalds
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux
base: d358e5254674b70f34c847715ca509e46eb81e6f
arch: amd64
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config: https://ci.syzbot.org/builds/1710cffe-7d78-4489-9aa1-823b8c2532ed/config
syz repro: https://ci.syzbot.org/findings/74ae8703-9484-4d82-aa78-84cc37dcb1ef/syz_repro
INFO: task syz.1.18:6046 blocked for more than 143 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.18 state:D stack:25672 pid:6046 tgid:6045 ppid:5971 task_flags:0x400548 flags:0x00080004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x345/0x2310 kernel/exit.c:911
do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
get_signal+0x1285/0x1340 kernel/signal.c:3034
arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6a8b58f7c9
RSP: 002b:00007f6a8c4a00e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: 0000000000000001 RBX: 00007f6a8b7e5fa8 RCX: 00007f6a8b58f7c9
RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007f6a8b7e5fac
RBP: 00007f6a8b7e5fa0 R08: 3fffffffffffffff R09: 0000000000000000
R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6a8b7e6038 R14: 00007ffcac96d220 R15: 00007ffcac96d308
</TASK>
INFO: task iou-wrk-6046:6047 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:iou-wrk-6046 state:D stack:27760 pid:6047 tgid:6045 ppid:5971 task_flags:0x404050 flags:0x00080002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
io_ring_submit_lock io_uring/io_uring.h:554 [inline]
io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
__io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
INFO: task syz.0.17:6049 blocked for more than 143 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17 state:D stack:25592 pid:6049 tgid:6048 ppid:5967 task_flags:0x400548 flags:0x00080004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x345/0x2310 kernel/exit.c:911
do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
get_signal+0x1285/0x1340 kernel/signal.c:3034
arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa96a98f7c9
RSP: 002b:00007fa96b7430e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: 0000000000000001 RBX: 00007fa96abe5fa8 RCX: 00007fa96a98f7c9
RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007fa96abe5fac
RBP: 00007fa96abe5fa0 R08: 3fffffffffffffff R09: 0000000000000000
R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa96abe6038 R14: 00007ffd9fcc00d0 R15: 00007ffd9fcc01b8
</TASK>
INFO: task iou-wrk-6049:6050 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:iou-wrk-6049 state:D stack:27760 pid:6050 tgid:6048 ppid:5967 task_flags:0x404050 flags:0x00080002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
io_ring_submit_lock io_uring/io_uring.h:554 [inline]
io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
__io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
INFO: task syz.2.19:6052 blocked for more than 144 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.19 state:D stack:26208 pid:6052 tgid:6051 ppid:5972 task_flags:0x400548 flags:0x00080004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x345/0x2310 kernel/exit.c:911
do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
get_signal+0x1285/0x1340 kernel/signal.c:3034
arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4b5cb8f7c9
RSP: 002b:00007f4b5d9a80e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: 0000000000000001 RBX: 00007f4b5cde5fa8 RCX: 00007f4b5cb8f7c9
RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007f4b5cde5fac
RBP: 00007f4b5cde5fa0 R08: 3fffffffffffffff R09: 0000000000000000
R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4b5cde6038 R14: 00007ffcdd64aed0 R15: 00007ffcdd64afb8
</TASK>
INFO: task iou-wrk-6052:6053 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:iou-wrk-6052 state:D stack:27760 pid:6053 tgid:6051 ppid:5972 task_flags:0x404050 flags:0x00080006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x14bc/0x5000 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0x165/0x360 kernel/sched/core.c:6960
schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common kernel/sched/completion.c:121 [inline]
wait_for_common kernel/sched/completion.c:132 [inline]
wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
io_ring_submit_lock io_uring/io_uring.h:554 [inline]
io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
__io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/35:
#0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
#0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
5 locks held by kworker/u10:8/1120:
#0: ffff88823c63a918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:639
#1: ffff88823c624588 (psi_seq){-.-.}-{0:0}, at: psi_task_switch+0x53/0x880 kernel/sched/psi.c:933
#2: ffff88810ac50788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6363 [inline]
#2: ffff88810ac50788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: cfg80211_wiphy_work+0xb4/0x450 net/wireless/core.c:424
#3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
#3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: ieee80211_sta_active_ibss+0xc3/0x330 net/mac80211/ibss.c:635
#4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
#4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
#4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: unwind_next_frame+0xa5/0x2390 arch/x86/kernel/unwind_orc.c:479
2 locks held by getty/5656:
#0: ffff8881133040a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc900035732f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x449/0x1460 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:9/6480:
#0: ffff888100075948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
#0: ffff888100075948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
#1: ffffc9000546fb80 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
#1: ffffc9000546fb80 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
#2: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
1 lock held by syz-executor/6649:
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
2 locks held by syz-executor/6651:
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
#0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
#1: ffff88823c63a918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:639
4 locks held by syz-executor/6653:
=============================================
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 35 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
__sys_info lib/sys_info.c:157 [inline]
sys_info+0x135/0x170 lib/sys_info.c:165
check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
watchdog+0xf95/0xfe0 kernel/hung_task.c:515
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 6653 Comm: syz-executor Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:io_serial_out+0x7c/0xc0 drivers/tty/serial/8250/8250_port.c:407
Code: 3f a6 fc 44 89 f9 d3 e5 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 ec 91 0c fd 41 03 2e 89 d8 89 ea ee <5b> 41 5c 41 5e 41 5f 5d c3 cc cc cc cc cc 44 89 f9 80 e1 07 38 c1
RSP: 0018:ffffc90008156590 EFLAGS: 00000002
RAX: 000000000000005b RBX: 000000000000005b RCX: 0000000000000000
RDX: 00000000000003f8 RSI: 0000000000000000 RDI: 0000000000000020
RBP: 00000000000003f8 R08: ffff888102f08237 R09: 1ffff110205e1046
R10: dffffc0000000000 R11: ffffffff851b9060 R12: dffffc0000000000
R13: ffffffff998dd9e1 R14: ffffffff99bf2420 R15: 0000000000000000
FS: 0000555595186500(0000) GS:ffff8882a9e37000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055599f9c9018 CR3: 0000000112ed8000 CR4: 00000000000006f0
Call Trace:
<TASK>
serial_port_out include/linux/serial_core.h:811 [inline]
serial8250_console_putchar drivers/tty/serial/8250/8250_port.c:3192 [inline]
serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:-1 [inline]
serial8250_console_write+0x1410/0x1ba0 drivers/tty/serial/8250/8250_port.c:3342
console_emit_next_record kernel/printk/printk.c:3129 [inline]
console_flush_one_record kernel/printk/printk.c:3215 [inline]
console_flush_all+0x745/0xb60 kernel/printk/printk.c:3289
__console_flush_and_unlock kernel/printk/printk.c:3319 [inline]
console_unlock+0xbb/0x190 kernel/printk/printk.c:3359
vprintk_emit+0x4f8/0x5f0 kernel/printk/printk.c:2426
_printk+0xcf/0x120 kernel/printk/printk.c:2451
br_set_state+0x475/0x710 net/bridge/br_stp.c:57
br_init_port+0x99/0x200 net/bridge/br_stp_if.c:39
new_nbp+0x2f9/0x440 net/bridge/br_if.c:443
br_add_if+0x283/0xeb0 net/bridge/br_if.c:586
do_set_master+0x533/0x6d0 net/core/rtnetlink.c:2963
do_setlink+0xcf0/0x41c0 net/core/rtnetlink.c:3165
rtnl_changelink net/core/rtnetlink.c:3776 [inline]
__rtnl_newlink net/core/rtnetlink.c:3935 [inline]
rtnl_newlink+0x161c/0x1c90 net/core/rtnetlink.c:4072
rtnetlink_rcv_msg+0x7cf/0xb70 net/core/rtnetlink.c:6958
netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2550
netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1344
netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1894
sock_sendmsg_nosec net/socket.c:727 [inline]
__sock_sendmsg+0x21c/0x270 net/socket.c:742
__sys_sendto+0x3bd/0x520 net/socket.c:2206
__do_sys_sendto net/socket.c:2213 [inline]
__se_sys_sendto net/socket.c:2209 [inline]
__x64_sys_sendto+0xde/0x100 net/socket.c:2209
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f780c39165c
Code: 2a 5f 02 00 44 8b 4c 24 2c 4c 8b 44 24 20 89 c5 44 8b 54 24 28 48 8b 54 24 18 b8 2c 00 00 00 48 8b 74 24 10 8b 7c 24 08 0f 05 <48> 3d 00 f0 ff ff 77 34 89 ef 48 89 44 24 08 e8 70 5f 02 00 48 8b
RSP: 002b:00007ffcecb618b0 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f780d114620 RCX: 00007f780c39165c
RDX: 0000000000000028 RSI: 00007f780d114670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007ffcecb61904 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f780d114670 R15: 0000000000000000
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [syzbot ci] Re: io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
2025-12-18 8:01 ` [syzbot ci] " syzbot ci
@ 2025-12-22 20:19 ` Caleb Sander Mateos
0 siblings, 0 replies; 17+ messages in thread
From: Caleb Sander Mateos @ 2025-12-22 20:19 UTC (permalink / raw)
To: syzbot ci
Cc: axboe, io-uring, joannelkoong, linux-kernel, oliver.sang, syzbot,
syzbot, syzkaller-bugs
On Thu, Dec 18, 2025 at 3:01 AM syzbot ci
<syzbot+ci6d21afd0455de45a@syzkaller.appspotmail.com> wrote:
>
> syzbot ci has tested the following series
>
> [v6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
> https://lore.kernel.org/all/20251218024459.1083572-1-csander@purestorage.com
> * [PATCH v6 1/6] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
> * [PATCH v6 2/6] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
> * [PATCH v6 3/6] io_uring: ensure submitter_task is valid for io_ring_ctx's lifetime
> * [PATCH v6 4/6] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
> * [PATCH v6 5/6] io_uring: factor out uring_lock helpers
> * [PATCH v6 6/6] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER
>
> and found the following issue:
> INFO: task hung in io_wq_put_and_exit
>
> Full report is available here:
> https://ci.syzbot.org/series/21eac721-670b-4f34-9696-66f9b28233ac
>
> ***
>
> INFO: task hung in io_wq_put_and_exit
>
> tree: torvalds
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux
> base: d358e5254674b70f34c847715ca509e46eb81e6f
> arch: amd64
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> config: https://ci.syzbot.org/builds/1710cffe-7d78-4489-9aa1-823b8c2532ed/config
> syz repro: https://ci.syzbot.org/findings/74ae8703-9484-4d82-aa78-84cc37dcb1ef/syz_repro
>
> INFO: task syz.1.18:6046 blocked for more than 143 seconds.
> Not tainted syzkaller #0
> Blocked by coredump.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:syz.1.18 state:D stack:25672 pid:6046 tgid:6045 ppid:5971 task_flags:0x400548 flags:0x00080004
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
> io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
> io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
> io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
> io_uring_files_cancel include/linux/io_uring.h:19 [inline]
> do_exit+0x345/0x2310 kernel/exit.c:911
> do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
> get_signal+0x1285/0x1340 kernel/signal.c:3034
> arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
> __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
> exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
> __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
> syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
> syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
> syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
> do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f6a8b58f7c9
> RSP: 002b:00007f6a8c4a00e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
> RAX: 0000000000000001 RBX: 00007f6a8b7e5fa8 RCX: 00007f6a8b58f7c9
> RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007f6a8b7e5fac
> RBP: 00007f6a8b7e5fa0 R08: 3fffffffffffffff R09: 0000000000000000
> R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007f6a8b7e6038 R14: 00007ffcac96d220 R15: 00007ffcac96d308
> </TASK>
> INFO: task iou-wrk-6046:6047 blocked for more than 143 seconds.
> Not tainted syzkaller #0
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:iou-wrk-6046 state:D stack:27760 pid:6047 tgid:6045 ppid:5971 task_flags:0x404050 flags:0x00080002
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
> io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
> io_ring_submit_lock io_uring/io_uring.h:554 [inline]
> io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
> __io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
> io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
> io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
> io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
> io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
Interesting: a deadlock between io_wq_exit_workers() on the
submitter_task (which is exiting) and io_ring_ctx_lock() on an io_uring
worker thread. io_ring_ctx_lock() blocks until the submitter_task runs
task work, but that will never happen because the submitter_task is
itself waiting in io_wq_exit_workers() for the worker's completion. Not
sure what the best approach is here. Maybe have the submitter_task
alternate between running task work and waiting on the completion? Or
add some way for the submitter_task to indicate that it's exiting and
disable the IORING_SETUP_SINGLE_ISSUER optimization in
io_ring_ctx_lock()?
Thanks,
Caleb
> INFO: task syz.0.17:6049 blocked for more than 143 seconds.
> Not tainted syzkaller #0
> Blocked by coredump.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:syz.0.17 state:D stack:25592 pid:6049 tgid:6048 ppid:5967 task_flags:0x400548 flags:0x00080004
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
> io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
> io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
> io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
> io_uring_files_cancel include/linux/io_uring.h:19 [inline]
> do_exit+0x345/0x2310 kernel/exit.c:911
> do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
> get_signal+0x1285/0x1340 kernel/signal.c:3034
> arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
> __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
> exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
> __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
> syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
> syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
> syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
> do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7fa96a98f7c9
> RSP: 002b:00007fa96b7430e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
> RAX: 0000000000000001 RBX: 00007fa96abe5fa8 RCX: 00007fa96a98f7c9
> RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007fa96abe5fac
> RBP: 00007fa96abe5fa0 R08: 3fffffffffffffff R09: 0000000000000000
> R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007fa96abe6038 R14: 00007ffd9fcc00d0 R15: 00007ffd9fcc01b8
> </TASK>
> INFO: task iou-wrk-6049:6050 blocked for more than 143 seconds.
> Not tainted syzkaller #0
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:iou-wrk-6049 state:D stack:27760 pid:6050 tgid:6048 ppid:5967 task_flags:0x404050 flags:0x00080002
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
> io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
> io_ring_submit_lock io_uring/io_uring.h:554 [inline]
> io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
> __io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
> io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
> io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
> io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
> io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
> INFO: task syz.2.19:6052 blocked for more than 144 seconds.
> Not tainted syzkaller #0
> Blocked by coredump.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:syz.2.19 state:D stack:26208 pid:6052 tgid:6051 ppid:5972 task_flags:0x400548 flags:0x00080004
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
> io_wq_put_and_exit+0x316/0x650 io_uring/io-wq.c:1356
> io_uring_clean_tctx+0x11f/0x1a0 io_uring/tctx.c:207
> io_uring_cancel_generic+0x6ca/0x7d0 io_uring/cancel.c:652
> io_uring_files_cancel include/linux/io_uring.h:19 [inline]
> do_exit+0x345/0x2310 kernel/exit.c:911
> do_group_exit+0x21c/0x2d0 kernel/exit.c:1112
> get_signal+0x1285/0x1340 kernel/signal.c:3034
> arch_do_signal_or_restart+0x9a/0x7a0 arch/x86/kernel/signal.c:337
> __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
> exit_to_user_mode_loop+0x87/0x4f0 kernel/entry/common.c:75
> __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
> syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
> syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
> syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
> do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f4b5cb8f7c9
> RSP: 002b:00007f4b5d9a80e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
> RAX: 0000000000000001 RBX: 00007f4b5cde5fa8 RCX: 00007f4b5cb8f7c9
> RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007f4b5cde5fac
> RBP: 00007f4b5cde5fa0 R08: 3fffffffffffffff R09: 0000000000000000
> R10: 0000000000000800 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007f4b5cde6038 R14: 00007ffcdd64aed0 R15: 00007ffcdd64afb8
> </TASK>
> INFO: task iou-wrk-6052:6053 blocked for more than 144 seconds.
> Not tainted syzkaller #0
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:iou-wrk-6052 state:D stack:27760 pid:6053 tgid:6051 ppid:5972 task_flags:0x404050 flags:0x00080006
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5256 [inline]
> __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
> __schedule_loop kernel/sched/core.c:6945 [inline]
> schedule+0x165/0x360 kernel/sched/core.c:6960
> schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
> do_wait_for_common kernel/sched/completion.c:100 [inline]
> __wait_for_common kernel/sched/completion.c:121 [inline]
> wait_for_common kernel/sched/completion.c:132 [inline]
> wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
> io_ring_ctx_lock_nested+0x2b3/0x380 io_uring/io_uring.h:283
> io_ring_ctx_lock io_uring/io_uring.h:290 [inline]
> io_ring_submit_lock io_uring/io_uring.h:554 [inline]
> io_files_update+0x677/0x7f0 io_uring/rsrc.c:504
> __io_issue_sqe+0x181/0x4b0 io_uring/io_uring.c:1818
> io_issue_sqe+0x1de/0x1190 io_uring/io_uring.c:1841
> io_wq_submit_work+0x6e9/0xb90 io_uring/io_uring.c:1953
> io_worker_handle_work+0x7cd/0x1180 io_uring/io-wq.c:650
> io_wq_worker+0x42f/0xeb0 io_uring/io-wq.c:704
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
>
> Showing all locks held in the system:
> 1 lock held by khungtaskd/35:
> #0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
> #0: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
> 5 locks held by kworker/u10:8/1120:
> #0: ffff88823c63a918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:639
> #1: ffff88823c624588 (psi_seq){-.-.}-{0:0}, at: psi_task_switch+0x53/0x880 kernel/sched/psi.c:933
> #2: ffff88810ac50788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6363 [inline]
> #2: ffff88810ac50788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: cfg80211_wiphy_work+0xb4/0x450 net/wireless/core.c:424
> #3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
> #3: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: ieee80211_sta_active_ibss+0xc3/0x330 net/mac80211/ibss.c:635
> #4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
> #4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
> #4: ffffffff8df419e0 (rcu_read_lock){....}-{1:3}, at: unwind_next_frame+0xa5/0x2390 arch/x86/kernel/unwind_orc.c:479
> 2 locks held by getty/5656:
> #0: ffff8881133040a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
> #1: ffffc900035732f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x449/0x1460 drivers/tty/n_tty.c:2211
> 3 locks held by kworker/0:9/6480:
> #0: ffff888100075948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
> #0: ffff888100075948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
> #1: ffffc9000546fb80 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
> #1: ffffc9000546fb80 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
> #2: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
> 1 lock held by syz-executor/6649:
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
> 2 locks held by syz-executor/6651:
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
> #0: ffffffff8f30ffc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
> #1: ffff88823c63a918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:639
> 4 locks held by syz-executor/6653:
>
> =============================================
>
> NMI backtrace for cpu 0
> CPU: 0 UID: 0 PID: 35 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
> nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
> trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
> __sys_info lib/sys_info.c:157 [inline]
> sys_info+0x135/0x170 lib/sys_info.c:165
> check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
> watchdog+0xf95/0xfe0 kernel/hung_task.c:515
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
> </TASK>
> Sending NMI from CPU 0 to CPUs 1:
> NMI backtrace for cpu 1
> CPU: 1 UID: 0 PID: 6653 Comm: syz-executor Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:io_serial_out+0x7c/0xc0 drivers/tty/serial/8250/8250_port.c:407
> Code: 3f a6 fc 44 89 f9 d3 e5 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 ec 91 0c fd 41 03 2e 89 d8 89 ea ee <5b> 41 5c 41 5e 41 5f 5d c3 cc cc cc cc cc 44 89 f9 80 e1 07 38 c1
> RSP: 0018:ffffc90008156590 EFLAGS: 00000002
> RAX: 000000000000005b RBX: 000000000000005b RCX: 0000000000000000
> RDX: 00000000000003f8 RSI: 0000000000000000 RDI: 0000000000000020
> RBP: 00000000000003f8 R08: ffff888102f08237 R09: 1ffff110205e1046
> R10: dffffc0000000000 R11: ffffffff851b9060 R12: dffffc0000000000
> R13: ffffffff998dd9e1 R14: ffffffff99bf2420 R15: 0000000000000000
> FS: 0000555595186500(0000) GS:ffff8882a9e37000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000055599f9c9018 CR3: 0000000112ed8000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> serial_port_out include/linux/serial_core.h:811 [inline]
> serial8250_console_putchar drivers/tty/serial/8250/8250_port.c:3192 [inline]
> serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:-1 [inline]
> serial8250_console_write+0x1410/0x1ba0 drivers/tty/serial/8250/8250_port.c:3342
> console_emit_next_record kernel/printk/printk.c:3129 [inline]
> console_flush_one_record kernel/printk/printk.c:3215 [inline]
> console_flush_all+0x745/0xb60 kernel/printk/printk.c:3289
> __console_flush_and_unlock kernel/printk/printk.c:3319 [inline]
> console_unlock+0xbb/0x190 kernel/printk/printk.c:3359
> vprintk_emit+0x4f8/0x5f0 kernel/printk/printk.c:2426
> _printk+0xcf/0x120 kernel/printk/printk.c:2451
> br_set_state+0x475/0x710 net/bridge/br_stp.c:57
> br_init_port+0x99/0x200 net/bridge/br_stp_if.c:39
> new_nbp+0x2f9/0x440 net/bridge/br_if.c:443
> br_add_if+0x283/0xeb0 net/bridge/br_if.c:586
> do_set_master+0x533/0x6d0 net/core/rtnetlink.c:2963
> do_setlink+0xcf0/0x41c0 net/core/rtnetlink.c:3165
> rtnl_changelink net/core/rtnetlink.c:3776 [inline]
> __rtnl_newlink net/core/rtnetlink.c:3935 [inline]
> rtnl_newlink+0x161c/0x1c90 net/core/rtnetlink.c:4072
> rtnetlink_rcv_msg+0x7cf/0xb70 net/core/rtnetlink.c:6958
> netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2550
> netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
> netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1344
> netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1894
> sock_sendmsg_nosec net/socket.c:727 [inline]
> __sock_sendmsg+0x21c/0x270 net/socket.c:742
> __sys_sendto+0x3bd/0x520 net/socket.c:2206
> __do_sys_sendto net/socket.c:2213 [inline]
> __se_sys_sendto net/socket.c:2209 [inline]
> __x64_sys_sendto+0xde/0x100 net/socket.c:2209
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f780c39165c
> Code: 2a 5f 02 00 44 8b 4c 24 2c 4c 8b 44 24 20 89 c5 44 8b 54 24 28 48 8b 54 24 18 b8 2c 00 00 00 48 8b 74 24 10 8b 7c 24 08 0f 05 <48> 3d 00 f0 ff ff 77 34 89 ef 48 89 44 24 08 e8 70 5f 02 00 48 8b
> RSP: 002b:00007ffcecb618b0 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
> RAX: ffffffffffffffda RBX: 00007f780d114620 RCX: 00007f780c39165c
> RDX: 0000000000000028 RSI: 00007f780d114670 RDI: 0000000000000003
> RBP: 0000000000000000 R08: 00007ffcecb61904 R09: 000000000000000c
> R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
> R13: 0000000000000000 R14: 00007f780d114670 R15: 0000000000000000
> </TASK>
>
>
> ***
>
> If these findings have caused you to resend the series or submit a
> separate fix, please add the following tag to your commit message:
> Tested-by: syzbot@syzkaller.appspotmail.com
>
> ---
> This report is generated by a bot. It may contain errors.
> syzbot ci engineers can be reached at syzkaller@googlegroups.com.
2025-09-03 3:26 [PATCH 0/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 1/4] io_uring: don't include filetable.h in io_uring.h Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 2/4] io_uring/rsrc: respect submitter_task in io_register_clone_buffers() Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 3/4] io_uring: factor out uring_lock helpers Caleb Sander Mateos
2025-09-03 3:26 ` [PATCH 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Caleb Sander Mateos
2025-09-03 21:55 ` [syzbot ci] " syzbot ci
2025-09-03 23:29 ` Jens Axboe
2025-09-04 14:52 ` Caleb Sander Mateos
2025-09-04 16:46 ` Caleb Sander Mateos
2025-09-04 16:50 ` Caleb Sander Mateos
2025-09-04 23:25 ` Jens Axboe
-- strict thread matches above, loose matches on Subject: below --
2025-11-25 23:39 [PATCH v3 0/4] " Caleb Sander Mateos
2025-11-26 8:15 ` [syzbot ci] " syzbot ci
2025-11-26 17:30 ` Caleb Sander Mateos
2025-12-15 20:09 [PATCH v5 0/6] " Caleb Sander Mateos
2025-12-16 5:21 ` [syzbot ci] " syzbot ci
2025-12-18 1:24 ` Caleb Sander Mateos
2025-12-18 2:44 [PATCH v6 0/6] " Caleb Sander Mateos
2025-12-18 8:01 ` [syzbot ci] " syzbot ci
2025-12-22 20:19 ` Caleb Sander Mateos