* [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
@ 2026-01-05 21:05 Caleb Sander Mateos
2026-01-05 21:05 ` [PATCH v7 1/3] " Caleb Sander Mateos
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Caleb Sander Mateos @ 2026-01-05 21:05 UTC (permalink / raw)
To: Jens Axboe; +Cc: Joanne Koong, io-uring, linux-kernel, Caleb Sander Mateos
io_uring_enter(), __io_msg_ring_data(), and io_msg_send_fd() read
ctx->flags and ctx->submitter_task without holding the ctx's uring_lock.
This means they may race with the assignment to ctx->submitter_task and
the clearing of IORING_SETUP_R_DISABLED from ctx->flags in
io_register_enable_rings(). Ensure the correct ordering of the
ctx->flags and ctx->submitter_task memory accesses by storing to
ctx->flags using release ordering and loading it using acquire ordering.
Using release-acquire ordering for IORING_SETUP_R_DISABLED ensures the
assignment to ctx->submitter_task in io_register_enable_rings() can't
race with msg_ring's accesses, so drop the unneeded {READ,WRITE}_ONCE()
and the NULL checks.
v7:
- Split from IORING_SETUP_SINGLE_ISSUER optimization series
- Drop unnecessary submitter_task {READ,WRITE}_ONCE() and NULL checks
- Drop redundant submitter_task check in io_register_enable_rings()
- Add comments explaining need for release-acquire ordering
Caleb Sander Mateos (3):
io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
io_uring/msg_ring: drop unnecessary submitter_task checks
io_uring/register: drop io_register_enable_rings() submitter_task
check
io_uring/io_uring.c | 13 ++++++-------
io_uring/msg_ring.c | 28 ++++++++++++++--------------
io_uring/register.c | 7 ++++---
3 files changed, 24 insertions(+), 24 deletions(-)
--
2.45.2
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v7 1/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
@ 2026-01-05 21:05 ` Caleb Sander Mateos
2026-01-05 21:05 ` [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks Caleb Sander Mateos
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Caleb Sander Mateos @ 2026-01-05 21:05 UTC (permalink / raw)
To: Jens Axboe; +Cc: Joanne Koong, io-uring, linux-kernel, Caleb Sander Mateos
io_uring_enter(), __io_msg_ring_data(), and io_msg_send_fd() read
ctx->flags and ctx->submitter_task without holding the ctx's uring_lock.
This means they may race with the assignment to ctx->submitter_task and
the clearing of IORING_SETUP_R_DISABLED from ctx->flags in
io_register_enable_rings(). Ensure the correct ordering of the
ctx->flags and ctx->submitter_task memory accesses by storing to
ctx->flags using release ordering and loading it using acquire ordering.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Fixes: 4add705e4eeb ("io_uring: remove io_register_submitter")
Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
---
io_uring/io_uring.c | 6 +++++-
io_uring/msg_ring.c | 12 ++++++++++--
io_uring/register.c | 3 ++-
3 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 87a87396e940..ec27fafcb213 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3254,11 +3254,15 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
goto out;
}
ctx = file->private_data;
ret = -EBADFD;
- if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_uring_add_tctx_node() -> __io_uring_add_tctx_node_from_submit()
+ */
+ if (unlikely(smp_load_acquire(&ctx->flags) & IORING_SETUP_R_DISABLED))
goto out;
/*
* For SQ polling, the thread will do all submissions and completions.
* Just return the requested submit count, and wake the thread if
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 7063ea7964e7..87b4d306cf1b 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -123,11 +123,15 @@ static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
if (msg->src_fd || msg->flags & ~IORING_MSG_RING_FLAGS_PASS)
return -EINVAL;
if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
return -EINVAL;
- if (target_ctx->flags & IORING_SETUP_R_DISABLED)
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_msg_data_remote() -> io_msg_remote_post()
+ */
+ if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
return -EBADFD;
if (io_msg_need_remote(target_ctx))
return io_msg_data_remote(target_ctx, msg);
@@ -243,11 +247,15 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
if (msg->len)
return -EINVAL;
if (target_ctx == ctx)
return -EINVAL;
- if (target_ctx->flags & IORING_SETUP_R_DISABLED)
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_msg_fd_remote()
+ */
+ if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
return -EBADFD;
if (!msg->src_file) {
int ret = io_msg_grab_file(req, issue_flags);
if (unlikely(ret))
return ret;
diff --git a/io_uring/register.c b/io_uring/register.c
index 3d3822ff3fd9..12318c276068 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -191,11 +191,12 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
}
if (ctx->restrictions.registered)
ctx->restricted = 1;
- ctx->flags &= ~IORING_SETUP_R_DISABLED;
+ /* Keep submitter_task store before clearing IORING_SETUP_R_DISABLED */
+ smp_store_release(&ctx->flags, ctx->flags & ~IORING_SETUP_R_DISABLED);
if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
wake_up(&ctx->sq_data->wait);
return 0;
}
--
2.45.2
* [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
2026-01-05 21:05 ` [PATCH v7 1/3] " Caleb Sander Mateos
@ 2026-01-05 21:05 ` Caleb Sander Mateos
2026-01-08 4:25 ` Joanne Koong
2026-01-05 21:05 ` [PATCH v7 3/3] io_uring/register: drop io_register_enable_rings() submitter_task check Caleb Sander Mateos
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Caleb Sander Mateos @ 2026-01-05 21:05 UTC (permalink / raw)
To: Jens Axboe; +Cc: Joanne Koong, io-uring, linux-kernel, Caleb Sander Mateos
__io_msg_ring_data() checks that the target_ctx isn't
IORING_SETUP_R_DISABLED before calling io_msg_data_remote(), which calls
io_msg_remote_post(). So submitter_task can't be modified concurrently
with the read in io_msg_remote_post(). Additionally, submitter_task must
exist, as io_msg_data_remote() is only called for io_msg_need_remote(),
i.e. task_complete is set, which requires IORING_SETUP_DEFER_TASKRUN,
which in turn requires IORING_SETUP_SINGLE_ISSUER. And submitter_task is
assigned in io_uring_create() or io_register_enable_rings() before
enabling any IORING_SETUP_SINGLE_ISSUER io_ring_ctx.
Similarly, io_msg_send_fd() checks IORING_SETUP_R_DISABLED and
io_msg_need_remote() before calling io_msg_fd_remote(). submitter_task
therefore can't be modified concurrently with the read in
io_msg_fd_remote() and must be non-null.
io_register_enable_rings() can't run concurrently because it's called
from io_uring_register() -> __io_uring_register() with uring_lock held.
Thus, replace the READ_ONCE() and WRITE_ONCE() of submitter_task with
plain loads and stores. And remove the NULL checks of submitter_task in
io_msg_remote_post() and io_msg_fd_remote().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/io_uring.c | 7 +------
io_uring/msg_ring.c | 18 +++++-------------
io_uring/register.c | 2 +-
3 files changed, 7 insertions(+), 20 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ec27fafcb213..b31d88295297 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3663,17 +3663,12 @@ static __cold int io_uring_create(struct io_ctx_config *config)
ret = -EFAULT;
goto err;
}
if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
- && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
- /*
- * Unlike io_register_enable_rings(), don't need WRITE_ONCE()
- * since ctx isn't yet accessible from other tasks
- */
+ && !(ctx->flags & IORING_SETUP_R_DISABLED))
ctx->submitter_task = get_task_struct(current);
- }
file = io_uring_get_file(ctx);
if (IS_ERR(file)) {
ret = PTR_ERR(file);
goto err;
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 87b4d306cf1b..57ad0085869a 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -78,26 +78,21 @@ static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
kfree_rcu(req, rcu_head);
percpu_ref_put(&ctx->refs);
}
-static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
+static void io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
int res, u32 cflags, u64 user_data)
{
- if (!READ_ONCE(ctx->submitter_task)) {
- kfree_rcu(req, rcu_head);
- return -EOWNERDEAD;
- }
req->opcode = IORING_OP_NOP;
req->cqe.user_data = user_data;
io_req_set_res(req, res, cflags);
percpu_ref_get(&ctx->refs);
req->ctx = ctx;
req->tctx = NULL;
req->io_task_work.func = io_msg_tw_complete;
io_req_task_work_add_remote(req, IOU_F_TWQ_LAZY_WAKE);
- return 0;
}
static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
struct io_msg *msg)
{
@@ -109,12 +104,12 @@ static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
return -ENOMEM;
if (msg->flags & IORING_MSG_RING_FLAGS_PASS)
flags = msg->cqe_flags;
- return io_msg_remote_post(target_ctx, target, msg->len, flags,
- msg->user_data);
+ io_msg_remote_post(target_ctx, target, msg->len, flags, msg->user_data);
+ return 0;
}
static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
struct io_msg *msg, unsigned int issue_flags)
{
@@ -125,11 +120,11 @@ static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
return -EINVAL;
if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
return -EINVAL;
/*
* Keep IORING_SETUP_R_DISABLED check before submitter_task load
- * in io_msg_data_remote() -> io_msg_remote_post()
+ * in io_msg_data_remote() -> io_req_task_work_add_remote()
*/
if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
return -EBADFD;
if (io_msg_need_remote(target_ctx))
@@ -225,14 +220,11 @@ static void io_msg_tw_fd_complete(struct callback_head *head)
static int io_msg_fd_remote(struct io_kiocb *req)
{
struct io_ring_ctx *ctx = req->file->private_data;
struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
- struct task_struct *task = READ_ONCE(ctx->submitter_task);
-
- if (unlikely(!task))
- return -EOWNERDEAD;
+ struct task_struct *task = ctx->submitter_task;
init_task_work(&msg->tw, io_msg_tw_fd_complete);
if (task_work_add(task, &msg->tw, TWA_SIGNAL))
return -EOWNERDEAD;
diff --git a/io_uring/register.c b/io_uring/register.c
index 12318c276068..8104728af294 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -179,11 +179,11 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
{
if (!(ctx->flags & IORING_SETUP_R_DISABLED))
return -EBADFD;
if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
- WRITE_ONCE(ctx->submitter_task, get_task_struct(current));
+ ctx->submitter_task = get_task_struct(current);
/*
* Lazy activation attempts would fail if it was polled before
* submitter_task is set.
*/
if (wq_has_sleeper(&ctx->poll_wq))
--
2.45.2
* [PATCH v7 3/3] io_uring/register: drop io_register_enable_rings() submitter_task check
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
2026-01-05 21:05 ` [PATCH v7 1/3] " Caleb Sander Mateos
2026-01-05 21:05 ` [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks Caleb Sander Mateos
@ 2026-01-05 21:05 ` Caleb Sander Mateos
2026-01-08 22:10 ` Joanne Koong
2026-01-06 18:41 ` [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Gabriel Krisman Bertazi
` (2 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Caleb Sander Mateos @ 2026-01-05 21:05 UTC (permalink / raw)
To: Jens Axboe; +Cc: Joanne Koong, io-uring, linux-kernel, Caleb Sander Mateos
io_register_enable_rings() checks that the io_ring_ctx is
IORING_SETUP_R_DISABLED, which ensures submitter_task hasn't been
assigned by io_uring_create() or a previous io_register_enable_rings()
call. So drop the redundant check that submitter_task is NULL.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
io_uring/register.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/register.c b/io_uring/register.c
index 8104728af294..78434869270c 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -178,11 +178,11 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
static int io_register_enable_rings(struct io_ring_ctx *ctx)
{
if (!(ctx->flags & IORING_SETUP_R_DISABLED))
return -EBADFD;
- if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
+ if (ctx->flags & IORING_SETUP_SINGLE_ISSUER) {
ctx->submitter_task = get_task_struct(current);
/*
* Lazy activation attempts would fail if it was polled before
* submitter_task is set.
*/
--
2.45.2
* Re: [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
` (2 preceding siblings ...)
2026-01-05 21:05 ` [PATCH v7 3/3] io_uring/register: drop io_register_enable_rings() submitter_task check Caleb Sander Mateos
@ 2026-01-06 18:41 ` Gabriel Krisman Bertazi
2026-01-08 4:54 ` Hillf Danton
2026-01-12 18:34 ` Jens Axboe
5 siblings, 0 replies; 11+ messages in thread
From: Gabriel Krisman Bertazi @ 2026-01-06 18:41 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, Joanne Koong, io-uring, linux-kernel
Caleb Sander Mateos <csander@purestorage.com> writes:
> io_uring_enter(), __io_msg_ring_data(), and io_msg_send_fd() read
> ctx->flags and ctx->submitter_task without holding the ctx's uring_lock.
> This means they may race with the assignment to ctx->submitter_task and
> the clearing of IORING_SETUP_R_DISABLED from ctx->flags in
> io_register_enable_rings(). Ensure the correct ordering of the
> ctx->flags and ctx->submitter_task memory accesses by storing to
> ctx->flags using release ordering and loading it using acquire ordering.
>
> Using release-acquire ordering for IORING_SETUP_R_DISABLED ensures the
> assignment to ctx->submitter_task in io_register_enable_rings() can't
> race with msg_ring's accesses, so drop the unneeded {READ,WRITE}_ONCE()
> and the NULL checks.
Hi Caleb,
This looks good, I don't have any comments. Feel free to add:
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
Thanks,
--
Gabriel Krisman Bertazi
* Re: [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks
2026-01-05 21:05 ` [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks Caleb Sander Mateos
@ 2026-01-08 4:25 ` Joanne Koong
2026-01-08 7:06 ` Caleb Sander Mateos
0 siblings, 1 reply; 11+ messages in thread
From: Joanne Koong @ 2026-01-08 4:25 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, io-uring, linux-kernel
On Mon, Jan 5, 2026 at 1:05 PM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> __io_msg_ring_data() checks that the target_ctx isn't
> IORING_SETUP_R_DISABLED before calling io_msg_data_remote(), which calls
> io_msg_remote_post(). So submitter_task can't be modified concurrently
> with the read in io_msg_remote_post(). Additionally, submitter_task must
> exist, as io_msg_data_remote() is only called for io_msg_need_remote(),
> i.e. task_complete is set, which requires IORING_SETUP_DEFER_TASKRUN,
> which in turn requires IORING_SETUP_SINGLE_ISSUER. And submitter_task is
> assigned in io_uring_create() or io_register_enable_rings() before
> enabling any IORING_SETUP_SINGLE_ISSUER io_ring_ctx.
> Similarly, io_msg_send_fd() checks IORING_SETUP_R_DISABLED and
> io_msg_need_remote() before calling io_msg_fd_remote(). submitter_task
> therefore can't be modified concurrently with the read in
> io_msg_fd_remote() and must be non-null.
> io_register_enable_rings() can't run concurrently because it's called
> from io_uring_register() -> __io_uring_register() with uring_lock held.
> Thus, replace the READ_ONCE() and WRITE_ONCE() of submitter_task with
> plain loads and stores. And remove the NULL checks of submitter_task in
> io_msg_remote_post() and io_msg_fd_remote().
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
> ---
> io_uring/io_uring.c | 7 +------
> io_uring/msg_ring.c | 18 +++++-------------
> io_uring/register.c | 2 +-
> 3 files changed, 7 insertions(+), 20 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index ec27fafcb213..b31d88295297 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -3663,17 +3663,12 @@ static __cold int io_uring_create(struct io_ctx_config *config)
> ret = -EFAULT;
> goto err;
> }
>
> if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
> - && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
> - /*
> - * Unlike io_register_enable_rings(), don't need WRITE_ONCE()
> - * since ctx isn't yet accessible from other tasks
> - */
> + && !(ctx->flags & IORING_SETUP_R_DISABLED))
> ctx->submitter_task = get_task_struct(current);
> - }
>
> file = io_uring_get_file(ctx);
> if (IS_ERR(file)) {
> ret = PTR_ERR(file);
> goto err;
> diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
> index 87b4d306cf1b..57ad0085869a 100644
> --- a/io_uring/msg_ring.c
> +++ b/io_uring/msg_ring.c
> @@ -78,26 +78,21 @@ static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
> io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
> kfree_rcu(req, rcu_head);
> percpu_ref_put(&ctx->refs);
> }
>
> -static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> +static void io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> int res, u32 cflags, u64 user_data)
> {
> - if (!READ_ONCE(ctx->submitter_task)) {
> - kfree_rcu(req, rcu_head);
> - return -EOWNERDEAD;
> - }
> req->opcode = IORING_OP_NOP;
> req->cqe.user_data = user_data;
> io_req_set_res(req, res, cflags);
> percpu_ref_get(&ctx->refs);
> req->ctx = ctx;
> req->tctx = NULL;
> req->io_task_work.func = io_msg_tw_complete;
> io_req_task_work_add_remote(req, IOU_F_TWQ_LAZY_WAKE);
> - return 0;
> }
>
> static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> struct io_msg *msg)
> {
> @@ -109,12 +104,12 @@ static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> return -ENOMEM;
>
> if (msg->flags & IORING_MSG_RING_FLAGS_PASS)
> flags = msg->cqe_flags;
>
> - return io_msg_remote_post(target_ctx, target, msg->len, flags,
> - msg->user_data);
> + io_msg_remote_post(target_ctx, target, msg->len, flags, msg->user_data);
> + return 0;
> }
>
> static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> struct io_msg *msg, unsigned int issue_flags)
> {
> @@ -125,11 +120,11 @@ static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> return -EINVAL;
> if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
> return -EINVAL;
> /*
> * Keep IORING_SETUP_R_DISABLED check before submitter_task load
> - * in io_msg_data_remote() -> io_msg_remote_post()
> + * in io_msg_data_remote() -> io_req_task_work_add_remote()
> */
> if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
> return -EBADFD;
>
> if (io_msg_need_remote(target_ctx))
> @@ -225,14 +220,11 @@ static void io_msg_tw_fd_complete(struct callback_head *head)
>
> static int io_msg_fd_remote(struct io_kiocb *req)
> {
> struct io_ring_ctx *ctx = req->file->private_data;
> struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
> - struct task_struct *task = READ_ONCE(ctx->submitter_task);
> -
> - if (unlikely(!task))
> - return -EOWNERDEAD;
> + struct task_struct *task = ctx->submitter_task;
Is the if (!task) check here still needed? In the
io_register_enable_rings() logic I see
if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
ctx->submitter_task = get_task_struct(current);
...
}
and then a few lines below
ctx->flags &= ~IORING_SETUP_R_DISABLED;
but I'm not seeing any memory barrier stuff that prevents these from
being reordered.
In io_msg_send_fd() I see that we check "if (target_ctx->flags &
IORING_SETUP_R_DISABLED) return -EBADFD;" before calling into
io_msg_fd_remote() here but if the ctx->submitter_task assignment and
IORING_SETUP_R_DISABLED flag clearing logic are reordered, then it
seems like this opens a race condition where there could be a null ptr
crash when task_work_add() gets called below?
Thanks,
Joanne
>
> init_task_work(&msg->tw, io_msg_tw_fd_complete);
> if (task_work_add(task, &msg->tw, TWA_SIGNAL))
> return -EOWNERDEAD;
>
> diff --git a/io_uring/register.c b/io_uring/register.c
> index 12318c276068..8104728af294 100644
> --- a/io_uring/register.c
> +++ b/io_uring/register.c
> @@ -179,11 +179,11 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
> {
> if (!(ctx->flags & IORING_SETUP_R_DISABLED))
> return -EBADFD;
>
> if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
> - WRITE_ONCE(ctx->submitter_task, get_task_struct(current));
> + ctx->submitter_task = get_task_struct(current);
> /*
> * Lazy activation attempts would fail if it was polled before
> * submitter_task is set.
> */
> if (wq_has_sleeper(&ctx->poll_wq))
> --
> 2.45.2
>
* Re: [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
` (3 preceding siblings ...)
2026-01-06 18:41 ` [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Gabriel Krisman Bertazi
@ 2026-01-08 4:54 ` Hillf Danton
2026-01-12 18:34 ` Jens Axboe
5 siblings, 0 replies; 11+ messages in thread
From: Hillf Danton @ 2026-01-08 4:54 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, Joanne Koong, io-uring, linux-kernel
On Mon, 5 Jan 2026 14:05:39 -0700 Caleb Sander Mateos wrote:
> io_uring_enter(), __io_msg_ring_data(), and io_msg_send_fd() read
> ctx->flags and ctx->submitter_task without holding the ctx's uring_lock.
> This means they may race with the assignment to ctx->submitter_task and
> the clearing of IORING_SETUP_R_DISABLED from ctx->flags in
> io_register_enable_rings(). Ensure the correct ordering of the
> ctx->flags and ctx->submitter_task memory accesses by storing to
> ctx->flags using release ordering and loading it using acquire ordering.
>
Given that no race is eliminated without locking, can you specify the observable
effects if the ordering is not enforced?
* Re: [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks
2026-01-08 4:25 ` Joanne Koong
@ 2026-01-08 7:06 ` Caleb Sander Mateos
2026-01-08 22:04 ` Joanne Koong
0 siblings, 1 reply; 11+ messages in thread
From: Caleb Sander Mateos @ 2026-01-08 7:06 UTC (permalink / raw)
To: Joanne Koong; +Cc: Jens Axboe, io-uring, linux-kernel
On Wed, Jan 7, 2026 at 8:25 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Mon, Jan 5, 2026 at 1:05 PM Caleb Sander Mateos
> <csander@purestorage.com> wrote:
> >
> > __io_msg_ring_data() checks that the target_ctx isn't
> > IORING_SETUP_R_DISABLED before calling io_msg_data_remote(), which calls
> > io_msg_remote_post(). So submitter_task can't be modified concurrently
> > with the read in io_msg_remote_post(). Additionally, submitter_task must
> > exist, as io_msg_data_remote() is only called for io_msg_need_remote(),
> > i.e. task_complete is set, which requires IORING_SETUP_DEFER_TASKRUN,
> > which in turn requires IORING_SETUP_SINGLE_ISSUER. And submitter_task is
> > assigned in io_uring_create() or io_register_enable_rings() before
> > enabling any IORING_SETUP_SINGLE_ISSUER io_ring_ctx.
> > Similarly, io_msg_send_fd() checks IORING_SETUP_R_DISABLED and
> > io_msg_need_remote() before calling io_msg_fd_remote(). submitter_task
> > therefore can't be modified concurrently with the read in
> > io_msg_fd_remote() and must be non-null.
> > io_register_enable_rings() can't run concurrently because it's called
> > from io_uring_register() -> __io_uring_register() with uring_lock held.
> > Thus, replace the READ_ONCE() and WRITE_ONCE() of submitter_task with
> > plain loads and stores. And remove the NULL checks of submitter_task in
> > io_msg_remote_post() and io_msg_fd_remote().
> >
> > Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
> > ---
> > io_uring/io_uring.c | 7 +------
> > io_uring/msg_ring.c | 18 +++++-------------
> > io_uring/register.c | 2 +-
> > 3 files changed, 7 insertions(+), 20 deletions(-)
> >
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index ec27fafcb213..b31d88295297 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -3663,17 +3663,12 @@ static __cold int io_uring_create(struct io_ctx_config *config)
> > ret = -EFAULT;
> > goto err;
> > }
> >
> > if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
> > - && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
> > - /*
> > - * Unlike io_register_enable_rings(), don't need WRITE_ONCE()
> > - * since ctx isn't yet accessible from other tasks
> > - */
> > + && !(ctx->flags & IORING_SETUP_R_DISABLED))
> > ctx->submitter_task = get_task_struct(current);
> > - }
> >
> > file = io_uring_get_file(ctx);
> > if (IS_ERR(file)) {
> > ret = PTR_ERR(file);
> > goto err;
> > diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
> > index 87b4d306cf1b..57ad0085869a 100644
> > --- a/io_uring/msg_ring.c
> > +++ b/io_uring/msg_ring.c
> > @@ -78,26 +78,21 @@ static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
> > io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
> > kfree_rcu(req, rcu_head);
> > percpu_ref_put(&ctx->refs);
> > }
> >
> > -static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > +static void io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > int res, u32 cflags, u64 user_data)
> > {
> > - if (!READ_ONCE(ctx->submitter_task)) {
> > - kfree_rcu(req, rcu_head);
> > - return -EOWNERDEAD;
> > - }
> > req->opcode = IORING_OP_NOP;
> > req->cqe.user_data = user_data;
> > io_req_set_res(req, res, cflags);
> > percpu_ref_get(&ctx->refs);
> > req->ctx = ctx;
> > req->tctx = NULL;
> > req->io_task_work.func = io_msg_tw_complete;
> > io_req_task_work_add_remote(req, IOU_F_TWQ_LAZY_WAKE);
> > - return 0;
> > }
> >
> > static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> > struct io_msg *msg)
> > {
> > @@ -109,12 +104,12 @@ static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> > return -ENOMEM;
> >
> > if (msg->flags & IORING_MSG_RING_FLAGS_PASS)
> > flags = msg->cqe_flags;
> >
> > - return io_msg_remote_post(target_ctx, target, msg->len, flags,
> > - msg->user_data);
> > + io_msg_remote_post(target_ctx, target, msg->len, flags, msg->user_data);
> > + return 0;
> > }
> >
> > static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> > struct io_msg *msg, unsigned int issue_flags)
> > {
> > @@ -125,11 +120,11 @@ static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> > return -EINVAL;
> > if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
> > return -EINVAL;
> > /*
> > * Keep IORING_SETUP_R_DISABLED check before submitter_task load
> > - * in io_msg_data_remote() -> io_msg_remote_post()
> > + * in io_msg_data_remote() -> io_req_task_work_add_remote()
> > */
> > if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
> > return -EBADFD;
> >
> > if (io_msg_need_remote(target_ctx))
> > @@ -225,14 +220,11 @@ static void io_msg_tw_fd_complete(struct callback_head *head)
> >
> > static int io_msg_fd_remote(struct io_kiocb *req)
> > {
> > struct io_ring_ctx *ctx = req->file->private_data;
> > struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
> > - struct task_struct *task = READ_ONCE(ctx->submitter_task);
> > -
> > - if (unlikely(!task))
> > - return -EOWNERDEAD;
> > + struct task_struct *task = ctx->submitter_task;
>
> Is the if !task check here still needed? in the
> io_register_enable_rings() logic I see
>
> if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
> ctx->submitter_task = get_task_struct(current);
> ...
> }
> and then a few lines below
> ctx->flags &= ~IORING_SETUP_R_DISABLED;
>
> but I'm not seeing any memory barrier stuff that prevents these from
> being reordered.
>
> In io_msg_send_fd() I see that we check "if (target_ctx->flags &
> IORING_SETUP_R_DISABLED) return -EBADFD;" before calling into
> io_msg_fd_remote() here but if the ctx->submitter_task assignment and
> IORING_SETUP_R_DISABLED flag clearing logic are reordered, then it
> seems like this opens a race condition where there could be a null ptr
> crash when task_work_add() gets called below?
Shouldn't patch 1's switch to use smp_store_release() for the clearing
of IORING_SETUP_R_DISABLED and smp_load_acquire() for the check of
IORING_SETUP_R_DISABLED in io_msg_send_fd() ensure the necessary
ordering? Or am I missing something?
Thanks,
Caleb
* Re: [PATCH v7 2/3] io_uring/msg_ring: drop unnecessary submitter_task checks
2026-01-08 7:06 ` Caleb Sander Mateos
@ 2026-01-08 22:04 ` Joanne Koong
0 siblings, 0 replies; 11+ messages in thread
From: Joanne Koong @ 2026-01-08 22:04 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, io-uring, linux-kernel
On Wed, Jan 7, 2026 at 11:07 PM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> On Wed, Jan 7, 2026 at 8:25 PM Joanne Koong <joannelkoong@gmail.com> wrote:
> >
> > On Mon, Jan 5, 2026 at 1:05 PM Caleb Sander Mateos
> > <csander@purestorage.com> wrote:
> > >
> > > __io_msg_ring_data() checks that the target_ctx isn't
> > > IORING_SETUP_R_DISABLED before calling io_msg_data_remote(), which calls
> > > io_msg_remote_post(). So submitter_task can't be modified concurrently
> > > with the read in io_msg_remote_post(). Additionally, submitter_task must
> > > exist, as io_msg_data_remote() is only called for io_msg_need_remote(),
> > > i.e. task_complete is set, which requires IORING_SETUP_DEFER_TASKRUN,
> > > which in turn requires IORING_SETUP_SINGLE_ISSUER. And submitter_task is
> > > assigned in io_uring_create() or io_register_enable_rings() before
> > > enabling any IORING_SETUP_SINGLE_ISSUER io_ring_ctx.
> > > Similarly, io_msg_send_fd() checks IORING_SETUP_R_DISABLED and
> > > io_msg_need_remote() before calling io_msg_fd_remote(). submitter_task
> > > therefore can't be modified concurrently with the read in
> > > io_msg_fd_remote() and must be non-null.
> > > io_register_enable_rings() can't run concurrently because it's called
> > > from io_uring_register() -> __io_uring_register() with uring_lock held.
> > > Thus, replace the READ_ONCE() and WRITE_ONCE() of submitter_task with
> > > plain loads and stores. And remove the NULL checks of submitter_task in
> > > io_msg_remote_post() and io_msg_fd_remote().
> > >
> > > Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
> > > ---
> > > io_uring/io_uring.c | 7 +------
> > > io_uring/msg_ring.c | 18 +++++-------------
> > > io_uring/register.c | 2 +-
> > > 3 files changed, 7 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > index ec27fafcb213..b31d88295297 100644
> > > --- a/io_uring/io_uring.c
> > > +++ b/io_uring/io_uring.c
> > > @@ -3663,17 +3663,12 @@ static __cold int io_uring_create(struct io_ctx_config *config)
> > > ret = -EFAULT;
> > > goto err;
> > > }
> > >
> > > if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
> > > - && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
> > > - /*
> > > - * Unlike io_register_enable_rings(), don't need WRITE_ONCE()
> > > - * since ctx isn't yet accessible from other tasks
> > > - */
> > > + && !(ctx->flags & IORING_SETUP_R_DISABLED))
> > > ctx->submitter_task = get_task_struct(current);
> > > - }
> > >
> > > file = io_uring_get_file(ctx);
> > > if (IS_ERR(file)) {
> > > ret = PTR_ERR(file);
> > > goto err;
> > > diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
> > > index 87b4d306cf1b..57ad0085869a 100644
> > > --- a/io_uring/msg_ring.c
> > > +++ b/io_uring/msg_ring.c
> > > @@ -78,26 +78,21 @@ static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
> > > io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
> > > kfree_rcu(req, rcu_head);
> > > percpu_ref_put(&ctx->refs);
> > > }
> > >
> > > -static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > +static void io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > int res, u32 cflags, u64 user_data)
> > > {
> > > - if (!READ_ONCE(ctx->submitter_task)) {
> > > - kfree_rcu(req, rcu_head);
> > > - return -EOWNERDEAD;
> > > - }
> > > req->opcode = IORING_OP_NOP;
> > > req->cqe.user_data = user_data;
> > > io_req_set_res(req, res, cflags);
> > > percpu_ref_get(&ctx->refs);
> > > req->ctx = ctx;
> > > req->tctx = NULL;
> > > req->io_task_work.func = io_msg_tw_complete;
> > > io_req_task_work_add_remote(req, IOU_F_TWQ_LAZY_WAKE);
> > > - return 0;
> > > }
> > >
> > > static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> > > struct io_msg *msg)
> > > {
> > > @@ -109,12 +104,12 @@ static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
> > > return -ENOMEM;
> > >
> > > if (msg->flags & IORING_MSG_RING_FLAGS_PASS)
> > > flags = msg->cqe_flags;
> > >
> > > - return io_msg_remote_post(target_ctx, target, msg->len, flags,
> > > - msg->user_data);
> > > + io_msg_remote_post(target_ctx, target, msg->len, flags, msg->user_data);
> > > + return 0;
> > > }
> > >
> > > static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> > > struct io_msg *msg, unsigned int issue_flags)
> > > {
> > > @@ -125,11 +120,11 @@ static int __io_msg_ring_data(struct io_ring_ctx *target_ctx,
> > > return -EINVAL;
> > > if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
> > > return -EINVAL;
> > > /*
> > > * Keep IORING_SETUP_R_DISABLED check before submitter_task load
> > > - * in io_msg_data_remote() -> io_msg_remote_post()
> > > + * in io_msg_data_remote() -> io_req_task_work_add_remote()
> > > */
> > > if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
> > > return -EBADFD;
> > >
> > > if (io_msg_need_remote(target_ctx))
> > > @@ -225,14 +220,11 @@ static void io_msg_tw_fd_complete(struct callback_head *head)
> > >
> > > static int io_msg_fd_remote(struct io_kiocb *req)
> > > {
> > > struct io_ring_ctx *ctx = req->file->private_data;
> > > struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
> > > - struct task_struct *task = READ_ONCE(ctx->submitter_task);
> > > -
> > > - if (unlikely(!task))
> > > - return -EOWNERDEAD;
> > > + struct task_struct *task = ctx->submitter_task;
> >
> > Is the if (!task) check here still needed? In the
> > io_register_enable_rings() logic I see
> >
> > if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
> > ctx->submitter_task = get_task_struct(current);
> > ...
> > }
> > and then a few lines below
> > ctx->flags &= ~IORING_SETUP_R_DISABLED;
> >
> > but I'm not seeing any memory barrier stuff that prevents these from
> > being reordered.
> >
> > In io_msg_send_fd() I see that we check "if (target_ctx->flags &
> > IORING_SETUP_R_DISABLED) return -EBADFD;" before calling into
> > io_msg_fd_remote() here but if the ctx->submitter_task assignment and
> > IORING_SETUP_R_DISABLED flag clearing logic are reordered, then it
> > seems like this opens a race condition where there could be a null ptr
> > crash when task_work_add() gets called below?
>
> Shouldn't patch 1's switch to use smp_store_release() for the clearing
> of IORING_SETUP_R_DISABLED and smp_load_acquire() for the check of
> IORING_SETUP_R_DISABLED in io_msg_send_fd() ensure the necessary
> ordering? Or am I missing something?
>
Nice, yes, that addresses my concern. I had skipped your changes in patch 1.
Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
Thanks,
Joanne
> Thanks,
> Caleb
* Re: [PATCH v7 3/3] io_uring/register: drop io_register_enable_rings() submitter_task check
2026-01-05 21:05 ` [PATCH v7 3/3] io_uring/register: drop io_register_enable_rings() submitter_task check Caleb Sander Mateos
@ 2026-01-08 22:10 ` Joanne Koong
0 siblings, 0 replies; 11+ messages in thread
From: Joanne Koong @ 2026-01-08 22:10 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, io-uring, linux-kernel
On Mon, Jan 5, 2026 at 1:05 PM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> io_register_enable_rings() checks that the io_ring_ctx is
> IORING_SETUP_R_DISABLED, which ensures submitter_task hasn't been
> assigned by io_uring_create() or a previous io_register_enable_rings()
> call. So drop the redundant check that submitter_task is NULL.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
> ---
> io_uring/register.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/io_uring/register.c b/io_uring/register.c
> index 8104728af294..78434869270c 100644
> --- a/io_uring/register.c
> +++ b/io_uring/register.c
> @@ -178,11 +178,11 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
> static int io_register_enable_rings(struct io_ring_ctx *ctx)
> {
> if (!(ctx->flags & IORING_SETUP_R_DISABLED))
> return -EBADFD;
>
> - if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
> + if (ctx->flags & IORING_SETUP_SINGLE_ISSUER) {
> ctx->submitter_task = get_task_struct(current);
> /*
> * Lazy activation attempts would fail if it was polled before
> * submitter_task is set.
> */
> --
> 2.45.2
>
Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
* Re: [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
2026-01-05 21:05 [PATCH v7 0/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED Caleb Sander Mateos
` (4 preceding siblings ...)
2026-01-08 4:54 ` Hillf Danton
@ 2026-01-12 18:34 ` Jens Axboe
5 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2026-01-12 18:34 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Joanne Koong, io-uring, linux-kernel
On Mon, 05 Jan 2026 14:05:39 -0700, Caleb Sander Mateos wrote:
> io_uring_enter(), __io_msg_ring_data(), and io_msg_send_fd() read
> ctx->flags and ctx->submitter_task without holding the ctx's uring_lock.
> This means they may race with the assignment to ctx->submitter_task and
> the clearing of IORING_SETUP_R_DISABLED from ctx->flags in
> io_register_enable_rings(). Ensure the correct ordering of the
> ctx->flags and ctx->submitter_task memory accesses by storing to
> ctx->flags using release ordering and loading it using acquire ordering.
>
> [...]
Applied, thanks!
[1/3] io_uring: use release-acquire ordering for IORING_SETUP_R_DISABLED
commit: 7a8737e1132ff07ca225aa7a4008f87319b5b1ca
[2/3] io_uring/msg_ring: drop unnecessary submitter_task checks
commit: bcd4c95737d15fa1a85152b8862dec146b11c214
[3/3] io_uring/register: drop io_register_enable_rings() submitter_task check
commit: 130a82760718997806a618490f5b7ab06932bd9c
Best regards,
--
Jens Axboe