* [PATCH v2 1/7] io_uring: fix overflow resched cqe reordering
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
Leaving the CQ critical section in the middle of an overflow flush can
cause CQE reordering: the cached CQ pointers get refilled during the
flush, so any new CQE emitters that run while the lock is dropped are not
forced into io_cqe_cache_refill() and can post ahead of the overflow
entries that are still waiting to be flushed.
Fixes: eac2ca2d682f9 ("io_uring: check if we need to reschedule during overflow flush")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9a9b8d35349b..2fa84912b053 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -654,6 +654,7 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
* to care for a non-real case.
*/
if (need_resched()) {
+ ctx->cqe_sentinel = ctx->cqe_cached;
io_cq_unlock_post(ctx);
mutex_unlock(&ctx->uring_lock);
cond_resched();
--
2.49.0
* [PATCH v2 2/7] io_uring: init overflow entry before passing to tracing
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
trace_io_uring_cqe_overflow() doesn't dereference the overflow entry
pointer, but it's still a good idea to initialise the entry beforehand in
case some BPF program tries to poke into it. That will also help with
simplifying the tracing helper in the future.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2fa84912b053..d112f103135b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -732,6 +732,16 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
ocq_size += sizeof(struct io_uring_cqe);
ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
+ if (ocqe) {
+ ocqe->cqe.user_data = user_data;
+ ocqe->cqe.res = res;
+ ocqe->cqe.flags = cflags;
+ if (is_cqe32) {
+ ocqe->cqe.big_cqe[0] = extra1;
+ ocqe->cqe.big_cqe[1] = extra2;
+ }
+ }
+
trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
if (!ocqe) {
struct io_rings *r = ctx->rings;
@@ -748,14 +758,6 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
if (list_empty(&ctx->cq_overflow_list)) {
set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
-
- }
- ocqe->cqe.user_data = user_data;
- ocqe->cqe.res = res;
- ocqe->cqe.flags = cflags;
- if (is_cqe32) {
- ocqe->cqe.big_cqe[0] = extra1;
- ocqe->cqe.big_cqe[1] = extra2;
}
list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
return true;
--
2.49.0
* [PATCH v2 3/7] io_uring: open code io_req_cqe_overflow()
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
A preparation patch, just open code io_req_cqe_overflow().
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d112f103135b..fff9812f53c0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -763,14 +763,6 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
return true;
}
-static void io_req_cqe_overflow(struct io_kiocb *req)
-{
- io_cqring_event_overflow(req->ctx, req->cqe.user_data,
- req->cqe.res, req->cqe.flags,
- req->big_cqe.extra1, req->big_cqe.extra2);
- memset(&req->big_cqe, 0, sizeof(req->big_cqe));
-}
-
/*
* writes to the cq entry need to come after reading head; the
* control dependency is enough as we're using WRITE_ONCE to
@@ -1445,11 +1437,19 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
unlikely(!io_fill_cqe_req(ctx, req))) {
if (ctx->lockless_cq) {
spin_lock(&ctx->completion_lock);
- io_req_cqe_overflow(req);
+ io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+ req->cqe.res, req->cqe.flags,
+ req->big_cqe.extra1,
+ req->big_cqe.extra2);
spin_unlock(&ctx->completion_lock);
} else {
- io_req_cqe_overflow(req);
+ io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+ req->cqe.res, req->cqe.flags,
+ req->big_cqe.extra1,
+ req->big_cqe.extra2);
}
+
+ memset(&req->big_cqe, 0, sizeof(req->big_cqe));
}
}
__io_cq_unlock_post(ctx);
--
2.49.0
* [PATCH v2 4/7] io_uring: split __io_cqring_overflow_flush()
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
Extract a helper function from __io_cqring_overflow_flush() and keep the
CQ locking and lock dropping in the caller.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 57 +++++++++++++++++++++++++--------------------
1 file changed, 32 insertions(+), 25 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fff9812f53c0..a2a4e1319033 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -617,56 +617,63 @@ static void io_cq_unlock_post(struct io_ring_ctx *ctx)
io_commit_cqring_flush(ctx);
}
-static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
+static bool io_flush_overflow_list(struct io_ring_ctx *ctx, bool dying)
{
size_t cqe_size = sizeof(struct io_uring_cqe);
- lockdep_assert_held(&ctx->uring_lock);
-
- /* don't abort if we're dying, entries must get freed */
- if (!dying && __io_cqring_events(ctx) == ctx->cq_entries)
- return;
-
if (ctx->flags & IORING_SETUP_CQE32)
cqe_size <<= 1;
- io_cq_lock(ctx);
while (!list_empty(&ctx->cq_overflow_list)) {
struct io_uring_cqe *cqe;
struct io_overflow_cqe *ocqe;
+ /*
+ * For silly syzbot cases that deliberately overflow by huge
+ * amounts, check if we need to resched and drop and
+ * reacquire the locks if so. Nothing real would ever hit this.
+ * Ideally we'd have a non-posting unlock for this, but hard
+ * to care for a non-real case.
+ */
+ if (need_resched())
+ return false;
+
ocqe = list_first_entry(&ctx->cq_overflow_list,
struct io_overflow_cqe, list);
if (!dying) {
if (!io_get_cqe_overflow(ctx, &cqe, true))
- break;
+ return true;
memcpy(cqe, &ocqe->cqe, cqe_size);
}
list_del(&ocqe->list);
kfree(ocqe);
-
- /*
- * For silly syzbot cases that deliberately overflow by huge
- * amounts, check if we need to resched and drop and
- * reacquire the locks if so. Nothing real would ever hit this.
- * Ideally we'd have a non-posting unlock for this, but hard
- * to care for a non-real case.
- */
- if (need_resched()) {
- ctx->cqe_sentinel = ctx->cqe_cached;
- io_cq_unlock_post(ctx);
- mutex_unlock(&ctx->uring_lock);
- cond_resched();
- mutex_lock(&ctx->uring_lock);
- io_cq_lock(ctx);
- }
}
if (list_empty(&ctx->cq_overflow_list)) {
clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
}
+ return true;
+}
+
+static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
+{
+ lockdep_assert_held(&ctx->uring_lock);
+
+ /* don't abort if we're dying, entries must get freed */
+ if (!dying && __io_cqring_events(ctx) == ctx->cq_entries)
+ return;
+
+ io_cq_lock(ctx);
+ while (!io_flush_overflow_list(ctx, dying)) {
+ ctx->cqe_sentinel = ctx->cqe_cached;
+ io_cq_unlock_post(ctx);
+ mutex_unlock(&ctx->uring_lock);
+ cond_resched();
+ mutex_lock(&ctx->uring_lock);
+ io_cq_lock(ctx);
+ }
io_cq_unlock_post(ctx);
}
--
2.49.0
* [PATCH v2 5/7] io_uring: separate lock for protecting overflow list
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
Introduce ->overflow_lock to protect all overflow-related ctx fields. With
that, callers are still allowed, but no longer always required, to hold
the completion lock while posting an overflow entry.
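
As a side note, not part of the change itself: the kernel's scope-based
guard(spinlock)() helper (built on <linux/cleanup.h>) used in the diff
below takes the lock where it is declared and releases it automatically
when the enclosing scope ends, so no explicit unlock is needed on any
return path. A minimal sketch of the pattern (the function name is made
up for illustration):

	static void overflow_list_walk_example(struct io_ring_ctx *ctx)
	{
		guard(spinlock)(&ctx->overflow_lock);

		if (list_empty(&ctx->cq_overflow_list))
			return;	/* lock dropped automatically */
		/* ... walk ctx->cq_overflow_list under the lock ... */
	}			/* lock dropped here as well */
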
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
include/linux/io_uring_types.h | 1 +
io_uring/io_uring.c | 32 ++++++++++++--------------------
2 files changed, 13 insertions(+), 20 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 00dbd7cd0e7d..e11ab9d19877 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -370,6 +370,7 @@ struct io_ring_ctx {
spinlock_t completion_lock;
struct list_head cq_overflow_list;
+ spinlock_t overflow_lock;
struct hlist_head waitid_list;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a2a4e1319033..86b39a01a136 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -350,6 +350,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
init_waitqueue_head(&ctx->cq_wait);
init_waitqueue_head(&ctx->poll_wq);
spin_lock_init(&ctx->completion_lock);
+ spin_lock_init(&ctx->overflow_lock);
raw_spin_lock_init(&ctx->timeout_lock);
INIT_WQ_LIST(&ctx->iopoll_list);
INIT_LIST_HEAD(&ctx->defer_list);
@@ -624,6 +625,8 @@ static bool io_flush_overflow_list(struct io_ring_ctx *ctx, bool dying)
if (ctx->flags & IORING_SETUP_CQE32)
cqe_size <<= 1;
+ guard(spinlock)(&ctx->overflow_lock);
+
while (!list_empty(&ctx->cq_overflow_list)) {
struct io_uring_cqe *cqe;
struct io_overflow_cqe *ocqe;
@@ -733,8 +736,6 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
size_t ocq_size = sizeof(struct io_overflow_cqe);
bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
- lockdep_assert_held(&ctx->completion_lock);
-
if (is_cqe32)
ocq_size += sizeof(struct io_uring_cqe);
@@ -750,6 +751,9 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
}
trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
+
+ guard(spinlock)(&ctx->overflow_lock);
+
if (!ocqe) {
struct io_rings *r = ctx->rings;
@@ -849,11 +853,9 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
lockdep_assert_held(&ctx->uring_lock);
lockdep_assert(ctx->lockless_cq);
- if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
- spin_lock(&ctx->completion_lock);
+ if (!io_fill_cqe_aux(ctx, user_data, res, cflags))
io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
- spin_unlock(&ctx->completion_lock);
- }
+
ctx->submit_state.cq_flush = true;
}
@@ -1442,20 +1444,10 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
*/
if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
unlikely(!io_fill_cqe_req(ctx, req))) {
- if (ctx->lockless_cq) {
- spin_lock(&ctx->completion_lock);
- io_cqring_event_overflow(req->ctx, req->cqe.user_data,
- req->cqe.res, req->cqe.flags,
- req->big_cqe.extra1,
- req->big_cqe.extra2);
- spin_unlock(&ctx->completion_lock);
- } else {
- io_cqring_event_overflow(req->ctx, req->cqe.user_data,
- req->cqe.res, req->cqe.flags,
- req->big_cqe.extra1,
- req->big_cqe.extra2);
- }
-
+ io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+ req->cqe.res, req->cqe.flags,
+ req->big_cqe.extra1,
+ req->big_cqe.extra2);
memset(&req->big_cqe, 0, sizeof(req->big_cqe));
}
}
--
2.49.0
* [PATCH v2 6/7] io_uring: avoid GFP_ATOMIC for overflows if possible
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
Rings with DEFER_TASKRUN enabled don't hold the completion lock or any
other spinlock while posting CQEs, so when an overflow happens they can
use non-atomic allocations.
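
A rough sketch of the rule being followed (overflow_gfp() is a made-up
helper name, the patch open-codes the choice at each call site):
GFP_KERNEL may sleep and so is only safe when no spinlock is held,
otherwise GFP_ATOMIC is required.

	static gfp_t overflow_gfp(struct io_ring_ctx *ctx)
	{
		/* lockless_cq rings post CQEs without holding a spinlock */
		return ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
	}
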
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 86b39a01a136..0e0b3e75010c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -730,7 +730,8 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
}
static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
- s32 res, u32 cflags, u64 extra1, u64 extra2)
+ s32 res, u32 cflags, u64 extra1, u64 extra2,
+ gfp_t gfp)
{
struct io_overflow_cqe *ocqe;
size_t ocq_size = sizeof(struct io_overflow_cqe);
@@ -739,7 +740,7 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
if (is_cqe32)
ocq_size += sizeof(struct io_uring_cqe);
- ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
+ ocqe = kmalloc(ocq_size, gfp | __GFP_ACCOUNT);
if (ocqe) {
ocqe->cqe.user_data = user_data;
ocqe->cqe.res = res;
@@ -839,7 +840,8 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
io_cq_lock(ctx);
filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
if (!filled)
- filled = io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
+ filled = io_cqring_event_overflow(ctx, user_data, res, cflags,
+ 0, 0, GFP_ATOMIC);
io_cq_unlock_post(ctx);
return filled;
}
@@ -854,7 +856,8 @@ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
lockdep_assert(ctx->lockless_cq);
if (!io_fill_cqe_aux(ctx, user_data, res, cflags))
- io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
+ io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0,
+ GFP_KERNEL);
ctx->submit_state.cq_flush = true;
}
@@ -1444,10 +1447,13 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
*/
if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
unlikely(!io_fill_cqe_req(ctx, req))) {
+ gfp_t gfp = ctx->lockless_cq ? GFP_KERNEL : GFP_ATOMIC;
+
io_cqring_event_overflow(req->ctx, req->cqe.user_data,
req->cqe.res, req->cqe.flags,
req->big_cqe.extra1,
- req->big_cqe.extra2);
+ req->big_cqe.extra2,
+ gfp);
memset(&req->big_cqe, 0, sizeof(req->big_cqe));
}
}
--
2.49.0
* [PATCH v2 7/7] io_uring: add lockdep warning for overflow posting
From: Pavel Begunkov @ 2025-05-17 12:27 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
io_cqring_event_overflow() must be called in the same CQ protection
section as the preceding io_get_cqe(), otherwise overflowed and normal
CQEs can get out of order. That exact invariant is hard to check with a
debug assertion, but we can verify that io_cqring_event_overflow() is
called with the CQ locked, which should be good enough to catch most
cases of misuse.
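
For illustration only (not the helper the patch actually uses, which is
io_lockdep_assert_cq_locked()): a plain lockdep_assert_held() documents
and enforces this kind of locking contract, warning when
CONFIG_PROVE_LOCKING is enabled and the lock isn't held, and compiling
away otherwise.

	/* made-up example, not from the patch */
	static void must_hold_overflow_lock(struct io_ring_ctx *ctx)
	{
		lockdep_assert_held(&ctx->overflow_lock);
		/* ctx->cq_overflow_list may be touched safely here */
	}
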
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/io_uring.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0e0b3e75010c..7f6b1fd37606 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -737,6 +737,8 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
size_t ocq_size = sizeof(struct io_overflow_cqe);
bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
+ io_lockdep_assert_cq_locked(ctx);
+
if (is_cqe32)
ocq_size += sizeof(struct io_uring_cqe);
--
2.49.0
* Re: (subset) [PATCH v2 0/7] simplify overflow CQE handling
From: Jens Axboe @ 2025-05-21 13:02 UTC (permalink / raw)
To: io-uring, Pavel Begunkov
On Sat, 17 May 2025 13:27:36 +0100, Pavel Begunkov wrote:
> Some improvements for overflow posting, like replacing GFP_ATOMIC
> with GFP_KERNEL in a few cases, plus debug assertions for invariant
> violations.
>
> v2: nest another lock to get rid of conditional locking
>
> Pavel Begunkov (7):
> io_uring: fix overflow resched cqe reordering
> io_uring: init overflow entry before passing to tracing
> io_uring: open code io_req_cqe_overflow()
> io_uring: split __io_cqring_overflow_flush()
> io_uring: separate lock for protecting overflow list
> io_uring: avoid GFP_ATOMIC for overflows if possible
> io_uring: add lockdep warning for overflow posting
>
> [...]
Applied, thanks!
[1/7] io_uring: fix overflow resched cqe reordering
commit: a7d755ed9ce9738af3db602eb29d32774a180bc7
Best regards,
--
Jens Axboe