[PATCH] io_uring/uring_cmd: skip inline completion cleanup if unlocked
From: Jens Axboe @ 2026-05-06 11:04 UTC
To: io-uring
If the call path to __io_uring_cmd_done() is not locked, then we
cannot recycle the uring_cmd to our allocation cache. Check for
that and skip it, and let the normal locked completion flushing
do the cleanup.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
This effectively defeats proper cache recycling for uring_cmd opcodes;
with the fix, it's working fine again.
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 42be1be5b132..35e2aa8b9446 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -166,7 +166,9 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
req->cqe.flags |= IORING_CQE_F_32;
io_req_set_cqe32_extra(req, res2, 0);
}
- io_req_uring_cleanup(req, issue_flags);
+ /* defer cleanup if not locked, otherwise cache recyling is skipped */
+ if (!(issue_flags & IO_URING_F_UNLOCKED))
+ io_req_uring_cleanup(req, issue_flags);
if (req->flags & REQ_F_IOPOLL) {
/* order with io_iopoll_req_issued() checking ->iopoll_complete */
smp_store_release(&req->iopoll_completed, 1);
@@ -211,6 +213,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
ac = io_uring_alloc_async_data(&req->ctx->cmd_cache, req);
if (!ac)
return -ENOMEM;
+ req->flags |= REQ_F_NEED_CLEANUP;
ioucmd->sqe = sqe;
return 0;
}
--
Jens Axboe
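The decision this patch makes in the hot path can be modeled in isolation. The sketch below is a user-space simplification, not kernel code: the names mirror the kernel's `IO_URING_F_UNLOCKED` flag and `io_req_uring_cleanup()`, but the counters and the `_model` suffix are hypothetical, added only to make the locked-vs-unlocked branch observable.

```c
/*
 * Simplified user-space model of the completion-path decision in the
 * patch above. Returning async data to the per-ctx cache is only safe
 * while the ctx lock is held; an unlocked completion must leave the
 * request for the normal locked flush to clean up.
 */
#include <assert.h>

#define IO_URING_F_UNLOCKED (1u << 0) /* issue context does not hold the ctx lock */

static int cache_hits; /* requests recycled straight into the cache */
static int deferred;   /* requests left for the locked flush path */

static void io_req_uring_cleanup_model(unsigned int issue_flags)
{
	if (issue_flags & IO_URING_F_UNLOCKED) {
		deferred++;   /* skip: locked completion flushing cleans up later */
		return;
	}
	cache_hits++;         /* safe to recycle under the ctx lock */
}
```

Only completions arriving with the lock held feed the cache; everything else is counted as deferred, which is the behavior the hunk in `__io_uring_cmd_done()` encodes.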
Re: [PATCH] io_uring/uring_cmd: skip inline completion cleanup if unlocked
From: Caleb Sander Mateos @ 2026-05-06 16:14 UTC
To: Jens Axboe; +Cc: io-uring
On Wed, May 6, 2026 at 4:04 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> If the call path to __io_uring_cmd_done() is not locked, then we
> cannot recycle the uring_cmd to our allocation cache. Check for
> that and skip it, and let the normal locked completion flushing
> do the cleanup.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>
> ---
>
> This effectively defeats proper cache recycling for uring_cmd opcodes;
> with the fix, it's working fine again.
>
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index 42be1be5b132..35e2aa8b9446 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -166,7 +166,9 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
> req->cqe.flags |= IORING_CQE_F_32;
> io_req_set_cqe32_extra(req, res2, 0);
> }
> - io_req_uring_cleanup(req, issue_flags);
> + /* defer cleanup if not locked, otherwise cache recyling is skipped */
"recycling"?
> + if (!(issue_flags & IO_URING_F_UNLOCKED))
> + io_req_uring_cleanup(req, issue_flags);
Doesn't io_req_uring_cleanup() already check this?
Best,
Caleb
> if (req->flags & REQ_F_IOPOLL) {
> /* order with io_iopoll_req_issued() checking ->iopoll_complete */
> smp_store_release(&req->iopoll_completed, 1);
> @@ -211,6 +213,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
> ac = io_uring_alloc_async_data(&req->ctx->cmd_cache, req);
> if (!ac)
> return -ENOMEM;
> + req->flags |= REQ_F_NEED_CLEANUP;
> ioucmd->sqe = sqe;
> return 0;
> }
>
> --
> Jens Axboe
>
>
Re: [PATCH] io_uring/uring_cmd: skip inline completion cleanup if unlocked
From: Jens Axboe @ 2026-05-06 16:24 UTC
To: Caleb Sander Mateos; +Cc: io-uring
On 5/6/26 10:14 AM, Caleb Sander Mateos wrote:
> On Wed, May 6, 2026 at 4:04 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> If the call path to __io_uring_cmd_done() is not locked, then we
>> cannot recycle the uring_cmd to our allocation cache. Check for
>> that and skip it, and let the normal locked completion flushing
>> do the cleanup.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>
>> ---
>>
>> This effectively defeats proper cache recycling for uring_cmd opcodes;
>> with the fix, it's working fine again.
>>
>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
>> index 42be1be5b132..35e2aa8b9446 100644
>> --- a/io_uring/uring_cmd.c
>> +++ b/io_uring/uring_cmd.c
>> @@ -166,7 +166,9 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
>> req->cqe.flags |= IORING_CQE_F_32;
>> io_req_set_cqe32_extra(req, res2, 0);
>> }
>> - io_req_uring_cleanup(req, issue_flags);
>> + /* defer cleanup if not locked, otherwise cache recyling is skipped */
>
> "recycling"?
>
>> + if (!(issue_flags & IO_URING_F_UNLOCKED))
>> + io_req_uring_cleanup(req, issue_flags);
>
> Doesn't io_req_uring_cleanup() already check this?
True, it does. I think all we need is:
>> @@ -211,6 +213,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>> ac = io_uring_alloc_async_data(&req->ctx->cmd_cache, req);
>> if (!ac)
>> return -ENOMEM;
>> + req->flags |= REQ_F_NEED_CLEANUP;
>> ioucmd->sqe = sqe;
>> return 0;
>> }
to ensure it's called later too. I'll update it, thanks!
--
Jens Axboe
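Why setting `REQ_F_NEED_CLEANUP` at prep time is sufficient can also be sketched in isolation. This is a hypothetical user-space model, not the kernel implementation: `prep`, `complete`, and `flush` are stand-ins for io_uring's prep, inline-completion, and generic cleanup stages, and the `struct req_model` fields are invented for the sketch. The point it illustrates is that cleanup runs exactly once on either path: inline when the completion holds the lock, or later via the flag when it does not.

```c
/*
 * Hypothetical model of the one-shot cleanup guarantee: the prep stage
 * marks the request, the inline completion cleans up only when locked
 * (clearing the mark), and the generic flush picks up whatever is still
 * marked. Either path cleans up exactly once, never twice.
 */
#include <assert.h>

#define REQ_F_NEED_CLEANUP  (1u << 0)
#define IO_URING_F_UNLOCKED (1u << 1)

struct req_model {
	unsigned int flags;
	int cleaned; /* how many times cleanup ran for this request */
};

/* prep: allocating async data marks the request as needing cleanup */
static void prep(struct req_model *req)
{
	req->flags |= REQ_F_NEED_CLEANUP;
}

/* inline completion: cleans up (and clears the mark) only when locked */
static void complete(struct req_model *req, unsigned int issue_flags)
{
	if (!(issue_flags & IO_URING_F_UNLOCKED)) {
		req->cleaned++;
		req->flags &= ~REQ_F_NEED_CLEANUP;
	}
}

/* generic flush: handles anything still marked as needing cleanup */
static void flush(struct req_model *req)
{
	if (req->flags & REQ_F_NEED_CLEANUP) {
		req->cleaned++;
		req->flags &= ~REQ_F_NEED_CLEANUP;
	}
}
```

Because the mark is cleared whenever cleanup runs, the locked inline path and the deferred flush path are mutually exclusive, which is why the explicit unlocked check in the first hunk turned out to be unnecessary.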