From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>,
io-uring@vger.kernel.org, Keith Busch <kbusch@kernel.org>
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 2/2] nvme/io_uring: optimize IOPOLL completions for local ring context
Date: Fri, 16 Jan 2026 15:46:38 +0800
Message-ID: <20260116074641.665422-3-ming.lei@redhat.com>
In-Reply-To: <20260116074641.665422-1-ming.lei@redhat.com>
When multiple io_uring rings poll on the same NVMe queue, one ring can
find completions belonging to another ring. The current code always
punts such completions to task_work, which adds overhead for the common
single-ring case.

This patch passes the polling io_ring_ctx through io_comp_batch's new
poll_ctx field: in io_do_iopoll(), the polling ring's context is stored
in iob.poll_ctx before calling the iopoll callbacks.

In nvme_uring_cmd_end_io(), we now compare iob->poll_ctx with the
request's owning io_ring_ctx (via io_uring_cmd_ctx_handle()). If they
match (local context), we complete inline with io_uring_cmd_done32();
if they differ (remote context), or iob is NULL (non-iopoll path), we
punt to task_work as before.

This optimization eliminates the task_work scheduling overhead for the
common case where a ring polls and finds its own completions.
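For reference, a condensed and purely illustrative sketch of the resulting
flow (the actual changes are in the hunks below; handing &iob to the
->iopoll()/->uring_cmd_iopoll() callbacks is pre-existing code):

/* io_uring/rw.c: the polling ring tags the batch with its own context */
int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
{
        DEFINE_IO_COMP_BATCH(iob);
        ...
        iob.poll_ctx = ctx;
        /* &iob is handed to the driver's poll callback from here on */
}

/* drivers/nvme/host/ioctl.c: end_io checks which ring found the CQE */
static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
                                                struct io_comp_batch *iob)
{
        struct io_uring_cmd *ioucmd = req->end_io_data;
        ...
        if (blk_rq_is_poll(req) && iob &&
            iob->poll_ctx == io_uring_cmd_ctx_handle(ioucmd)) {
                /* local ring polled its own request: complete inline */
        } else {
                /* remote ring or non-poll completion: punt to task_work */
        }
        return RQ_END_IO_FREE;
}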
A ~10% IOPS improvement is observed with the following benchmark:
fio/t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -O0 -P1 -u1 -n1 /dev/ng0n1
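For context, the benchmark drives NVMe passthrough reads from a single
IOPOLL ring. A rough userspace sketch of that setup follows; it loosely
mirrors fio's t/io_uring passthrough mode, and the device path, sizes and
omitted error handling are illustrative only, not part of this patch:

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <liburing.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
        struct io_uring_params p = { };
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        struct nvme_uring_cmd *cmd;
        void *buf;
        int fd, nsid;

        /* passthrough needs big SQEs/CQEs; IOPOLL selects the polled path */
        p.flags = IORING_SETUP_IOPOLL | IORING_SETUP_SQE128 | IORING_SETUP_CQE32;
        io_uring_queue_init_params(128, &ring, &p);

        fd = open("/dev/ng0n1", O_RDONLY);
        nsid = ioctl(fd, NVME_IOCTL_ID);
        posix_memalign(&buf, 4096, 4096);

        sqe = io_uring_get_sqe(&ring);
        sqe->opcode = IORING_OP_URING_CMD;
        sqe->fd = fd;
        sqe->cmd_op = NVME_URING_CMD_IO;

        /* the NVMe command is carried in the extended SQE area */
        cmd = (struct nvme_uring_cmd *)sqe->cmd;
        memset(cmd, 0, sizeof(*cmd));
        cmd->opcode = 0x02;                     /* NVMe read */
        cmd->nsid = nsid;
        cmd->addr = (uintptr_t)buf;
        cmd->data_len = 4096;
        cmd->cdw12 = (4096 >> 9) - 1;           /* 0-based LBA count, 512B LBAs assumed */

        io_uring_submit(&ring);
        /*
         * Waiting on an IOPOLL ring spins in io_do_iopoll(), which can now
         * complete its own passthrough requests inline.
         */
        io_uring_wait_cqe(&ring, &cqe);
        io_uring_cqe_seen(&ring, cqe);
        return 0;
}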
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/nvme/host/ioctl.c | 20 +++++++++++++-------
include/linux/blkdev.h | 1 +
io_uring/rw.c | 6 ++++++
3 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index e45ac0ca174e..fb62633ccbb0 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -426,14 +426,20 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
pdu->result = le64_to_cpu(nvme_req(req)->result.u64);
/*
- * IOPOLL could potentially complete this request directly, but
- * if multiple rings are polling on the same queue, then it's possible
- * for one ring to find completions for another ring. Punting the
- * completion via task_work will always direct it to the right
- * location, rather than potentially complete requests for ringA
- * under iopoll invocations from ringB.
+ * For IOPOLL, check if this completion is happening in the context
+ * of the same io_ring that owns the request (local context). If so,
+ * we can complete inline without task_work overhead. Otherwise, we
+ * must punt to task_work to ensure completion happens in the correct
+ * ring's context.
*/
- io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
+ if (blk_rq_is_poll(req) && iob &&
+ iob->poll_ctx == io_uring_cmd_ctx_handle(ioucmd)) {
+ if (pdu->bio)
+ blk_rq_unmap_user(pdu->bio);
+ io_uring_cmd_done32(ioucmd, pdu->status, pdu->result, 0);
+ } else {
+ io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
+ }
return RQ_END_IO_FREE;
}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 438c4946b6e5..251e0f538c4c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1822,6 +1822,7 @@ struct io_comp_batch {
struct rq_list req_list;
bool need_ts;
void (*complete)(struct io_comp_batch *);
+ void *poll_ctx;
};
static inline bool blk_atomic_write_start_sect_aligned(sector_t sector,
diff --git a/io_uring/rw.c b/io_uring/rw.c
index c33c533a267e..4c81a5a89089 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1321,6 +1321,12 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
struct io_kiocb *req, *tmp;
int nr_events = 0;
+ /*
+ * Store the polling io_ring_ctx so drivers can detect if they're
+ * completing a request in the same ring context that's polling.
+ */
+ iob.poll_ctx = ctx;
+
/*
* Only spin for completions if we don't have multiple devices hanging
* off our complete list.
--
2.47.0