public inbox for io-uring@vger.kernel.org
* [PATCH 0/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL
@ 2026-02-13  3:21 Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 1/3] io_uring: add IORING_OP_URING_CMD128 to opcode checks Caleb Sander Mateos
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Caleb Sander Mateos @ 2026-02-13  3:21 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel, Caleb Sander Mateos

Currently, creating an io_uring with IORING_SETUP_IOPOLL requires all
requests issued to it to support iopoll. This prevents, for example,
using ublk zero-copy together with IORING_SETUP_IOPOLL, as ublk
zero-copy buffer registrations are performed using a uring_cmd. There's
no technical reason why these non-iopoll uring_cmds can't be supported.
They will either complete synchronously or via an external mechanism
that calls io_uring_cmd_done(), so they don't need to be polled.

Allow uring_cmd requests to be issued to IORING_SETUP_IOPOLL io_urings
even if their files don't implement ->uring_cmd_iopoll().

The first commit fixes a few bugs where IORING_OP_URING_CMD128 isn't
treated the same as IORING_OP_URING_CMD in the iopoll and provided
buffer handling.

The last commit removes an unnecessary IO_URING_F_IOPOLL check in
nvme_dev_uring_cmd(), since NVMe admin passthru commands can now be
issued to IORING_SETUP_IOPOLL io_urings.
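
For illustration, here is a rough liburing-based sketch (untested; the
ublk cmd_op and the ublksrv_io_cmd field carrying the buffer index are
from memory and should be checked against <linux/ublk_cmd.h>) of the
kind of request this series allows on an IORING_SETUP_IOPOLL ring:

#include <liburing.h>
#include <linux/ublk_cmd.h>
#include <string.h>

/* Sketch: register a ublk zero-copy buffer via uring_cmd on an iopoll
 * ring. The ublk char device doesn't implement ->uring_cmd_iopoll(),
 * so this was rejected with -EOPNOTSUPP before this series.
 */
static int ublk_register_zc_buf(struct io_uring *ring, int ublk_char_fd,
				__u16 q_id, __u16 tag, unsigned int buf_index)
{
	/* ring was created with io_uring_queue_init(depth, ring,
	 * IORING_SETUP_IOPOLL) and is also used for polled block I/O
	 */
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct ublksrv_io_cmd *cmd;

	if (!sqe)
		return -EAGAIN;
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = ublk_char_fd;
	sqe->cmd_op = UBLK_U_IO_REGISTER_IO_BUF;
	cmd = (struct ublksrv_io_cmd *)sqe->cmd;
	cmd->q_id = q_id;
	cmd->tag = tag;
	cmd->addr = buf_index;	/* assumption: fixed buffer index in ->addr */
	return io_uring_submit(ring);
}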

Caleb Sander Mateos (3):
  io_uring: add IORING_OP_URING_CMD128 to opcode checks
  io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL
  nvme: remove nvme_dev_uring_cmd() IO_URING_F_IOPOLL check

 drivers/nvme/host/ioctl.c |  4 ----
 io_uring/io_uring.c       |  4 +++-
 io_uring/io_uring.h       |  6 ++++++
 io_uring/kbuf.c           |  2 +-
 io_uring/rw.c             |  4 ++--
 io_uring/uring_cmd.c      | 11 +++++------
 6 files changed, 17 insertions(+), 14 deletions(-)

-- 
2.45.2



* [PATCH 1/3] io_uring: add IORING_OP_URING_CMD128 to opcode checks
  2026-02-13  3:21 [PATCH 0/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
@ 2026-02-13  3:21 ` Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 3/3] nvme: remove nvme_dev_uring_cmd() IO_URING_F_IOPOLL check Caleb Sander Mateos
  2 siblings, 0 replies; 6+ messages in thread
From: Caleb Sander Mateos @ 2026-02-13  3:21 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel, Caleb Sander Mateos

io_should_commit(), io_uring_classic_poll(), and io_do_iopoll() compare
struct io_kiocb's opcode against IORING_OP_URING_CMD to implement
special treatment for uring_cmds. The recently added opcode
IORING_OP_URING_CMD128 is meant to be equivalent to IORING_OP_URING_CMD,
so treat it the same way in these functions.

Fixes: 1cba30bf9fdd ("io_uring: add support for IORING_SETUP_SQE_MIXED")
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
 io_uring/io_uring.h | 6 ++++++
 io_uring/kbuf.c     | 2 +-
 io_uring/rw.c       | 4 ++--
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 503663d6fd6d..0fa844faf287 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -528,10 +528,16 @@ static inline bool io_file_can_poll(struct io_kiocb *req)
 		return true;
 	}
 	return false;
 }
 
+static inline bool io_is_uring_cmd(const struct io_kiocb *req)
+{
+	return req->opcode == IORING_OP_URING_CMD ||
+	       req->opcode == IORING_OP_URING_CMD128;
+}
+
 static inline ktime_t io_get_time(struct io_ring_ctx *ctx)
 {
 	if (ctx->clockid == CLOCK_MONOTONIC)
 		return ktime_get();
 
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 67d4fe576473..dae5b4ab3819 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -169,11 +169,11 @@ static bool io_should_commit(struct io_kiocb *req, unsigned int issue_flags)
 	*/
 	if (issue_flags & IO_URING_F_UNLOCKED)
 		return true;
 
 	/* uring_cmd commits kbuf upfront, no need to auto-commit */
-	if (!io_file_can_poll(req) && req->opcode != IORING_OP_URING_CMD)
+	if (!io_file_can_poll(req) && !io_is_uring_cmd(req))
 		return true;
 	return false;
 }
 
 static struct io_br_sel io_ring_buffer_select(struct io_kiocb *req, size_t *len,
diff --git a/io_uring/rw.c b/io_uring/rw.c
index b3971171c342..1a5f262734e8 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1252,11 +1252,11 @@ void io_rw_fail(struct io_kiocb *req)
 static int io_uring_classic_poll(struct io_kiocb *req, struct io_comp_batch *iob,
 				unsigned int poll_flags)
 {
 	struct file *file = req->file;
 
-	if (req->opcode == IORING_OP_URING_CMD) {
+	if (io_is_uring_cmd(req)) {
 		struct io_uring_cmd *ioucmd;
 
 		ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
 		return file->f_op->uring_cmd_iopoll(ioucmd, iob, poll_flags);
 	} else {
@@ -1378,11 +1378,11 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			continue;
 		list_del(&req->iopoll_node);
 		wq_list_add_tail(&req->comp_list, &ctx->submit_state.compl_reqs);
 		nr_events++;
 		req->cqe.flags = io_put_kbuf(req, req->cqe.res, NULL);
-		if (req->opcode != IORING_OP_URING_CMD)
+		if (!io_is_uring_cmd(req))
 			io_req_rw_cleanup(req, 0);
 	}
 	if (nr_events)
 		__io_submit_flush_completions(ctx);
 	return nr_events;
-- 
2.45.2



* [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL
  2026-02-13  3:21 [PATCH 0/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 1/3] io_uring: add IORING_OP_URING_CMD128 to opcode checks Caleb Sander Mateos
@ 2026-02-13  3:21 ` Caleb Sander Mateos
  2026-02-18  2:18   ` Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 3/3] nvme: remove nvme_dev_uring_cmd() IO_URING_F_IOPOLL check Caleb Sander Mateos
  2 siblings, 1 reply; 6+ messages in thread
From: Caleb Sander Mateos @ 2026-02-13  3:21 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel, Caleb Sander Mateos

Currently, creating an io_uring with IORING_SETUP_IOPOLL requires all
requests issued to it to support iopoll. This prevents, for example,
using ublk zero-copy together with IORING_SETUP_IOPOLL, as ublk
zero-copy buffer registrations are performed using a uring_cmd. There's
no technical reason why these non-iopoll uring_cmds can't be supported.
They will either complete synchronously or via an external mechanism
that calls io_uring_cmd_done(), so they don't need to be polled.

Allow uring_cmd requests to be issued to IORING_SETUP_IOPOLL io_urings
even if their files don't implement ->uring_cmd_iopoll(). For these
uring_cmd requests, skip initializing struct io_kiocb's iopoll fields,
don't insert the request into iopoll_list, and take the
io_req_complete_defer() or io_req_task_work_add() path in
__io_uring_cmd_done() instead of setting the iopoll_completed flag. Also
allow io_uring_cmd_mark_cancelable() to be called on these uring_cmds.
Assert that io_uring_cmd_mark_cancelable() is only called on
non-IORING_SETUP_IOPOLL io_urings or uring_cmds to files that don't
implement ->uring_cmd_iopoll().

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
 io_uring/io_uring.c  |  4 +++-
 io_uring/uring_cmd.c | 11 +++++------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c45af82dda3d..4e68a5168894 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1417,11 +1417,13 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret == IOU_ISSUE_SKIP_COMPLETE) {
 		ret = 0;
 
 		/* If the op doesn't have a file, we're not polling for it */
-		if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
+		if ((req->ctx->flags & IORING_SETUP_IOPOLL) &&
+		    def->iopoll_queue && (!io_is_uring_cmd(req) ||
+					  req->file->f_op->uring_cmd_iopoll))
 			io_iopoll_req_issued(req, issue_flags);
 	}
 	return ret;
 }
 
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index ee7b49f47cb5..8df52e8f1c1b 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -108,12 +108,12 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 	 * Doing cancelations on IOPOLL requests are not supported. Both
 	 * because they can't get canceled in the block stack, but also
 	 * because iopoll completion data overlaps with the hash_node used
 	 * for tracking.
 	 */
-	if (ctx->flags & IORING_SETUP_IOPOLL)
-		return;
+	WARN_ON_ONCE(ctx->flags & IORING_SETUP_IOPOLL &&
+		     req->file->f_op->uring_cmd_iopoll);
 
 	if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) {
 		cmd->flags |= IORING_URING_CMD_CANCELABLE;
 		io_ring_submit_lock(ctx, issue_flags);
 		hlist_add_head(&req->hash_node, &ctx->cancelable_uring_cmd);
@@ -165,11 +165,12 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
 		if (req->ctx->flags & IORING_SETUP_CQE_MIXED)
 			req->cqe.flags |= IORING_CQE_F_32;
 		io_req_set_cqe32_extra(req, res2, 0);
 	}
 	io_req_uring_cleanup(req, issue_flags);
-	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
+	if (req->ctx->flags & IORING_SETUP_IOPOLL &&
+	    req->file->f_op->uring_cmd_iopoll) {
 		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
 		smp_store_release(&req->iopoll_completed, 1);
 	} else if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
 		if (WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED))
 			return;
@@ -255,13 +256,11 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 		issue_flags |= IO_URING_F_SQE128;
 	if (ctx->flags & (IORING_SETUP_CQE32 | IORING_SETUP_CQE_MIXED))
 		issue_flags |= IO_URING_F_CQE32;
 	if (io_is_compat(ctx))
 		issue_flags |= IO_URING_F_COMPAT;
-	if (ctx->flags & IORING_SETUP_IOPOLL) {
-		if (!file->f_op->uring_cmd_iopoll)
-			return -EOPNOTSUPP;
+	if (ctx->flags & IORING_SETUP_IOPOLL && file->f_op->uring_cmd_iopoll) {
 		issue_flags |= IO_URING_F_IOPOLL;
 		req->iopoll_completed = 0;
 		if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
 			/* make sure every req only blocks once */
 			req->flags &= ~REQ_F_IOPOLL_STATE;
-- 
2.45.2



* [PATCH 3/3] nvme: remove nvme_dev_uring_cmd() IO_URING_F_IOPOLL check
  2026-02-13  3:21 [PATCH 0/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 1/3] io_uring: add IORING_OP_URING_CMD128 to opcode checks Caleb Sander Mateos
  2026-02-13  3:21 ` [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
@ 2026-02-13  3:21 ` Caleb Sander Mateos
  2 siblings, 0 replies; 6+ messages in thread
From: Caleb Sander Mateos @ 2026-02-13  3:21 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel, Caleb Sander Mateos

nvme_dev_uring_cmd() is part of struct file_operations nvme_dev_fops,
which doesn't implement ->uring_cmd_iopoll(). So it won't be called with
issue_flags that include IO_URING_F_IOPOLL. Drop the unnecessary
IO_URING_F_IOPOLL check in nvme_dev_uring_cmd().
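
For illustration (not part of this patch), a minimal untested sketch of
an admin passthru submitted on an IORING_SETUP_IOPOLL ring, which was
rejected with -EOPNOTSUPP before this series. Error handling is
omitted; the opcode/CNS values are the usual NVMe Identify Controller
constants, and the command needs CAP_SYS_ADMIN:

#include <liburing.h>
#include <linux/nvme_ioctl.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct nvme_uring_cmd *cmd;
	void *id = aligned_alloc(4096, 4096);
	int fd = open("/dev/nvme0", O_RDONLY);

	/* NVMe passthru needs 128-byte SQEs; IOPOLL no longer rejects it */
	io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL | IORING_SETUP_SQE128);
	sqe = io_uring_get_sqe(&ring);
	memset(sqe, 0, 2 * sizeof(*sqe));	/* full 128-byte SQE */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = fd;
	sqe->cmd_op = NVME_URING_CMD_ADMIN;
	cmd = (struct nvme_uring_cmd *)sqe->cmd;
	cmd->opcode = 0x06;			/* Identify */
	cmd->cdw10 = 1;				/* CNS 1: Identify Controller */
	cmd->addr = (__u64)(uintptr_t)id;
	cmd->data_len = 4096;
	io_uring_submit(&ring);
	/* completes via deferred completion / task work, not iopoll */
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	free(id);
	return 0;
}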

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
 drivers/nvme/host/ioctl.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index fb62633ccbb0..fa489c1979db 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -783,14 +783,10 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
 int nvme_dev_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
 {
 	struct nvme_ctrl *ctrl = ioucmd->file->private_data;
 	int ret;
 
-	/* IOPOLL not supported yet */
-	if (issue_flags & IO_URING_F_IOPOLL)
-		return -EOPNOTSUPP;
-
 	ret = nvme_uring_cmd_checks(issue_flags);
 	if (ret)
 		return ret;
 
 	switch (ioucmd->cmd_op) {
-- 
2.45.2



* Re: [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL
  2026-02-13  3:21 ` [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL Caleb Sander Mateos
@ 2026-02-18  2:18   ` Caleb Sander Mateos
  2026-02-18 16:23     ` Jens Axboe
  0 siblings, 1 reply; 6+ messages in thread
From: Caleb Sander Mateos @ 2026-02-18  2:18 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel

On Thu, Feb 12, 2026 at 7:21 PM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> Currently, creating an io_uring with IORING_SETUP_IOPOLL requires all
> requests issued to it to support iopoll. This prevents, for example,
> using ublk zero-copy together with IORING_SETUP_IOPOLL, as ublk
> zero-copy buffer registrations are performed using a uring_cmd. There's
> no technical reason why these non-iopoll uring_cmds can't be supported.
> They will either complete synchronously or via an external mechanism
> that calls io_uring_cmd_done(), so they don't need to be polled.
>
> Allow uring_cmd requests to be issued to IORING_SETUP_IOPOLL io_urings
> even if their files don't implement ->uring_cmd_iopoll(). For these
> uring_cmd requests, skip initializing struct io_kiocb's iopoll fields,
> don't insert the request into iopoll_list, and take the
> io_req_complete_defer() or io_req_task_work_add() path in
> __io_uring_cmd_done() instead of setting the iopoll_completed flag. Also
> allow io_uring_cmd_mark_cancelable() to be called on these uring_cmds.
> Assert that io_uring_cmd_mark_cancelable() is only called on
> non-IORING_SETUP_IOPOLL io_urings or uring_cmds to files that don't
> implement ->uring_cmd_iopoll().
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
> ---
>  io_uring/io_uring.c  |  4 +++-
>  io_uring/uring_cmd.c | 11 +++++------
>  2 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index c45af82dda3d..4e68a5168894 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1417,11 +1417,13 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
>
>         if (ret == IOU_ISSUE_SKIP_COMPLETE) {
>                 ret = 0;
>
>                 /* If the op doesn't have a file, we're not polling for it */
> -               if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> +               if ((req->ctx->flags & IORING_SETUP_IOPOLL) &&
> +                   def->iopoll_queue && (!io_is_uring_cmd(req) ||
> +                                         req->file->f_op->uring_cmd_iopoll))
>                         io_iopoll_req_issued(req, issue_flags);
>         }
>         return ret;
>  }
>
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index ee7b49f47cb5..8df52e8f1c1b 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -108,12 +108,12 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
>          * Doing cancelations on IOPOLL requests are not supported. Both
>          * because they can't get canceled in the block stack, but also
>          * because iopoll completion data overlaps with the hash_node used
>          * for tracking.
>          */
> -       if (ctx->flags & IORING_SETUP_IOPOLL)
> -               return;
> +       WARN_ON_ONCE(ctx->flags & IORING_SETUP_IOPOLL &&
> +                    req->file->f_op->uring_cmd_iopoll);
>
>         if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) {
>                 cmd->flags |= IORING_URING_CMD_CANCELABLE;
>                 io_ring_submit_lock(ctx, issue_flags);
>                 hlist_add_head(&req->hash_node, &ctx->cancelable_uring_cmd);
> @@ -165,11 +165,12 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
>                 if (req->ctx->flags & IORING_SETUP_CQE_MIXED)
>                         req->cqe.flags |= IORING_CQE_F_32;
>                 io_req_set_cqe32_extra(req, res2, 0);
>         }
>         io_req_uring_cleanup(req, issue_flags);
> -       if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> +       if (req->ctx->flags & IORING_SETUP_IOPOLL &&
> +           req->file->f_op->uring_cmd_iopoll) {

I do worry that the pointer chasing here may be expensive; ->file and
->f_op could both be uncached. Would it make sense to add a flag to
req->flags to indicate whether a request should actually be IOPOLLed?

Thanks,
Caleb

>                 /* order with io_iopoll_req_issued() checking ->iopoll_complete */
>                 smp_store_release(&req->iopoll_completed, 1);
>         } else if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
>                 if (WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED))
>                         return;
> @@ -255,13 +256,11 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
>                 issue_flags |= IO_URING_F_SQE128;
>         if (ctx->flags & (IORING_SETUP_CQE32 | IORING_SETUP_CQE_MIXED))
>                 issue_flags |= IO_URING_F_CQE32;
>         if (io_is_compat(ctx))
>                 issue_flags |= IO_URING_F_COMPAT;
> -       if (ctx->flags & IORING_SETUP_IOPOLL) {
> -               if (!file->f_op->uring_cmd_iopoll)
> -                       return -EOPNOTSUPP;
> +       if (ctx->flags & IORING_SETUP_IOPOLL && file->f_op->uring_cmd_iopoll) {
>                 issue_flags |= IO_URING_F_IOPOLL;
>                 req->iopoll_completed = 0;
>                 if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
>                         /* make sure every req only blocks once */
>                         req->flags &= ~REQ_F_IOPOLL_STATE;
> --
> 2.45.2
>


* Re: [PATCH 2/3] io_uring/uring_cmd: allow non-iopoll cmds with IORING_SETUP_IOPOLL
  2026-02-18  2:18   ` Caleb Sander Mateos
@ 2026-02-18 16:23     ` Jens Axboe
  0 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2026-02-18 16:23 UTC (permalink / raw)
  To: Caleb Sander Mateos, Christoph Hellwig, Keith Busch,
	Sagi Grimberg
  Cc: io-uring, linux-nvme, linux-kernel

On 2/17/26 7:18 PM, Caleb Sander Mateos wrote:
> On Thu, Feb 12, 2026 at 7:21 PM Caleb Sander Mateos
> <csander@purestorage.com> wrote:
>>
>> Currently, creating an io_uring with IORING_SETUP_IOPOLL requires all
>> requests issued to it to support iopoll. This prevents, for example,
>> using ublk zero-copy together with IORING_SETUP_IOPOLL, as ublk
>> zero-copy buffer registrations are performed using a uring_cmd. There's
>> no technical reason why these non-iopoll uring_cmds can't be supported.
>> They will either complete synchronously or via an external mechanism
>> that calls io_uring_cmd_done(), so they don't need to be polled.
>>
>> Allow uring_cmd requests to be issued to IORING_SETUP_IOPOLL io_urings
>> even if their files don't implement ->uring_cmd_iopoll(). For these
>> uring_cmd requests, skip initializing struct io_kiocb's iopoll fields,
>> don't insert the request into iopoll_list, and take the
>> io_req_complete_defer() or io_req_task_work_add() path in
>> __io_uring_cmd_done() instead of setting the iopoll_completed flag. Also
>> allow io_uring_cmd_mark_cancelable() to be called on these uring_cmds.
>> Assert that io_uring_cmd_mark_cancelable() is only called on
>> non-IORING_SETUP_IOPOLL io_urings or uring_cmds to files that don't
>> implement ->uring_cmd_iopoll().
>>
>> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
>> ---
>>  io_uring/io_uring.c  |  4 +++-
>>  io_uring/uring_cmd.c | 11 +++++------
>>  2 files changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index c45af82dda3d..4e68a5168894 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1417,11 +1417,13 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
>>
>>         if (ret == IOU_ISSUE_SKIP_COMPLETE) {
>>                 ret = 0;
>>
>>                 /* If the op doesn't have a file, we're not polling for it */
>> -               if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
>> +               if ((req->ctx->flags & IORING_SETUP_IOPOLL) &&
>> +                   def->iopoll_queue && (!io_is_uring_cmd(req) ||
>> +                                         req->file->f_op->uring_cmd_iopoll))
>>                         io_iopoll_req_issued(req, issue_flags);
>>         }
>>         return ret;
>>  }
>>
>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
>> index ee7b49f47cb5..8df52e8f1c1b 100644
>> --- a/io_uring/uring_cmd.c
>> +++ b/io_uring/uring_cmd.c
>> @@ -108,12 +108,12 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
>>          * Doing cancelations on IOPOLL requests are not supported. Both
>>          * because they can't get canceled in the block stack, but also
>>          * because iopoll completion data overlaps with the hash_node used
>>          * for tracking.
>>          */
>> -       if (ctx->flags & IORING_SETUP_IOPOLL)
>> -               return;
>> +       WARN_ON_ONCE(ctx->flags & IORING_SETUP_IOPOLL &&
>> +                    req->file->f_op->uring_cmd_iopoll);
>>
>>         if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) {
>>                 cmd->flags |= IORING_URING_CMD_CANCELABLE;
>>                 io_ring_submit_lock(ctx, issue_flags);
>>                 hlist_add_head(&req->hash_node, &ctx->cancelable_uring_cmd);
>> @@ -165,11 +165,12 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
>>                 if (req->ctx->flags & IORING_SETUP_CQE_MIXED)
>>                         req->cqe.flags |= IORING_CQE_F_32;
>>                 io_req_set_cqe32_extra(req, res2, 0);
>>         }
>>         io_req_uring_cleanup(req, issue_flags);
>> -       if (req->ctx->flags & IORING_SETUP_IOPOLL) {
>> +       if (req->ctx->flags & IORING_SETUP_IOPOLL &&
>> +           req->file->f_op->uring_cmd_iopoll) {
> 
> I do worry that the pointer chasing here may be expensive; ->file and
> ->f_op could both be uncached. Would it make sense to add a flag to
> req->flags to indicate whether a request should actually be IOPOLLed?

I think adding a REQ_F flag for that, similar to what is done for NOWAIT
etc., would be a good idea.
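
Completely untested, but roughly along these lines, where
REQ_F_IOPOLL_URING_CMD is just a placeholder name for a new bit next to
the existing REQ_F_* flags:

/* io_uring_cmd(): decide once at issue time, while ->f_op is already hot */
if ((ctx->flags & IORING_SETUP_IOPOLL) && file->f_op->uring_cmd_iopoll) {
	req->flags |= REQ_F_IOPOLL_URING_CMD;
	issue_flags |= IO_URING_F_IOPOLL;
	req->iopoll_completed = 0;
}

/* __io_uring_cmd_done(): test the flag instead of chasing ->file->f_op */
if (req->flags & REQ_F_IOPOLL_URING_CMD) {
	/* order with io_iopoll_req_issued() checking ->iopoll_completed */
	smp_store_release(&req->iopoll_completed, 1);
}

/* io_issue_sqe(): the same flag gates queueing for iopoll */
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue &&
    (!io_is_uring_cmd(req) || (req->flags & REQ_F_IOPOLL_URING_CMD)))
	io_iopoll_req_issued(req, issue_flags);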

-- 
Jens Axboe
