public inbox for io-uring@vger.kernel.org
* [PATCH v4 0/3] nvme_map_user_request() cleanup
@ 2025-03-28 15:46 Caleb Sander Mateos
From: Caleb Sander Mateos @ 2025-03-28 15:46 UTC
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The first commit removes a WARN_ON_ONCE() checking userspace values.
The last two move code out of nvme_map_user_request() that belongs better
in its callers, and move the fixed buffer import before going async.
As discussed in [1], this allows an NVMe passthru operation submitted at
the same time as a ublk zero-copy buffer unregister operation to succeed
even if the initial issue goes async. This can improve performance for
userspace applications that submit the operations together like this and
fall back to a slow path on failure. This is an alternate approach to [2],
which moved the fixed buffer import to the io_uring layer.
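
For illustration, a rough sketch of the submission pattern in question
(not part of this series; it assumes a liburing ring set up with
IORING_SETUP_SQE128, elides the struct nvme_uring_cmd payload and the
ublk unregister payload, and the ring/fd/buffer variables are
hypothetical):

	struct io_uring_sqe *sqe;

	/* SQE 1: NVMe passthru uring_cmd reading from a fixed buffer */
	sqe = io_uring_get_sqe(&ring);
	sqe->fd = nvme_char_fd;			/* /dev/ngXnY char device */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->cmd_op = NVME_URING_CMD_IO;
	sqe->uring_cmd_flags = IORING_URING_CMD_FIXED;
	sqe->buf_index = buf_idx;		/* buffer registered by ublk */
	/* fill the SQE128 command area with a struct nvme_uring_cmd */

	/* SQE 2: unregister that same buffer on the ublk char device */
	sqe = io_uring_get_sqe(&ring);
	sqe->fd = ublk_char_fd;			/* /dev/ublkcN char device */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->cmd_op = UBLK_U_IO_UNREGISTER_IO_BUF;
	/* fill the command area with the buffer-unregister payload */

	/* both SQEs go in one submission; the passthru may still go
	 * async if the first issue returns -EAGAIN */
	io_uring_submit(&ring);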

There will likely be conflicts with the parameter cleanup series Keith
posted last month in [3].

The series is based on for-6.15/io_uring, with commit 00817f0f1c45
("nvme-ioctl: fix leaked requests on mapping error") cherry-picked.

[1]: https://lore.kernel.org/io-uring/20250321184819.3847386-1-csander@purestorage.com/T/#u
[2]: https://lore.kernel.org/io-uring/20250321184819.3847386-4-csander@purestorage.com/
[3]: https://lore.kernel.org/all/20250224182128.2042061-1-kbusch@meta.com/T/#u

v4:
- Preserve the order of blk_mq_free_request() and nvme_passthru_end()
- Add Reviewed-by tags

v3: Move the fixed buffer import before allocating a blk-mq request

v2: Fix iov_iter value passed to nvme_map_user_request()

Caleb Sander Mateos (3):
  nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer
  nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request()
  nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()

 drivers/nvme/host/ioctl.c | 68 +++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 31 deletions(-)

-- 
2.45.2


* [PATCH v4 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer
@ 2025-03-28 15:46 ` Caleb Sander Mateos
From: Caleb Sander Mateos @ 2025-03-28 15:46 UTC
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The vectorized io_uring NVMe passthru opcodes don't yet support fixed
buffers. But since userspace can trigger this condition based on the
io_uring SQE parameters, it shouldn't cause a kernel warning.
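
In other words, the check validates userspace input rather than a kernel
invariant. A minimal sketch of the distinction being applied (not the
driver code itself; the !req check is a made-up example of an invariant):

	/* userspace-controlled: the flags come straight from the SQE,
	 * so hitting this branch is not a kernel bug */
	if (flags & NVME_IOCTL_VEC)
		return -EINVAL;		/* reject quietly */

	/* kernel invariant: this could only happen via a driver bug,
	 * so a one-time warning is appropriate */
	if (WARN_ON_ONCE(!req))
		return -EIO;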

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Fixes: 23fd22e55b76 ("nvme: wire up fixed buffer support for nvme passthrough")
---
 drivers/nvme/host/ioctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index a35ff018da74..0634e24eac97 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -140,11 +140,11 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 
 	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
 		struct iov_iter iter;
 
 		/* fixedbufs is only for non-vectored io */
-		if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
+		if (flags & NVME_IOCTL_VEC) {
 			ret = -EINVAL;
 			goto out;
 		}
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
 				rq_data_dir(req), &iter, ioucmd,
-- 
2.45.2


* [PATCH v4 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request()
@ 2025-03-28 15:46 ` Caleb Sander Mateos
From: Caleb Sander Mateos @ 2025-03-28 15:46 UTC
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The callers of nvme_map_user_request() (nvme_submit_user_cmd() and
nvme_uring_cmd_io()) allocate the request, so have them free it if
nvme_map_user_request() fails.
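
The resulting ownership rule, as a sketch (alloc_user_request() and
map_user_request() are shorthand for the real nvme_*() functions, and
the surrounding code is elided):

	req = alloc_user_request(q);		/* caller allocates */
	if (IS_ERR(req))
		return PTR_ERR(req);

	ret = map_user_request(req);		/* helper no longer frees req */
	if (ret)
		goto out_free_req;

	/* ... map succeeded: execute the request ... */
	return ret;

out_free_req:
	blk_mq_free_request(req);		/* caller frees what it allocated */
	return ret;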

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/ioctl.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 0634e24eac97..42dfd29ed39e 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -127,41 +127,39 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 	int ret;
 
 	if (!nvme_ctrl_sgl_supported(ctrl))
 		dev_warn_once(ctrl->device, "using unchecked data buffer\n");
 	if (has_metadata) {
-		if (!supports_metadata) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (!supports_metadata)
+			return -EINVAL;
+
 		if (!nvme_ctrl_meta_sgl_supported(ctrl))
 			dev_warn_once(ctrl->device,
 				      "using unchecked metadata buffer\n");
 	}
 
 	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
 		struct iov_iter iter;
 
 		/* fixedbufs is only for non-vectored io */
-		if (flags & NVME_IOCTL_VEC) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (flags & NVME_IOCTL_VEC)
+			return -EINVAL;
+
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
 				rq_data_dir(req), &iter, ioucmd,
 				iou_issue_flags);
 		if (ret < 0)
-			goto out;
+			return ret;
 		ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
 	} else {
 		ret = blk_rq_map_user_io(req, NULL, nvme_to_user_ptr(ubuffer),
 				bufflen, GFP_KERNEL, flags & NVME_IOCTL_VEC, 0,
 				0, rq_data_dir(req));
 	}
 
 	if (ret)
-		goto out;
+		return ret;
 
 	bio = req->bio;
 	if (bdev)
 		bio_set_dev(bio, bdev);
 
@@ -174,12 +172,10 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 	return ret;
 
 out_unmap:
 	if (bio)
 		blk_rq_unmap_user(bio);
-out:
-	blk_mq_free_request(req);
 	return ret;
 }
 
 static int nvme_submit_user_cmd(struct request_queue *q,
 		struct nvme_command *cmd, u64 ubuffer, unsigned bufflen,
@@ -200,11 +196,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	req->timeout = timeout;
 	if (ubuffer && bufflen) {
 		ret = nvme_map_user_request(req, ubuffer, bufflen, meta_buffer,
 				meta_len, NULL, flags, 0);
 		if (ret)
-			return ret;
+			goto out_free_req;
 	}
 
 	bio = req->bio;
 	ctrl = nvme_req(req)->ctrl;
 
@@ -216,11 +212,14 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		blk_rq_unmap_user(bio);
 	blk_mq_free_request(req);
 
 	if (effects)
 		nvme_passthru_end(ctrl, ns, effects, cmd, ret);
+	return ret;
 
+out_free_req:
+	blk_mq_free_request(req);
 	return ret;
 }
 
 static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
 {
@@ -520,20 +519,24 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (d.data_len) {
 		ret = nvme_map_user_request(req, d.addr,
 			d.data_len, nvme_to_user_ptr(d.metadata),
 			d.metadata_len, ioucmd, vec, issue_flags);
 		if (ret)
-			return ret;
+			goto out_free_req;
 	}
 
 	/* to free bio on completion, as req->bio will be null at that time */
 	pdu->bio = req->bio;
 	pdu->req = req;
 	req->end_io_data = ioucmd;
 	req->end_io = nvme_uring_cmd_end_io;
 	blk_execute_rq_nowait(req, false);
 	return -EIOCBQUEUED;
+
+out_free_req:
+	blk_mq_free_request(req);
+	return ret;
 }
 
 static bool is_ctrl_ioctl(unsigned int cmd)
 {
 	if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
-- 
2.45.2


* [PATCH v4 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()
@ 2025-03-28 15:46 ` Caleb Sander Mateos
From: Caleb Sander Mateos @ 2025-03-28 15:46 UTC
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

nvme_map_user_request() is called from both nvme_submit_user_cmd() and
nvme_uring_cmd_io(). But the ioucmd branch is only applicable to
nvme_uring_cmd_io(). Move it to nvme_uring_cmd_io() and just pass the
resulting iov_iter to nvme_map_user_request().

For NVMe passthru operations with fixed buffers, the fixed buffer lookup
happens in io_uring_cmd_import_fixed(). But nvme_uring_cmd_io() can
return -EAGAIN first from nvme_alloc_user_request() if all tags in the
tag set are in use. This ordering difference is observable when using
UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If
the NVMe passthru operation is followed by UBLK_U_IO_UNREGISTER_IO_BUF
to unregister the fixed buffer and the NVMe passthru goes async, the
fixed buffer lookup will fail because it happens after the unregister.

Userspace should not depend on the order in which io_uring issues SQEs
submitted in parallel, but it may try submitting the SQEs together and
fall back on a slow path if the fixed buffer lookup fails. To make the
fast path more likely, do the import before nvme_alloc_user_request().
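
Schematically, the observable difference is the timeline below (a sketch
of the scenario from [1]; step 3 of the new order assumes the buffer
resolved by the first import remains pinned across the io-wq retry, as
discussed there):

	/*
	 * Two SQEs in one submission: NVMe passthru (fixed buffer) +
	 * UBLK_U_IO_UNREGISTER_IO_BUF, with all blk-mq tags in use.
	 *
	 * Old order (import after request allocation):
	 *   1. passthru: nvme_alloc_user_request() -> -EAGAIN, punt to io-wq
	 *   2. unregister: buffer leaves the fixed buffer table
	 *   3. passthru retry: fixed buffer lookup fails -> slow fallback
	 *
	 * New order (import before request allocation):
	 *   1. passthru: io_uring_cmd_import_fixed() resolves the buffer
	 *   2. unregister: buffer leaves the fixed buffer table
	 *   3. passthru retry: proceeds with the already-resolved buffer
	 */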

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/ioctl.c | 45 +++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 42dfd29ed39e..400c3df0e58f 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -112,12 +112,11 @@ static struct request *nvme_alloc_user_request(struct request_queue *q,
 	return req;
 }
 
 static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
-		struct io_uring_cmd *ioucmd, unsigned int flags,
-		unsigned int iou_issue_flags)
+		struct iov_iter *iter, unsigned int flags)
 {
 	struct request_queue *q = req->q;
 	struct nvme_ns *ns = q->queuedata;
 	struct block_device *bdev = ns ? ns->disk->part0 : NULL;
 	bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
@@ -135,28 +134,16 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		if (!nvme_ctrl_meta_sgl_supported(ctrl))
 			dev_warn_once(ctrl->device,
 				      "using unchecked metadata buffer\n");
 	}
 
-	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
-		struct iov_iter iter;
-
-		/* fixedbufs is only for non-vectored io */
-		if (flags & NVME_IOCTL_VEC)
-			return -EINVAL;
-
-		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
-				rq_data_dir(req), &iter, ioucmd,
-				iou_issue_flags);
-		if (ret < 0)
-			return ret;
-		ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
-	} else {
+	if (iter)
+		ret = blk_rq_map_user_iov(q, req, NULL, iter, GFP_KERNEL);
+	else
 		ret = blk_rq_map_user_io(req, NULL, nvme_to_user_ptr(ubuffer),
 				bufflen, GFP_KERNEL, flags & NVME_IOCTL_VEC, 0,
 				0, rq_data_dir(req));
-	}
 
 	if (ret)
 		return ret;
 
 	bio = req->bio;
@@ -194,11 +181,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		return PTR_ERR(req);
 
 	req->timeout = timeout;
 	if (ubuffer && bufflen) {
 		ret = nvme_map_user_request(req, ubuffer, bufflen, meta_buffer,
-				meta_len, NULL, flags, 0);
+				meta_len, NULL, flags);
 		if (ret)
 			goto out_free_req;
 	}
 
 	bio = req->bio;
@@ -467,10 +454,12 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
 	const struct nvme_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
 	struct request_queue *q = ns ? ns->queue : ctrl->admin_q;
 	struct nvme_uring_data d;
 	struct nvme_command c;
+	struct iov_iter iter;
+	struct iov_iter *map_iter = NULL;
 	struct request *req;
 	blk_opf_t rq_flags = REQ_ALLOC_CACHE;
 	blk_mq_req_flags_t blk_flags = 0;
 	int ret;
 
@@ -502,10 +491,24 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	d.addr = READ_ONCE(cmd->addr);
 	d.data_len = READ_ONCE(cmd->data_len);
 	d.metadata_len = READ_ONCE(cmd->metadata_len);
 	d.timeout_ms = READ_ONCE(cmd->timeout_ms);
 
+	if (d.data_len && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
+		/* fixedbufs is only for non-vectored io */
+		if (vec)
+			return -EINVAL;
+
+		ret = io_uring_cmd_import_fixed(d.addr, d.data_len,
+			nvme_is_write(&c) ? WRITE : READ, &iter, ioucmd,
+			issue_flags);
+		if (ret < 0)
+			return ret;
+
+		map_iter = &iter;
+	}
+
 	if (issue_flags & IO_URING_F_NONBLOCK) {
 		rq_flags |= REQ_NOWAIT;
 		blk_flags = BLK_MQ_REQ_NOWAIT;
 	}
 	if (issue_flags & IO_URING_F_IOPOLL)
@@ -515,13 +518,13 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 	req->timeout = d.timeout_ms ? msecs_to_jiffies(d.timeout_ms) : 0;
 
 	if (d.data_len) {
-		ret = nvme_map_user_request(req, d.addr,
-			d.data_len, nvme_to_user_ptr(d.metadata),
-			d.metadata_len, ioucmd, vec, issue_flags);
+		ret = nvme_map_user_request(req, d.addr, d.data_len,
+			nvme_to_user_ptr(d.metadata), d.metadata_len,
+			map_iter, vec);
 		if (ret)
 			goto out_free_req;
 	}
 
 	/* to free bio on completion, as req->bio will be null at that time */
-- 
2.45.2


* Re: [PATCH v4 0/3] nvme_map_user_request() cleanup
@ 2025-03-28 17:31 ` Keith Busch
From: Keith Busch @ 2025-03-28 17:31 UTC
  To: Caleb Sander Mateos
  Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg, Pavel Begunkov,
	Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel

On Fri, Mar 28, 2025 at 09:46:44AM -0600, Caleb Sander Mateos wrote:
> The first commit removes a WARN_ON_ONCE() checking userspace values.
> The last two move code out of nvme_map_user_request() that belongs better
> in its callers, and move the fixed buffer import before going async.
> As discussed in [1], this allows an NVMe passthru operation submitted at
> the same time as a ublk zero-copy buffer unregister operation to succeed
> even if the initial issue goes async. This can improve performance for
> userspace applications that submit the operations together like this and
> fall back to a slow path on failure. This is an alternate approach to [2],
> which moved the fixed buffer import to the io_uring layer.

Thanks, applied to nvme for-next. I'll replay this to nvme-6.15 after
block-6.15 rebases to Linus master.
 
> There will likely be conflicts with the parameter cleanup series Keith
> posted last month in [3].

No worries, this actually makes some of those cleanups easier.

* Re: [PATCH v4 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()
@ 2025-03-31  6:46   ` Kanchan Joshi
From: Kanchan Joshi @ 2025-03-31  6:46 UTC
  To: Caleb Sander Mateos, Keith Busch, Jens Axboe, Christoph Hellwig,
	Sagi Grimberg, Pavel Begunkov
  Cc: Chaitanya Kulkarni, linux-nvme, io-uring, linux-kernel

On 3/28/2025 9:16 PM, Caleb Sander Mateos wrote:
> For NVMe passthru operations with fixed buffers, the fixed buffer lookup
> happens in io_uring_cmd_import_fixed(). But nvme_uring_cmd_io() can
> return -EAGAIN first from nvme_alloc_user_request() if all tags in the
> tag set are in use. This ordering difference is observable when using
> UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If
> the NVMe passthru operation is followed by UBLK_U_IO_UNREGISTER_IO_BUF
> to unregister the fixed buffer and the NVMe passthru goes async, the
> fixed buffer lookup will fail because it happens after the unregister.

While the patch looks fine, I wonder what setup is required to
trigger/test this, given that io_uring NVMe passthru is on the char
device node and ublk does not take a char device as the backing file.
Care to explain?

* Re: [PATCH v4 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()
@ 2025-03-31 14:36     ` Keith Busch
  0 siblings, 0 replies; 7+ messages in thread
From: Keith Busch @ 2025-03-31 14:36 UTC
  To: Kanchan Joshi
  Cc: Caleb Sander Mateos, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov, Chaitanya Kulkarni, linux-nvme, io-uring,
	linux-kernel

On Mon, Mar 31, 2025 at 12:16:58PM +0530, Kanchan Joshi wrote:
> On 3/28/2025 9:16 PM, Caleb Sander Mateos wrote:
> > For NVMe passthru operations with fixed buffers, the fixed buffer lookup
> > happens in io_uring_cmd_import_fixed(). But nvme_uring_cmd_io() can
> > return -EAGAIN first from nvme_alloc_user_request() if all tags in the
> > tag set are in use. This ordering difference is observable when using
> > UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If
> > the NVMe passthru operation is followed by UBLK_U_IO_UNREGISTER_IO_BUF
> > to unregister the fixed buffer and the NVMe passthru goes async, the
> > fixed buffer lookup will fail because it happens after the unregister.
> 
> While the patch looks fine, I wonder what setup is required to
> trigger/test this, given that io_uring NVMe passthru is on the char
> device node and ublk does not take a char device as the backing file.
> Care to explain?

Not sure I understand the question. A ublk daemon can use anything it
wants on the backend. Are you just referring to the public ublksrv
implementation? That's not used here, if that's what you mean.
