public inbox for [email protected]
* [PATCH v3 0/3] nvme_map_user_request() cleanup
@ 2025-03-24 20:05 Caleb Sander Mateos
  2025-03-24 20:05 ` [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer Caleb Sander Mateos
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Caleb Sander Mateos @ 2025-03-24 20:05 UTC (permalink / raw)
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The first commit removes a WARN_ON_ONCE() checking userspace values.
The last 2 move code out of nvme_map_user_request() that belongs better
in its callers, and move the fixed buffer import before going async.
As discussed in [1], this allows an NVMe passthru operation submitted at
the same time as a ublk zero-copy buffer unregister operation to succeed
even if the initial issue goes async. This can improve performance for
userspace applications that submit the operations together like this and
fall back to a slow path only on failure. This is an alternate approach
to [2], which moved the fixed buffer import to the io_uring layer.
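
For reference, the userspace pattern in question looks roughly like the
sketch below. It is only an illustration, not part of this series:
nvme_fd, ublk_fd, buf_idx, the NVMe command fields, and the elided ublk
payload are placeholder assumptions, and it presumes kernel headers that
already have the ublk zero-copy uapi.

#include <liburing.h>
#include <linux/nvme_ioctl.h>
#include <linux/ublk_cmd.h>
#include <string.h>

/*
 * Sketch: queue an NVMe passthru uring_cmd that reads into a
 * ublk-registered fixed buffer, plus the buffer unregistration, in a
 * single submission. Assumes a ring set up with IORING_SETUP_SQE128.
 */
static void submit_passthru_and_unregister(struct io_uring *ring,
		int nvme_fd, int ublk_fd, unsigned int buf_idx,
		__u32 nsid, __u32 nlb)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct nvme_uring_cmd *cmd;

	/* NVMe read targeting registered (fixed) buffer buf_idx */
	memset(sqe, 0, 2 * sizeof(*sqe));		/* 128-byte SQE */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = nvme_fd;				/* NVMe char device */
	sqe->cmd_op = NVME_URING_CMD_IO;
	sqe->uring_cmd_flags = IORING_URING_CMD_FIXED;
	sqe->buf_index = buf_idx;
	cmd = (struct nvme_uring_cmd *)sqe->cmd;
	cmd->opcode = 0x02;				/* NVMe read */
	cmd->nsid = nsid;
	cmd->addr = 0;			/* start of the fixed buffer (assumed) */
	cmd->data_len = (nlb + 1) * 512;		/* assumes 512B LBAs */
	cmd->cdw12 = nlb;				/* 0's-based block count */

	/* unregister the ublk zero-copy buffer in the same submission */
	sqe = io_uring_get_sqe(ring);
	memset(sqe, 0, 2 * sizeof(*sqe));
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = ublk_fd;				/* ublk char device */
	sqe->cmd_op = UBLK_U_IO_UNREGISTER_IO_BUF;
	/* ublk payload (q_id, tag, buffer index) elided; see ublk_cmd.h */

	io_uring_submit(ring);
	/*
	 * If the passthru CQE reports failure (e.g. the buffer was already
	 * unregistered), fall back to a slower non-registered-buffer path.
	 */
}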

There will likely be conflicts with the parameter cleanup series Keith
posted last month in [3].

The series is based on block/for-6.15/io_uring, with commit 00817f0f1c45
("nvme-ioctl: fix leaked requests on mapping error") cherry-picked.

[1]: https://lore.kernel.org/io-uring/[email protected]/T/#u
[2]: https://lore.kernel.org/io-uring/[email protected]/
[3]: https://lore.kernel.org/all/[email protected]/T/#u

v3: Move the fixed buffer import before allocating a blk-mq request

v2: Fix iov_iter value passed to nvme_map_user_request()

Caleb Sander Mateos (3):
  nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer
  nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request()
  nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()

 drivers/nvme/host/ioctl.c | 68 +++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 32 deletions(-)

-- 
2.45.2



* [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer
  2025-03-24 20:05 [PATCH v3 0/3] nvme_map_user_request() cleanup Caleb Sander Mateos
@ 2025-03-24 20:05 ` Caleb Sander Mateos
  2025-03-25 12:50   ` Jens Axboe
  2025-03-24 20:05 ` [PATCH v3 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request() Caleb Sander Mateos
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Caleb Sander Mateos @ 2025-03-24 20:05 UTC (permalink / raw)
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The vectorized io_uring NVMe passthru opcodes don't yet support fixed
buffers. But since userspace can trigger this condition based on the
io_uring SQE parameters, it shouldn't cause a kernel warning.
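
As an illustration, the combination that used to trip the warning can be
set up roughly as in the sketch below; nvme_fd and the command payload
are placeholders, not part of this patch.

#include <liburing.h>
#include <linux/nvme_ioctl.h>

/*
 * Sketch only: a vectored NVMe passthru uring_cmd that also requests a
 * fixed buffer. With this patch the kernel rejects it with -EINVAL
 * instead of warning.
 */
static void prep_vectored_fixed_passthru(struct io_uring_sqe *sqe, int nvme_fd)
{
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = nvme_fd;				/* NVMe char device */
	sqe->cmd_op = NVME_URING_CMD_IO_VEC;		/* vectored passthru */
	sqe->uring_cmd_flags = IORING_URING_CMD_FIXED;	/* fixed buffer */
	sqe->buf_index = 0;
	/* struct nvme_uring_cmd payload in sqe->cmd elided */
}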

Signed-off-by: Caleb Sander Mateos <[email protected]>
Fixes: 23fd22e55b76 ("nvme: wire up fixed buffer support for nvme passthrough")
---
 drivers/nvme/host/ioctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index a35ff018da74..0634e24eac97 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -140,11 +140,11 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 
 	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
 		struct iov_iter iter;
 
 		/* fixedbufs is only for non-vectored io */
-		if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
+		if (flags & NVME_IOCTL_VEC) {
 			ret = -EINVAL;
 			goto out;
 		}
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
 				rq_data_dir(req), &iter, ioucmd,
-- 
2.45.2



* [PATCH v3 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request()
  2025-03-24 20:05 [PATCH v3 0/3] nvme_map_user_request() cleanup Caleb Sander Mateos
  2025-03-24 20:05 ` [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer Caleb Sander Mateos
@ 2025-03-24 20:05 ` Caleb Sander Mateos
  2025-03-25 12:52   ` Jens Axboe
  2025-03-24 20:05 ` [PATCH v3 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io() Caleb Sander Mateos
  2025-03-25 17:27 ` [PATCH v3 0/3] nvme_map_user_request() cleanup Chaitanya Kulkarni
  3 siblings, 1 reply; 8+ messages in thread
From: Caleb Sander Mateos @ 2025-03-24 20:05 UTC (permalink / raw)
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

The callers of nvme_map_user_request() (nvme_submit_user_cmd() and
nvme_uring_cmd_io()) allocate the request, so have them free it if
nvme_map_user_request() fails.

Signed-off-by: Caleb Sander Mateos <[email protected]>
---
 drivers/nvme/host/ioctl.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 0634e24eac97..f6576e7201c5 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -127,41 +127,39 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 	int ret;
 
 	if (!nvme_ctrl_sgl_supported(ctrl))
 		dev_warn_once(ctrl->device, "using unchecked data buffer\n");
 	if (has_metadata) {
-		if (!supports_metadata) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (!supports_metadata)
+			return -EINVAL;
+
 		if (!nvme_ctrl_meta_sgl_supported(ctrl))
 			dev_warn_once(ctrl->device,
 				      "using unchecked metadata buffer\n");
 	}
 
 	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
 		struct iov_iter iter;
 
 		/* fixedbufs is only for non-vectored io */
-		if (flags & NVME_IOCTL_VEC) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (flags & NVME_IOCTL_VEC)
+			return -EINVAL;
+
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
 				rq_data_dir(req), &iter, ioucmd,
 				iou_issue_flags);
 		if (ret < 0)
-			goto out;
+			return ret;
 		ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
 	} else {
 		ret = blk_rq_map_user_io(req, NULL, nvme_to_user_ptr(ubuffer),
 				bufflen, GFP_KERNEL, flags & NVME_IOCTL_VEC, 0,
 				0, rq_data_dir(req));
 	}
 
 	if (ret)
-		goto out;
+		return ret;
 
 	bio = req->bio;
 	if (bdev)
 		bio_set_dev(bio, bdev);
 
@@ -174,12 +172,10 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 	return ret;
 
 out_unmap:
 	if (bio)
 		blk_rq_unmap_user(bio);
-out:
-	blk_mq_free_request(req);
 	return ret;
 }
 
 static int nvme_submit_user_cmd(struct request_queue *q,
 		struct nvme_command *cmd, u64 ubuffer, unsigned bufflen,
@@ -200,11 +196,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	req->timeout = timeout;
 	if (ubuffer && bufflen) {
 		ret = nvme_map_user_request(req, ubuffer, bufflen, meta_buffer,
 				meta_len, NULL, flags, 0);
 		if (ret)
-			return ret;
+			goto out_free_req;
 	}
 
 	bio = req->bio;
 	ctrl = nvme_req(req)->ctrl;
 
@@ -212,15 +208,16 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	ret = nvme_execute_rq(req, false);
 	if (result)
 		*result = le64_to_cpu(nvme_req(req)->result.u64);
 	if (bio)
 		blk_rq_unmap_user(bio);
-	blk_mq_free_request(req);
 
 	if (effects)
 		nvme_passthru_end(ctrl, ns, effects, cmd, ret);
 
+out_free_req:
+	blk_mq_free_request(req);
 	return ret;
 }
 
 static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
 {
@@ -520,20 +517,24 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (d.data_len) {
 		ret = nvme_map_user_request(req, d.addr,
 			d.data_len, nvme_to_user_ptr(d.metadata),
 			d.metadata_len, ioucmd, vec, issue_flags);
 		if (ret)
-			return ret;
+			goto out_free_req;
 	}
 
 	/* to free bio on completion, as req->bio will be null at that time */
 	pdu->bio = req->bio;
 	pdu->req = req;
 	req->end_io_data = ioucmd;
 	req->end_io = nvme_uring_cmd_end_io;
 	blk_execute_rq_nowait(req, false);
 	return -EIOCBQUEUED;
+
+out_free_req:
+	blk_mq_free_request(req);
+	return ret;
 }
 
 static bool is_ctrl_ioctl(unsigned int cmd)
 {
 	if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
-- 
2.45.2



* [PATCH v3 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()
  2025-03-24 20:05 [PATCH v3 0/3] nvme_map_user_request() cleanup Caleb Sander Mateos
  2025-03-24 20:05 ` [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer Caleb Sander Mateos
  2025-03-24 20:05 ` [PATCH v3 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request() Caleb Sander Mateos
@ 2025-03-24 20:05 ` Caleb Sander Mateos
  2025-03-25 12:52   ` Jens Axboe
  2025-03-25 17:27 ` [PATCH v3 0/3] nvme_map_user_request() cleanup Chaitanya Kulkarni
  3 siblings, 1 reply; 8+ messages in thread
From: Caleb Sander Mateos @ 2025-03-24 20:05 UTC (permalink / raw)
  To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel,
	Caleb Sander Mateos

nvme_map_user_request() is called from both nvme_submit_user_cmd() and
nvme_uring_cmd_io(). But the ioucmd branch is only applicable to
nvme_uring_cmd_io(). Move it to nvme_uring_cmd_io() and just pass the
resulting iov_iter to nvme_map_user_request().

For NVMe passthru operations with fixed buffers, the fixed buffer lookup
happens in io_uring_cmd_import_fixed(). But nvme_uring_cmd_io() can
return -EAGAIN first from nvme_alloc_user_request() if all tags in the
tag set are in use. This ordering difference is observable when using
UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the fixed buffer table. If
the NVMe passthru operation is followed by UBLK_U_IO_UNREGISTER_IO_BUF
to unregister the fixed buffer and the NVMe passthru goes async, the
fixed buffer lookup will fail because it happens after the unregister.

Userspace should not depend on the order in which io_uring issues SQEs
submitted in parallel, but it may try submitting the SQEs together and
fall back on a slow path if the fixed buffer lookup fails. To make the
fast path more likely, do the import before nvme_alloc_user_request().

Signed-off-by: Caleb Sander Mateos <[email protected]>
---
 drivers/nvme/host/ioctl.c | 45 +++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index f6576e7201c5..da0eee21ecd0 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -112,12 +112,11 @@ static struct request *nvme_alloc_user_request(struct request_queue *q,
 	return req;
 }
 
 static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
-		struct io_uring_cmd *ioucmd, unsigned int flags,
-		unsigned int iou_issue_flags)
+		struct iov_iter *iter, unsigned int flags)
 {
 	struct request_queue *q = req->q;
 	struct nvme_ns *ns = q->queuedata;
 	struct block_device *bdev = ns ? ns->disk->part0 : NULL;
 	bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
@@ -135,28 +134,16 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
 		if (!nvme_ctrl_meta_sgl_supported(ctrl))
 			dev_warn_once(ctrl->device,
 				      "using unchecked metadata buffer\n");
 	}
 
-	if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
-		struct iov_iter iter;
-
-		/* fixedbufs is only for non-vectored io */
-		if (flags & NVME_IOCTL_VEC)
-			return -EINVAL;
-
-		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
-				rq_data_dir(req), &iter, ioucmd,
-				iou_issue_flags);
-		if (ret < 0)
-			return ret;
-		ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
-	} else {
+	if (iter)
+		ret = blk_rq_map_user_iov(q, req, NULL, iter, GFP_KERNEL);
+	else
 		ret = blk_rq_map_user_io(req, NULL, nvme_to_user_ptr(ubuffer),
 				bufflen, GFP_KERNEL, flags & NVME_IOCTL_VEC, 0,
 				0, rq_data_dir(req));
-	}
 
 	if (ret)
 		return ret;
 
 	bio = req->bio;
@@ -194,11 +181,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		return PTR_ERR(req);
 
 	req->timeout = timeout;
 	if (ubuffer && bufflen) {
 		ret = nvme_map_user_request(req, ubuffer, bufflen, meta_buffer,
-				meta_len, NULL, flags, 0);
+				meta_len, NULL, flags);
 		if (ret)
 			goto out_free_req;
 	}
 
 	bio = req->bio;
@@ -465,10 +452,12 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
 	const struct nvme_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
 	struct request_queue *q = ns ? ns->queue : ctrl->admin_q;
 	struct nvme_uring_data d;
 	struct nvme_command c;
+	struct iov_iter iter;
+	struct iov_iter *map_iter = NULL;
 	struct request *req;
 	blk_opf_t rq_flags = REQ_ALLOC_CACHE;
 	blk_mq_req_flags_t blk_flags = 0;
 	int ret;
 
@@ -500,10 +489,24 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	d.addr = READ_ONCE(cmd->addr);
 	d.data_len = READ_ONCE(cmd->data_len);
 	d.metadata_len = READ_ONCE(cmd->metadata_len);
 	d.timeout_ms = READ_ONCE(cmd->timeout_ms);
 
+	if (d.data_len && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
+		/* fixedbufs is only for non-vectored io */
+		if (vec)
+			return -EINVAL;
+
+		ret = io_uring_cmd_import_fixed(d.addr, d.data_len,
+			nvme_is_write(&c) ? WRITE : READ, &iter, ioucmd,
+			issue_flags);
+		if (ret < 0)
+			return ret;
+
+		map_iter = &iter;
+	}
+
 	if (issue_flags & IO_URING_F_NONBLOCK) {
 		rq_flags |= REQ_NOWAIT;
 		blk_flags = BLK_MQ_REQ_NOWAIT;
 	}
 	if (issue_flags & IO_URING_F_IOPOLL)
@@ -513,13 +516,13 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 	req->timeout = d.timeout_ms ? msecs_to_jiffies(d.timeout_ms) : 0;
 
 	if (d.data_len) {
-		ret = nvme_map_user_request(req, d.addr,
-			d.data_len, nvme_to_user_ptr(d.metadata),
-			d.metadata_len, ioucmd, vec, issue_flags);
+		ret = nvme_map_user_request(req, d.addr, d.data_len,
+			nvme_to_user_ptr(d.metadata), d.metadata_len,
+			map_iter, vec);
 		if (ret)
 			goto out_free_req;
 	}
 
 	/* to free bio on completion, as req->bio will be null at that time */
-- 
2.45.2



* Re: [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer
  2025-03-24 20:05 ` [PATCH v3 1/3] nvme/ioctl: don't warn on vectorized uring_cmd with fixed buffer Caleb Sander Mateos
@ 2025-03-25 12:50   ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2025-03-25 12:50 UTC (permalink / raw)
  To: Caleb Sander Mateos, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel

On 3/24/25 2:05 PM, Caleb Sander Mateos wrote:
> The vectorized io_uring NVMe passthru opcodes don't yet support fixed
> buffers. But since userspace can trigger this condition based on the
> io_uring SQE parameters, it shouldn't cause a kernel warning.

Looks good to me:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe



* Re: [PATCH v3 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request()
  2025-03-24 20:05 ` [PATCH v3 2/3] nvme/ioctl: move blk_mq_free_request() out of nvme_map_user_request() Caleb Sander Mateos
@ 2025-03-25 12:52   ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2025-03-25 12:52 UTC (permalink / raw)
  To: Caleb Sander Mateos, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe


* Re: [PATCH v3 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io()
  2025-03-24 20:05 ` [PATCH v3 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io() Caleb Sander Mateos
@ 2025-03-25 12:52   ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2025-03-25 12:52 UTC (permalink / raw)
  To: Caleb Sander Mateos, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, Pavel Begunkov
  Cc: Xinyu Zhang, linux-nvme, io-uring, linux-kernel

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe


* Re: [PATCH v3 0/3] nvme_map_user_request() cleanup
  2025-03-24 20:05 [PATCH v3 0/3] nvme_map_user_request() cleanup Caleb Sander Mateos
                   ` (2 preceding siblings ...)
  2025-03-24 20:05 ` [PATCH v3 3/3] nvme/ioctl: move fixed buffer lookup to nvme_uring_cmd_io() Caleb Sander Mateos
@ 2025-03-25 17:27 ` Chaitanya Kulkarni
  3 siblings, 0 replies; 8+ messages in thread
From: Chaitanya Kulkarni @ 2025-03-25 17:27 UTC (permalink / raw)
  To: Caleb Sander Mateos, Keith Busch, Jens Axboe, Christoph Hellwig,
	Sagi Grimberg, Pavel Begunkov
  Cc: Xinyu Zhang, [email protected],
	[email protected], [email protected]

On 3/24/25 13:05, Caleb Sander Mateos wrote:
> The first commit removes a WARN_ON_ONCE() checking userspace values.
> The last 2 move code out of nvme_map_user_request() that belongs better
> in its callers, and move the fixed buffer import before going async.
> As discussed in [1], this allows an NVMe passthru operation submitted at
> the same time as a ublk zero-copy buffer unregister operation to succeed
> even if the initial issue goes async. This can improve performance for
> userspace applications that submit the operations together like this and
> fall back to a slow path only on failure. This is an alternate approach
> to [2], which moved the fixed buffer import to the io_uring layer.
>
> There will likely be conflicts with the parameter cleanup series Keith
> posted last month in [3].
>
> The series is based on block/for-6.15/io_uring, with commit 00817f0f1c45
> ("nvme-ioctl: fix leaked requests on mapping error") cherry-picked.
>
> [1]: https://lore.kernel.org/io-uring/[email protected]/T/#u
> [2]: https://lore.kernel.org/io-uring/[email protected]/
> [3]: https://lore.kernel.org/all/[email protected]/T/#u
>
> v3: Move the fixed buffer import before allocating a blk-mq request
>
> v2: Fix iov_iter value passed to nvme_map_user_request()

Looks good to me.

Reviewed-by: Chaitanya Kulkarni <[email protected]>

-ck



