* [RFC 0/3] Async nvme passthrough
From: Kanchan Joshi @ 2021-03-02 16:07 UTC
To: axboe, hch, kbusch
Cc: io-uring, linux-nvme, anuj20.g, javier.gonz, Kanchan Joshi
This series adds async passthrough capability for the nvme block-dev over
io_uring_cmd. The patches are on top of Jens' uring-cmd branch:
https://git.kernel.dk/cgit/linux-block/log/?h=io_uring-fops.v3
The application is expected to allocate the passthrough command structure,
set it up in the usual way, and pass its address via "block_uring_cmd->addr".
On completion, any ioctl-specific buffers/fields are updated first, and the
CQE is then posted with the completion status.
Kanchan Joshi (3):
io_uring: add helper for uring_cmd completion in submitter-task
nvme: passthrough helper with callback
nvme: wire up support for async passthrough
drivers/nvme/host/core.c | 225 ++++++++++++++++++++++++++++++++-------
drivers/nvme/host/nvme.h | 3 +
drivers/nvme/host/pci.c | 1 +
fs/io_uring.c | 37 ++++++-
include/linux/io_uring.h | 8 ++
5 files changed, 232 insertions(+), 42 deletions(-)
--
2.25.1
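For illustration, a minimal userspace sketch of that setup. The
block_uring_cmd field names and the SQE plumbing of the io_uring-fops.v3
branch are assumptions here, not settled ABI:

/*
 * Sketch: fill a regular nvme_passthru_cmd for an NVMe Read and hand
 * its address to the uring_cmd, as the cover letter describes. The
 * bcmd->ioctl_cmd/addr fields are assumed from this series.
 */
#include <linux/nvme_ioctl.h>
#include <stdint.h>
#include <string.h>

static void setup_async_pt_read(struct nvme_passthru_cmd *ptcmd,
				void *buf, __u32 len, __u32 nsid,
				__u64 slba, __u16 nlb)
{
	memset(ptcmd, 0, sizeof(*ptcmd));
	ptcmd->opcode = 0x02;			/* NVMe Read */
	ptcmd->nsid = nsid;
	ptcmd->addr = (__u64)(uintptr_t)buf;
	ptcmd->data_len = len;
	ptcmd->cdw10 = (__u32)slba;		/* starting LBA, low bits */
	ptcmd->cdw11 = (__u32)(slba >> 32);	/* starting LBA, high bits */
	ptcmd->cdw12 = nlb - 1;			/* 0's based block count */
	/*
	 * Then, per the cover letter:
	 *   bcmd->ioctl_cmd = NVME_IOCTL_IO_CMD;
	 *   bcmd->addr = (__u64)(uintptr_t)ptcmd;
	 * submit the SQE and reap a CQE carrying the completion status.
	 */
}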
* [RFC 1/3] io_uring: add helper for uring_cmd completion in submitter-task
From: Kanchan Joshi @ 2021-03-02 16:07 UTC
To: axboe, hch, kbusch
Cc: io-uring, linux-nvme, anuj20.g, javier.gonz, Kanchan Joshi
Completion of a uring_cmd ioctl may involve updating certain
ioctl-specific fields, which must be done in the context of the
original submitter task.
Introduce 'uring_cmd_complete_in_task', which a driver can use for this
purpose. The helper drives the task-work infrastructure, while the
driver implements the cmd-specific handling in a callback.
Signed-off-by: Kanchan Joshi <[email protected]>
---
fs/io_uring.c | 37 +++++++++++++++++++++++++++++++++----
include/linux/io_uring.h | 8 ++++++++
2 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 116ac0f179e0..d4ed1326b9f1 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -775,9 +775,12 @@ struct io_kiocb {
/* use only after cleaning per-op data, see io_clean_op() */
struct io_completion compl;
};
-
- /* opcode allocated if it needs to store data for async defer */
- void *async_data;
+ union {
+ /* opcode allocated if it needs to store data for async defer */
+ void *async_data;
+ /* used for uring-cmd, when driver needs to update in task */
+ void (*driver_cb)(struct io_uring_cmd *cmd);
+ };
u8 opcode;
/* polled IO has completed */
u8 iopoll_completed;
@@ -1719,7 +1722,7 @@ static void io_dismantle_req(struct io_kiocb *req)
{
io_clean_op(req);
- if (req->async_data)
+ if (io_op_defs[req->opcode].async_size && req->async_data)
kfree(req->async_data);
if (req->file)
io_put_file(req, req->file, (req->flags & REQ_F_FIXED_FILE));
@@ -2035,6 +2038,31 @@ static void io_req_task_submit(struct callback_head *cb)
__io_req_task_submit(req);
}
+static void uring_cmd_work(struct callback_head *cb)
+{
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+ struct io_uring_cmd *cmd = &req->uring_cmd;
+
+ req->driver_cb(cmd);
+}
+int uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
+ void (*driver_cb)(struct io_uring_cmd *))
+{
+ int ret;
+ struct io_kiocb *req = container_of(ioucmd, struct io_kiocb, uring_cmd);
+
+ req->driver_cb = driver_cb;
+ req->task_work.func = uring_cmd_work;
+ ret = io_req_task_work_add(req);
+ if (unlikely(ret)) {
+ req->result = -ECANCELED;
+ percpu_ref_get(&req->ctx->refs);
+ io_req_task_work_add_fallback(req, io_req_task_cancel);
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(uring_cmd_complete_in_task);
+
static void io_req_task_queue(struct io_kiocb *req)
{
int ret;
@@ -3537,6 +3565,7 @@ void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret)
req_set_fail_links(req);
io_req_complete(req, ret);
}
+EXPORT_SYMBOL_GPL(io_uring_cmd_done);
static int io_uring_cmd_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 5849b15334b8..dba8f0b3da9f 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -41,6 +41,8 @@ struct io_uring_cmd {
#if defined(CONFIG_IO_URING)
void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret);
+int uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
+ void (*driver_cb)(struct io_uring_cmd *));
struct sock *io_uring_get_socket(struct file *file);
void __io_uring_task_cancel(void);
void __io_uring_files_cancel(struct files_struct *files);
@@ -65,6 +67,12 @@ static inline void io_uring_free(struct task_struct *tsk)
static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret)
{
}
+
+static inline int uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
+ void (*driver_cb)(struct io_uring_cmd *))
+{
+ return -1;
+}
static inline struct sock *io_uring_get_socket(struct file *file)
{
return NULL;
--
2.25.1
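A minimal sketch of the intended driver-side usage, with invented my_*
names (patch 3 wires this up for nvme):

/*
 * Sketch: the driver's end_io runs in IRQ context, so user memory
 * cannot be touched there; hand completion off to the submitter task.
 */
static void my_task_cb(struct io_uring_cmd *ioucmd)
{
	/* submitter-task context: copy_to_user()/put_user() are safe */
	io_uring_cmd_done(ioucmd, 0);
}

static void my_end_io(struct request *req, blk_status_t err)
{
	struct io_uring_cmd *ioucmd = req->end_io_data;

	/* queue my_task_cb via task-work; the CQE is posted from there */
	uring_cmd_complete_in_task(ioucmd, my_task_cb);
	blk_mq_free_request(req);
}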
* [RFC 2/3] nvme: passthrough helper with callback
From: Kanchan Joshi @ 2021-03-02 16:07 UTC
To: axboe, hch, kbusch
Cc: io-uring, linux-nvme, anuj20.g, javier.gonz, Kanchan Joshi
This is a prep patch that allows ioctl completion to be decoupled
from submission.
Signed-off-by: Kanchan Joshi <[email protected]>
---
drivers/nvme/host/core.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e68a8c4ac5a6..15c9490b593f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1126,7 +1126,8 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
}
}
-void nvme_execute_passthru_rq(struct request *rq)
+void nvme_execute_passthru_rq_common(struct request *rq,
+ rq_end_io_fn *done)
{
struct nvme_command *cmd = nvme_req(rq)->cmd;
struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
@@ -1135,9 +1136,17 @@ void nvme_execute_passthru_rq(struct request *rq)
u32 effects;
effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
- blk_execute_rq(disk, rq, 0);
+ if (!done)
+ blk_execute_rq(disk, rq, 0);
+ else
+ blk_execute_rq_nowait(disk, rq, 0, done);
nvme_passthru_end(ctrl, effects);
}
+
+void nvme_execute_passthru_rq(struct request *rq)
+{
+ return nvme_execute_passthru_rq_common(rq, NULL);
+}
EXPORT_SYMBOL_NS_GPL(nvme_execute_passthru_rq, NVME_TARGET_PASSTHRU);
static int nvme_submit_user_cmd(struct request_queue *q,
--
2.25.1
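For reference, blk_execute_rq() is itself built on the nowait variant
plus an on-stack completion, which is why routing both paths through one
helper keeps the sync behaviour intact. Roughly (a paraphrase, not the
verbatim block-layer code):

static void blk_end_sync_rq(struct request *rq, blk_status_t error)
{
	struct completion *waiting = rq->end_io_data;

	rq->end_io_data = NULL;
	complete(waiting);	/* wake up the submitter */
}

void blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
{
	DECLARE_COMPLETION_ONSTACK(wait);

	rq->end_io_data = &wait;
	blk_execute_rq_nowait(bd_disk, rq, at_head, blk_end_sync_rq);
	wait_for_completion_io(&wait);
}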
* [RFC 3/3] nvme: wire up support for async passthrough
From: Kanchan Joshi @ 2021-03-02 16:07 UTC
To: axboe, hch, kbusch
Cc: io-uring, linux-nvme, anuj20.g, javier.gonz, Kanchan Joshi
Introduce a handler for mq_ops->uring_cmd(), implementing async
passthrough on the block device.
The passthrough command is submitted to the device as before, but it is
not waited upon. When the device notifies completion, the driver sets up
a task-work based callback which first updates the ioctl-specific
fields/buffers and then completes the uring_cmd.
Signed-off-by: Kanchan Joshi <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
---
drivers/nvme/host/core.c | 212 ++++++++++++++++++++++++++++++++-------
drivers/nvme/host/nvme.h | 3 +
drivers/nvme/host/pci.c | 1 +
3 files changed, 180 insertions(+), 36 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 15c9490b593f..e894ead04935 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1053,6 +1053,89 @@ static void *nvme_add_user_metadata(struct bio *bio, void __user *ubuf,
return ERR_PTR(ret);
}
+/*
+ * Convert integer values from ioctl structures to user pointers, silently
+ * ignoring the upper bits in the compat case to match behaviour of 32-bit
+ * kernels.
+ */
+static void __user *nvme_to_user_ptr(uintptr_t ptrval)
+{
+ if (in_compat_syscall())
+ ptrval = (compat_uptr_t)ptrval;
+ return (void __user *)ptrval;
+}
+/*
+ * This is carved within the block_uring_cmd, to avoid dynamic allocation.
+ * Care should be taken not to grow this beyond what is available.
+ */
+struct uring_cmd_data {
+ union {
+ struct bio *bio;
+ u64 result; /* nvme cmd result */
+ };
+ void *meta; /* kernel-resident buffer */
+ int status; /* nvme cmd status */
+};
+
+static inline u64 *ucmd_data_addr(struct io_uring_cmd *ioucmd)
+{
+ return &(((struct block_uring_cmd *)&ioucmd->pdu)->unused[0]);
+}
+
+static void ioucmd_task_cb(struct io_uring_cmd *ioucmd)
+{
+ struct uring_cmd_data *ucd;
+ struct nvme_passthru_cmd __user *ptcmd;
+ struct block_uring_cmd *bcmd;
+
+ bcmd = (struct block_uring_cmd *) &ioucmd->pdu;
+ ptcmd = (void __user *) bcmd->addr;
+ ucd = (struct uring_cmd_data *) ucmd_data_addr(ioucmd);
+
+ /* handle meta update */
+ if (ucd->meta) {
+ void __user *umeta = nvme_to_user_ptr(ptcmd->metadata);
+
+ if (!ucd->status)
+ if (copy_to_user(umeta, ucd->meta, ptcmd->metadata_len))
+ ucd->status = -EFAULT;
+ kfree(ucd->meta);
+ }
+ /* handle result update */
+ if (put_user(ucd->result, (u32 __user *)&ptcmd->result))
+ ucd->status = -EFAULT;
+ io_uring_cmd_done(ioucmd, ucd->status);
+}
+
+static void nvme_end_async_pt(struct request *req, blk_status_t err)
+{
+ struct io_uring_cmd *ioucmd;
+ struct uring_cmd_data *ucd;
+ struct bio *bio;
+ int ret;
+
+ ioucmd = req->end_io_data;
+ ucd = (struct uring_cmd_data *) ucmd_data_addr(ioucmd);
+ /* extract bio before reusing the same field for status */
+ bio = ucd->bio;
+
+ if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
+ ucd->status = -EINTR;
+ else
+ ucd->status = nvme_req(req)->status;
+ ucd->result = le64_to_cpu(nvme_req(req)->result.u64);
+
+ /* this takes care of setting up task-work */
+ ret = uring_cmd_complete_in_task(ioucmd, ioucmd_task_cb);
+ if (ret < 0)
+ kfree(ucd->meta);
+
+ /* unmap pages, free bio, nvme command and request */
+ blk_rq_unmap_user(bio);
+ kfree(nvme_req(req)->cmd);
+ blk_mq_free_request(req);
+}
+
static u32 nvme_known_admin_effects(u8 opcode)
{
switch (opcode) {
@@ -1149,10 +1232,27 @@ void nvme_execute_passthru_rq(struct request *rq)
}
EXPORT_SYMBOL_NS_GPL(nvme_execute_passthru_rq, NVME_TARGET_PASSTHRU);
+static void nvme_setup_uring_cmd_data(struct request *rq,
+ struct io_uring_cmd *ioucmd, void *meta, bool write)
+{
+ struct uring_cmd_data *ucd;
+
+ ucd = (struct uring_cmd_data *) ucmd_data_addr(ioucmd);
+ /* to free bio on completion, as req->bio will be null at that time */
+ ucd->bio = rq->bio;
+ /* meta update is required only for read requests */
+ if (meta && !write)
+ ucd->meta = meta;
+ else
+ ucd->meta = NULL;
+ rq->end_io_data = ioucmd;
+}
+
static int nvme_submit_user_cmd(struct request_queue *q,
struct nvme_command *cmd, void __user *ubuffer,
unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
- u32 meta_seed, u64 *result, unsigned timeout)
+ u32 meta_seed, u64 *result, unsigned int timeout,
+ struct io_uring_cmd *ioucmd)
{
bool write = nvme_is_write(cmd);
struct nvme_ns *ns = q->queuedata;
@@ -1188,6 +1288,11 @@ static int nvme_submit_user_cmd(struct request_queue *q,
req->cmd_flags |= REQ_INTEGRITY;
}
}
+ if (ioucmd) { /* async handling */
+ nvme_setup_uring_cmd_data(req, ioucmd, meta, write);
+ nvme_execute_passthru_rq_common(req, nvme_end_async_pt);
+ return 0;
+ }
nvme_execute_passthru_rq(req);
if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
@@ -1553,18 +1658,6 @@ static void nvme_enable_aen(struct nvme_ctrl *ctrl)
queue_work(nvme_wq, &ctrl->async_event_work);
}
-/*
- * Convert integer values from ioctl structures to user pointers, silently
- * ignoring the upper bits in the compat case to match behaviour of 32-bit
- * kernels.
- */
-static void __user *nvme_to_user_ptr(uintptr_t ptrval)
-{
- if (in_compat_syscall())
- ptrval = (compat_uptr_t)ptrval;
- return (void __user *)ptrval;
-}
-
static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
{
struct nvme_user_io io;
@@ -1625,14 +1718,16 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
return nvme_submit_user_cmd(ns->queue, &c,
nvme_to_user_ptr(io.addr), length,
- metadata, meta_len, lower_32_bits(io.slba), NULL, 0);
+ metadata, meta_len, lower_32_bits(io.slba), NULL, 0,
+ NULL);
}
static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
- struct nvme_passthru_cmd __user *ucmd)
+ struct nvme_passthru_cmd __user *ucmd,
+ struct io_uring_cmd *ioucmd)
{
struct nvme_passthru_cmd cmd;
- struct nvme_command c;
+ struct nvme_command c, *cptr;
unsigned timeout = 0;
u64 result;
int status;
@@ -1644,32 +1739,44 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
if (cmd.flags)
return -EINVAL;
- memset(&c, 0, sizeof(c));
- c.common.opcode = cmd.opcode;
- c.common.flags = cmd.flags;
- c.common.nsid = cpu_to_le32(cmd.nsid);
- c.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
- c.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
- c.common.cdw10 = cpu_to_le32(cmd.cdw10);
- c.common.cdw11 = cpu_to_le32(cmd.cdw11);
- c.common.cdw12 = cpu_to_le32(cmd.cdw12);
- c.common.cdw13 = cpu_to_le32(cmd.cdw13);
- c.common.cdw14 = cpu_to_le32(cmd.cdw14);
- c.common.cdw15 = cpu_to_le32(cmd.cdw15);
+ if (!ioucmd)
+ cptr = &c;
+ else {
+ /*for async - allocate cmd dynamically */
+ cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
+ if (!cptr)
+ return -ENOMEM;
+ }
+
+ memset(cptr, 0, sizeof(c));
+ cptr->common.opcode = cmd.opcode;
+ cptr->common.flags = cmd.flags;
+ cptr->common.nsid = cpu_to_le32(cmd.nsid);
+ cptr->common.cdw2[0] = cpu_to_le32(cmd.cdw2);
+ cptr->common.cdw2[1] = cpu_to_le32(cmd.cdw3);
+ cptr->common.cdw10 = cpu_to_le32(cmd.cdw10);
+ cptr->common.cdw11 = cpu_to_le32(cmd.cdw11);
+ cptr->common.cdw12 = cpu_to_le32(cmd.cdw12);
+ cptr->common.cdw13 = cpu_to_le32(cmd.cdw13);
+ cptr->common.cdw14 = cpu_to_le32(cmd.cdw14);
+ cptr->common.cdw15 = cpu_to_le32(cmd.cdw15);
if (cmd.timeout_ms)
timeout = msecs_to_jiffies(cmd.timeout_ms);
- status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
+ status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, cptr,
nvme_to_user_ptr(cmd.addr), cmd.data_len,
nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
- 0, &result, timeout);
+ 0, &result, timeout, ioucmd);
- if (status >= 0) {
+ if (!ioucmd && status >= 0) {
if (put_user(result, &ucmd->result))
return -EFAULT;
}
+ if (ioucmd && status < 0)
+ kfree(cptr);
+
return status;
}
@@ -1707,7 +1814,7 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
nvme_to_user_ptr(cmd.addr), cmd.data_len,
nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
- 0, &cmd.result, timeout);
+ 0, &cmd.result, timeout, NULL);
if (status >= 0) {
if (put_user(cmd.result, &ucmd->result))
@@ -1769,7 +1876,7 @@ static int nvme_handle_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
switch (cmd) {
case NVME_IOCTL_ADMIN_CMD:
- ret = nvme_user_cmd(ctrl, NULL, argp);
+ ret = nvme_user_cmd(ctrl, NULL, argp, NULL);
break;
case NVME_IOCTL_ADMIN64_CMD:
ret = nvme_user_cmd64(ctrl, NULL, argp);
@@ -1808,7 +1915,7 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
ret = ns->head->ns_id;
break;
case NVME_IOCTL_IO_CMD:
- ret = nvme_user_cmd(ns->ctrl, ns, argp);
+ ret = nvme_user_cmd(ns->ctrl, ns, argp, NULL);
break;
case NVME_IOCTL_SUBMIT_IO:
ret = nvme_submit_io(ns, argp);
@@ -1827,6 +1934,39 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
return ret;
}
+int nvme_uring_cmd(struct request_queue *q, struct io_uring_cmd *ioucmd,
+ enum io_uring_cmd_flags flags)
+{
+ struct nvme_ns_head *head = NULL;
+ struct block_device *bdev = I_BDEV(ioucmd->file->f_mapping->host);
+ struct block_uring_cmd *bcmd = (struct block_uring_cmd *)&ioucmd->pdu;
+ struct nvme_ns *ns;
+ int srcu_idx, ret;
+ void __user *argp = (void __user *) bcmd->addr;
+
+ BUILD_BUG_ON(sizeof(struct uring_cmd_data) >
+ sizeof(struct block_uring_cmd) -
+ offsetof(struct block_uring_cmd, unused));
+
+ ns = nvme_get_ns_from_disk(bdev->bd_disk, &head, &srcu_idx);
+ if (unlikely(!ns))
+ return -EWOULDBLOCK;
+
+ switch (bcmd->ioctl_cmd) {
+ case NVME_IOCTL_IO_CMD:
+ ret = nvme_user_cmd(ns->ctrl, ns, argp, ioucmd);
+ break;
+ default:
+ ret = -ENOTTY;
+ }
+
+ if (ret >= 0)
+ ret = -EIOCBQUEUED;
+ nvme_put_ns_from_disk(head, srcu_idx);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_uring_cmd);
+
#ifdef CONFIG_COMPAT
struct nvme_user_io32 {
__u8 opcode;
@@ -3318,7 +3458,7 @@ static int nvme_dev_user_cmd(struct nvme_ctrl *ctrl, void __user *argp)
kref_get(&ns->kref);
up_read(&ctrl->namespaces_rwsem);
- ret = nvme_user_cmd(ctrl, ns, argp);
+ ret = nvme_user_cmd(ctrl, ns, argp, NULL);
nvme_put_ns(ns);
return ret;
@@ -3335,7 +3475,7 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
switch (cmd) {
case NVME_IOCTL_ADMIN_CMD:
- return nvme_user_cmd(ctrl, NULL, argp);
+ return nvme_user_cmd(ctrl, NULL, argp, NULL);
case NVME_IOCTL_ADMIN64_CMD:
return nvme_user_cmd64(ctrl, NULL, argp);
case NVME_IOCTL_IO_CMD:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 07b34175c6ce..19b58311d8f7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -19,6 +19,7 @@
#include <linux/t10-pi.h>
#include <trace/events/block.h>
+#include <linux/io_uring.h>
extern unsigned int nvme_io_timeout;
#define NVME_IO_TIMEOUT (nvme_io_timeout * HZ)
@@ -620,6 +621,8 @@ int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
void nvme_start_freeze(struct nvme_ctrl *ctrl);
#define NVME_QID_ANY -1
+int nvme_uring_cmd(struct request_queue *q, struct io_uring_cmd *ucmd,
+ enum io_uring_cmd_flags flags);
struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags);
void nvme_cleanup_cmd(struct request *req);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 7b6632c00ffd..6c84dc964259 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1629,6 +1629,7 @@ static const struct blk_mq_ops nvme_mq_ops = {
.map_queues = nvme_pci_map_queues,
.timeout = nvme_timeout,
.poll = nvme_poll,
+ .uring_cmd = nvme_uring_cmd,
};
static void nvme_dev_remove_admin(struct nvme_dev *dev)
--
2.25.1
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Chaitanya Kulkarni @ 2021-03-03 7:34 UTC
To: Kanchan Joshi, [email protected], [email protected], [email protected]
Cc: [email protected], [email protected],
[email protected], [email protected]
On 3/2/21 23:22, Kanchan Joshi wrote:
> + if (!ioucmd)
> + cptr = &c;
> + else {
> + /*for async - allocate cmd dynamically */
> + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
> + if (!cptr)
> + return -ENOMEM;
> + }
> +
> + memset(cptr, 0, sizeof(c));
Why not kzalloc and remove memset() ?
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Chaitanya Kulkarni @ 2021-03-03 7:35 UTC
To: Kanchan Joshi, [email protected], [email protected], [email protected]
Cc: [email protected], [email protected],
[email protected], [email protected]
On 3/2/21 23:22, Kanchan Joshi wrote:
> + switch (bcmd->ioctl_cmd) {
> + case NVME_IOCTL_IO_CMD:
> + ret = nvme_user_cmd(ns->ctrl, ns, argp, ioucmd);
> + break;
> + default:
> + ret = -ENOTTY;
> + }
Switch for one case? Why not use if/else?
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Chaitanya Kulkarni @ 2021-03-03 7:37 UTC
To: Kanchan Joshi, [email protected], [email protected], [email protected]
Cc: [email protected], [email protected],
[email protected], [email protected]
On 3/2/21 23:22, Kanchan Joshi wrote:
> +/*
> + * Convert integer values from ioctl structures to user pointers, silently
> + * ignoring the upper bits in the compat case to match behaviour of 32-bit
> + * kernels.
> + */
> +static void __user *nvme_to_user_ptr(uintptr_t ptrval)
> +{
> + if (in_compat_syscall())
> + ptrval = (compat_uptr_t)ptrval;
> + return (void __user *)ptrval;
> +}
> +/*
A newline is missing before the above comment.
> + * This is carved within the block_uring_cmd, to avoid dynamic allocation.
> + * Care should be taken not to grow this beyond what is available.
> + */
* Re: [RFC 2/3] nvme: passthrough helper with callback
From: Chaitanya Kulkarni @ 2021-03-03 7:52 UTC
To: Kanchan Joshi
Cc: [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On 3/2/21 23:22, Kanchan Joshi wrote:
> -void nvme_execute_passthru_rq(struct request *rq)
> +void nvme_execute_passthru_rq_common(struct request *rq,
> + rq_end_io_fn *done)
> {
> struct nvme_command *cmd = nvme_req(rq)->cmd;
> struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
> @@ -1135,9 +1136,17 @@ void nvme_execute_passthru_rq(struct request *rq)
> u32 effects;
>
> effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
> - blk_execute_rq(disk, rq, 0);
> + if (!done)
> + blk_execute_rq(disk, rq, 0);
> + else
> + blk_execute_rq_nowait(disk, rq, 0, done);
> nvme_passthru_end(ctrl, effects);
This needs a detailed explanation in order to prove the correctness.
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Kanchan Joshi @ 2021-03-04 10:55 UTC
To: Chaitanya Kulkarni
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Thu, Mar 4, 2021 at 12:22 AM Chaitanya Kulkarni
<[email protected]> wrote:
>
> On 3/2/21 23:22, Kanchan Joshi wrote:
> > + switch (bcmd->ioctl_cmd) {
> > + case NVME_IOCTL_IO_CMD:
> > + ret = nvme_user_cmd(ns->ctrl, ns, argp, ioucmd);
> > + break;
> > + default:
> > + ret = -ENOTTY;
> > + }
> Switch for one case? Why not use if/else?
Indeed, I should have used that. I had started off with more than one,
and retracted later.
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Kanchan Joshi @ 2021-03-04 11:01 UTC
To: Chaitanya Kulkarni
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Thu, Mar 4, 2021 at 3:14 AM Chaitanya Kulkarni
<[email protected]> wrote:
>
> On 3/2/21 23:22, Kanchan Joshi wrote:
> > + if (!ioucmd)
> > + cptr = &c;
> > + else {
> > + /*for async - allocate cmd dynamically */
> > + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
> > + if (!cptr)
> > + return -ENOMEM;
> > + }
> > +
> > + memset(cptr, 0, sizeof(c));
> Why not kzalloc and remove memset() ?
Yes sure. Ideally I want to get rid of the allocation cost. Perhaps
employing kmem_cache/mempool can help. Do you think there is a better
way?
* Re: [RFC 2/3] nvme: passthrough helper with callback
From: Kanchan Joshi @ 2021-03-04 11:13 UTC
To: Chaitanya Kulkarni
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Wed, Mar 3, 2021 at 8:52 PM Chaitanya Kulkarni
<[email protected]> wrote:
>
> On 3/2/21 23:22, Kanchan Joshi wrote:
> > -void nvme_execute_passthru_rq(struct request *rq)
> > +void nvme_execute_passthru_rq_common(struct request *rq,
> > + rq_end_io_fn *done)
> > {
> > struct nvme_command *cmd = nvme_req(rq)->cmd;
> > struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
> > @@ -1135,9 +1136,17 @@ void nvme_execute_passthru_rq(struct request *rq)
> > u32 effects;
> >
> > effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
> > - blk_execute_rq(disk, rq, 0);
> > + if (!done)
> > + blk_execute_rq(disk, rq, 0);
> > + else
> > + blk_execute_rq_nowait(disk, rq, 0, done);
> > nvme_passthru_end(ctrl, effects);
>
> This needs a detailed explanation in order to prove the correctness.
Do you see something wrong here?
blk_execute_rq() employs the same helper (i.e. the nowait one) and uses
an additional completion variable to make it sync.
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Chaitanya Kulkarni @ 2021-03-04 22:59 UTC
To: Kanchan Joshi
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On 3/4/21 03:01, Kanchan Joshi wrote:
> On Thu, Mar 4, 2021 at 3:14 AM Chaitanya Kulkarni
> <[email protected]> wrote:
>> On 3/2/21 23:22, Kanchan Joshi wrote:
>>> + if (!ioucmd)
>>> + cptr = &c;
>>> + else {
>>> + /*for async - allocate cmd dynamically */
>>> + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
>>> + if (!cptr)
>>> + return -ENOMEM;
>>> + }
>>> +
>>> + memset(cptr, 0, sizeof(c));
>> Why not kzalloc and remove memset() ?
> Yes sure. Ideally I want to get rid of the allocation cost. Perhaps
> employing kmem_cache/mempool can help. Do you think there is a better
> way?
>
Is this a hot path?
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Jens Axboe @ 2021-03-05 1:46 UTC
To: Chaitanya Kulkarni, Kanchan Joshi
Cc: Kanchan Joshi, [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On 3/4/21 3:59 PM, Chaitanya Kulkarni wrote:
> On 3/4/21 03:01, Kanchan Joshi wrote:
>> On Thu, Mar 4, 2021 at 3:14 AM Chaitanya Kulkarni
>> <[email protected]> wrote:
>>> On 3/2/21 23:22, Kanchan Joshi wrote:
>>>> + if (!ioucmd)
>>>> + cptr = &c;
>>>> + else {
>>>> + /*for async - allocate cmd dynamically */
>>>> + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
>>>> + if (!cptr)
>>>> + return -ENOMEM;
>>>> + }
>>>> +
>>>> + memset(cptr, 0, sizeof(c));
>>> Why not kzalloc and remove memset() ?
>> Yes sure. Ideally I want to get rid of the allocation cost. Perhaps
>> employing kmem_cache/mempool can help. Do you think there is a better
>> way?
>>
>
> Is this a hot path?
It's command issue, and for a bypass (kind of) solution. It's most
definitely the hot path.
--
Jens Axboe
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Keith Busch @ 2021-03-05 2:41 UTC
To: Kanchan Joshi
Cc: Chaitanya Kulkarni, Kanchan Joshi, [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Thu, Mar 04, 2021 at 04:31:11PM +0530, Kanchan Joshi wrote:
> On Thu, Mar 4, 2021 at 3:14 AM Chaitanya Kulkarni
> <[email protected]> wrote:
> >
> > On 3/2/21 23:22, Kanchan Joshi wrote:
> > > + if (!ioucmd)
> > > + cptr = &c;
> > > + else {
> > > + /*for async - allocate cmd dynamically */
> > > + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
> > > + if (!cptr)
> > > + return -ENOMEM;
> > > + }
> > > +
> > > + memset(cptr, 0, sizeof(c));
> > Why not kzalloc and remove memset() ?
>
> Yes sure. Ideally I want to get rid of the allocation cost. Perhaps
> employing kmem_cache/mempool can help. Do you think there is a better
> way?
I'll need to think on this to consider if the memory cost is worth it
(8b to 64b), but you could replace nvme_request's 'struct nvme_command'
pointer with the struct itself and not have to allocate anything per IO.
An added bonus is that sync and async handling become more the same.
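A minimal sketch of what that would look like (surrounding fields
abridged from drivers/nvme/host/nvme.h of this era; treat the exact
layout as an assumption):

struct nvme_request {
	struct nvme_command	cmd;	/* was: struct nvme_command *cmd; */
	union nvme_result	result;
	u8			retries;
	u8			flags;
	u16			status;
	struct nvme_ctrl	*ctrl;
};

With the command embedded, both the sync and async paths build it in
place and nothing needs to be allocated or freed per IO.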
* Re: [RFC 2/3] nvme: passthrough helper with callback
From: Chaitanya Kulkarni @ 2021-03-05 4:14 UTC
To: Kanchan Joshi
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On 3/4/21 03:14, Kanchan Joshi wrote:
> On Wed, Mar 3, 2021 at 8:52 PM Chaitanya Kulkarni
> <[email protected]> wrote:
>> On 3/2/21 23:22, Kanchan Joshi wrote:
>>> -void nvme_execute_passthru_rq(struct request *rq)
>>> +void nvme_execute_passthru_rq_common(struct request *rq,
>>> + rq_end_io_fn *done)
>>> {
>>> struct nvme_command *cmd = nvme_req(rq)->cmd;
>>> struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>>> @@ -1135,9 +1136,17 @@ void nvme_execute_passthru_rq(struct request *rq)
>>> u32 effects;
>>>
>>> effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
>>> - blk_execute_rq(disk, rq, 0);
>>> + if (!done)
>>> + blk_execute_rq(disk, rq, 0);
>>> + else
>>> + blk_execute_rq_nowait(disk, rq, 0, done);
>>> nvme_passthru_end(ctrl, effects);
>> This needs a detailed explanation in order to prove the correctness.
> Do you see something wrong here?
> blk_execute_rq() employs the same helper (i.e. nowait one) and uses
> additional completion-variable to make it sync.
>
There is no guarantee that the command will finish between the call to
blk_execute_rq_nowait() and nvme_passthru_end(), is there?
* Re: [RFC 2/3] nvme: passthrough helper with callback
From: Kanchan Joshi @ 2021-03-05 10:40 UTC
To: Chaitanya Kulkarni
Cc: Kanchan Joshi, [email protected], [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Fri, Mar 5, 2021 at 9:44 AM Chaitanya Kulkarni
<[email protected]> wrote:
>
> On 3/4/21 03:14, Kanchan Joshi wrote:
> > On Wed, Mar 3, 2021 at 8:52 PM Chaitanya Kulkarni
> > <[email protected]> wrote:
> >> On 3/2/21 23:22, Kanchan Joshi wrote:
> >>> -void nvme_execute_passthru_rq(struct request *rq)
> >>> +void nvme_execute_passthru_rq_common(struct request *rq,
> >>> + rq_end_io_fn *done)
> >>> {
> >>> struct nvme_command *cmd = nvme_req(rq)->cmd;
> >>> struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
> >>> @@ -1135,9 +1136,17 @@ void nvme_execute_passthru_rq(struct request *rq)
> >>> u32 effects;
> >>>
> >>> effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
> >>> - blk_execute_rq(disk, rq, 0);
> >>> + if (!done)
> >>> + blk_execute_rq(disk, rq, 0);
> >>> + else
> >>> + blk_execute_rq_nowait(disk, rq, 0, done);
> >>> nvme_passthru_end(ctrl, effects);
> >> This needs a detailed explanation in order to prove the correctness.
> > Do you see something wrong here?
> > blk_execute_rq() employs the same helper (i.e. the nowait one) and uses
> > an additional completion variable to make it sync.
> >
>
> There is no guarantee that the command will finish between the call to
> blk_execute_rq_nowait() and nvme_passthru_end(), is there?
I see your concern now, thanks for elaborating.
I will move nvme_passthru_end() to the point where completion actually
occurs, and have it processed via a workqueue.
Perhaps I should also send a micro-optimization patch that calls
nvme_passthru_end() only for non-zero command effects; currently we do
three checks there.
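A rough sketch of that rework, with invented nvme_pt_* names, assuming
nvme_passthru_end() may sleep and therefore cannot run from the
IRQ-context end_io:

struct nvme_pt_end_work {
	struct work_struct work;
	struct nvme_ctrl *ctrl;
	u32 effects;
};

static void nvme_pt_end_work_fn(struct work_struct *w)
{
	struct nvme_pt_end_work *pw =
		container_of(w, struct nvme_pt_end_work, work);

	nvme_passthru_end(pw->ctrl, pw->effects);	/* process context */
	kfree(pw);
}

/* called from the async completion path instead of nvme_passthru_end() */
static void nvme_defer_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
{
	struct nvme_pt_end_work *pw = kmalloc(sizeof(*pw), GFP_ATOMIC);

	if (!pw)
		return;
	pw->ctrl = ctrl;
	pw->effects = effects;
	INIT_WORK(&pw->work, nvme_pt_end_work_fn);
	queue_work(nvme_wq, &pw->work);
}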
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: Kanchan Joshi @ 2021-03-05 10:44 UTC
To: Keith Busch
Cc: Chaitanya Kulkarni, Kanchan Joshi, [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
On Fri, Mar 5, 2021 at 8:11 AM Keith Busch <[email protected]> wrote:
>
> On Thu, Mar 04, 2021 at 04:31:11PM +0530, Kanchan Joshi wrote:
> > On Thu, Mar 4, 2021 at 3:14 AM Chaitanya Kulkarni
> > <[email protected]> wrote:
> > >
> > > On 3/2/21 23:22, Kanchan Joshi wrote:
> > > > + if (!ioucmd)
> > > > + cptr = &c;
> > > > + else {
> > > > + /*for async - allocate cmd dynamically */
> > > > + cptr = kmalloc(sizeof(struct nvme_command), GFP_KERNEL);
> > > > + if (!cptr)
> > > > + return -ENOMEM;
> > > > + }
> > > > +
> > > > + memset(cptr, 0, sizeof(c));
> > > Why not kzalloc and remove memset() ?
> >
> > Yes sure. Ideally I want to get rid of the allocation cost. Perhaps
> > employing kmem_cache/mempool can help. Do you think there is a better
> > way?
>
> I'll need to think on this to consider if the memory cost is worth it
> (8b to 64b), but you could replace nvme_request's 'struct nvme_command'
> pointer with the struct itself and not have to allocate anything per IO.
> An added bonus is that sync and async handling become more the same.
Indeed, thanks for this. I will fold that change in as a prep patch in
the next version.
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: hch @ 2021-03-05 13:17 UTC
To: Keith Busch
Cc: Kanchan Joshi, Chaitanya Kulkarni, Kanchan Joshi, [email protected],
[email protected], [email protected],
[email protected], [email protected],
[email protected]
On Fri, Mar 05, 2021 at 11:41:33AM +0900, Keith Busch wrote:
> I'll need to think on this to consider if the memory cost is worth it
> (8b to 64b), but you could replace nvme_request's 'struct nvme_command'
> pointer with the struct itself and not have to allocate anything per IO.
> An added bonus is that sync and async handling become more the same.
This would solve a lot of mess with the async passthrough, and also
more closely match what is done in SCSI. In theory we could use a
cut down version without the data and metadata pointers (just 40 bytes),
but I'm not sure that is really worth it given that we then need to
rearrange things in the I/O path.
* Re: [RFC 3/3] nvme: wire up support for async passthrough
From: hch @ 2021-03-05 13:22 UTC
To: Kanchan Joshi
Cc: Chaitanya Kulkarni, Kanchan Joshi, [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected],
[email protected]
On Thu, Mar 04, 2021 at 04:25:41PM +0530, Kanchan Joshi wrote:
> On Thu, Mar 4, 2021 at 12:22 AM Chaitanya Kulkarni
> <[email protected]> wrote:
> >
> > On 3/2/21 23:22, Kanchan Joshi wrote:
> > > + switch (bcmd->ioctl_cmd) {
> > > + case NVME_IOCTL_IO_CMD:
> > > + ret = nvme_user_cmd(ns->ctrl, ns, argp, ioucmd);
> > > + break;
> > > + default:
> > > + ret = -ENOTTY;
> > > + }
> > Switch for one case? Why not use if/else?
>
> Indeed, I should have used that. I had started off with more than one,
> and retracted later.
I have to say I really do like the switch for ioctl handlers, as they
are designed as a multiplexer, and nothing screams multiplexer more
than a switch statement.