* [PATCH v1 0/3] virtio-blk: add io_uring passthrough support.
@ 2024-12-18 9:24 Ferry Meng
2024-12-18 9:24 ` [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support Ferry Meng
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Ferry Meng @ 2024-12-18 9:24 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang, linux-block, Jens Axboe,
virtualization
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Joseph Qi, Jeffle Xu, Ferry Meng
This patchset implements io_uring passthrough support in the virtio-blk
driver, bypassing the VFS and part of the block layer logic, resulting in
lower submission latency and increased flexibility when using virtio-blk.
This version only supports READ/WRITE operations (vectored and
non-vectored); others such as discard or zoned ops are not yet covered,
so the userspace-facing struct stays simple.
struct virtblk_uring_cmd {
__u32 type;
__u32 ioprio;
__u64 sector;
/* above is related to out_hdr */
__u64 data; // user buffer addr or iovec base addr.
__u32 data_len; // user buffer length or iovec count.
__u32 flag; // only contains whether a vector rw or not.
};
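As a quick illustration, below is a rough userspace sketch of driving this
interface (not part of the series; it assumes liburing, the uapi additions
from patch 2, and an illustrative /dev/vdac0 chardev path; error handling
is trimmed):

	#include <fcntl.h>
	#include <liburing.h>
	#include <linux/virtio_blk.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;
		struct virtblk_uring_cmd *cmd;
		void *buf;
		int fd;

		fd = open("/dev/vdac0", O_RDWR);	/* chardev for vda */
		if (fd < 0 || posix_memalign(&buf, 4096, 4096))
			return 1;

		/* the 32-byte cmd exceeds a normal SQE's 16-byte cmd area */
		if (io_uring_queue_init(8, &ring,
					IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
			return 1;

		sqe = io_uring_get_sqe(&ring);
		memset(sqe, 0, 2 * sizeof(*sqe));	/* one 128-byte SQE */
		sqe->opcode = IORING_OP_URING_CMD;
		sqe->fd = fd;
		sqe->cmd_op = VIRTBLK_URING_CMD_IO;

		cmd = (struct virtblk_uring_cmd *)sqe->cmd;
		cmd->type = VIRTIO_BLK_T_IN;		/* read */
		cmd->sector = 0;			/* 512-byte units */
		cmd->data = (unsigned long)buf;
		cmd->data_len = 4096;

		io_uring_submit(&ring);
		io_uring_wait_cqe(&ring, &cqe);
		printf("status=%d\n", cqe->res);	/* 0 == VIRTIO_BLK_S_OK */
		io_uring_cqe_seen(&ring, cqe);
		return 0;
	}

VIRTBLK_URING_CMD_IO_VEC works the same way, with 'data' pointing at an
iovec array and 'data_len' carrying the iovec count.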
To test this patch series, I changed fio's code:
1. Added virtio-blk support to engines/io_uring.c.
2. Added virtio-blk support to the t/io_uring.c testing tool.
Link: https://github.com/jdmfr/fio
===========
Performance
===========
Using t/io_uring-vblk, virtio-blk access via uring-cmd passthrough scales
better than regular block device access (example below: virtio-blk under
QEMU, 1-depth fio):
(passthru) read: IOPS=17.2k, BW=67.4MiB/s (70.6MB/s)
slat (nsec): min=2907, max=43592, avg=3981.87, stdev=595.10
clat (usec): min=38, max=285, avg=53.47, stdev= 8.28
lat (usec): min=44, max=288, avg=57.45, stdev= 8.28
(block) read: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)
slat (nsec): min=3408, max=35366, avg=5102.17, stdev=790.79
clat (usec): min=35, max=343, avg=59.63, stdev=10.26
lat (usec): min=43, max=349, avg=64.73, stdev=10.21
Testing the virtio-blk device with fio using 'ioengine=io_uring_cmd'
and 'ioengine=io_uring' also demonstrates improvements in submission latency.
(passthru) taskset -c 0 t/io_uring-vblk -b4096 -d8 -c4 -s4 -p0 -F1 -B0 -O0 -n1 -u1 /dev/vdcc0
IOPS=189.80K, BW=741MiB/s, IOS/call=4/3
IOPS=187.68K, BW=733MiB/s, IOS/call=4/3
(block) taskset -c 0 t/io_uring-vblk -b4096 -d8 -c4 -s4 -p0 -F1 -B0 -O0 -n1 -u0 /dev/vdc
IOPS=101.51K, BW=396MiB/s, IOS/call=4/3
IOPS=100.01K, BW=390MiB/s, IOS/call=4/4
=======
Changes
=======
Changes in v1:
--------------
* remove virtblk_is_write() helper
* fix rq_flags type definition (blk_opf_t), add REQ_ALLOC_CACHE flag.
https://lore.kernel.org/io-uring/[email protected]/
RFC discussion:
---------------
https://lore.kernel.org/io-uring/[email protected]/
Ferry Meng (3):
virtio-blk: add virtio-blk chardev support.
virtio-blk: add uring_cmd support for I/O passthru on chardev.
virtio-blk: add uring_cmd iopoll support.
drivers/block/virtio_blk.c | 320 +++++++++++++++++++++++++++++++-
include/uapi/linux/virtio_blk.h | 16 ++
2 files changed, 331 insertions(+), 5 deletions(-)
--
2.43.5
* [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support.
2024-12-18 9:24 [PATCH v1 0/3] virtio-blk: add io_uring passthrough support Ferry Meng
@ 2024-12-18 9:24 ` Ferry Meng
2024-12-30 7:47 ` Joseph Qi
2024-12-18 9:24 ` [PATCH v1 2/3] virtio-blk: add uring_cmd support for I/O passthru on chardev Ferry Meng
2024-12-18 9:24 ` [PATCH v1 3/3] virtio-blk: add uring_cmd iopoll support Ferry Meng
2 siblings, 1 reply; 6+ messages in thread
From: Ferry Meng @ 2024-12-18 9:24 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang, linux-block, Jens Axboe,
virtualization
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Joseph Qi, Jeffle Xu, Ferry Meng
Introduce a per-device character interface for each virtio-blk device,
facilitating access to block devices through io_uring I/O passthrough.
Previously, the virtio_blk struct was allocated with plain kmalloc()
(GFP_KERNEL). For char device support, the embedded cdev kobject must be
zero before initialization, so allocate the struct with kzalloc()
(i.e. with __GFP_ZERO) instead.
The character devices are named
- /dev/vdXc0
Currently, only one character interface is created per actual virtblk
device, even if the disk has been partitioned.
Signed-off-by: Ferry Meng <[email protected]>
---
drivers/block/virtio_blk.c | 84 +++++++++++++++++++++++++++++++++++++-
1 file changed, 83 insertions(+), 1 deletion(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 194417abc105..3487aaa67514 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -17,6 +17,7 @@
#include <linux/numa.h>
#include <linux/vmalloc.h>
#include <uapi/linux/virtio_ring.h>
+#include <linux/cdev.h>
#define PART_BITS 4
#define VQ_NAME_LEN 16
@@ -25,6 +26,8 @@
/* The maximum number of sg elements that fit into a virtqueue */
#define VIRTIO_BLK_MAX_SG_ELEMS 32768
+#define VIRTBLK_MINORS (1U << MINORBITS)
+
#ifdef CONFIG_ARCH_NO_SG_CHAIN
#define VIRTIO_BLK_INLINE_SG_CNT 0
#else
@@ -45,6 +48,10 @@ MODULE_PARM_DESC(poll_queues, "The number of dedicated virtqueues for polling I/
static int major;
static DEFINE_IDA(vd_index_ida);
+static DEFINE_IDA(vd_chr_minor_ida);
+static dev_t vd_chr_devt;
+static struct class *vd_chr_class;
+
static struct workqueue_struct *virtblk_wq;
struct virtio_blk_vq {
@@ -84,6 +91,10 @@ struct virtio_blk {
/* For zoned device */
unsigned int zone_sectors;
+
+ /* For passthrough cmd */
+ struct cdev cdev;
+ struct device cdev_device;
};
struct virtblk_req {
@@ -1239,6 +1250,55 @@ static const struct blk_mq_ops virtio_mq_ops = {
.poll = virtblk_poll,
};
+static void virtblk_cdev_rel(struct device *dev)
+{
+ ida_free(&vd_chr_minor_ida, MINOR(dev->devt));
+}
+
+static void virtblk_cdev_del(struct cdev *cdev, struct device *cdev_device)
+{
+ cdev_device_del(cdev, cdev_device);
+ put_device(cdev_device);
+}
+
+static int virtblk_cdev_add(struct virtio_blk *vblk,
+ const struct file_operations *fops)
+{
+ struct cdev *cdev = &vblk->cdev;
+ struct device *cdev_device = &vblk->cdev_device;
+ int minor, ret;
+
+ minor = ida_alloc(&vd_chr_minor_ida, GFP_KERNEL);
+ if (minor < 0)
+ return minor;
+
+ cdev_device->parent = &vblk->vdev->dev;
+ cdev_device->devt = MKDEV(MAJOR(vd_chr_devt), minor);
+ cdev_device->class = vd_chr_class;
+ cdev_device->release = virtblk_cdev_rel;
+ device_initialize(cdev_device);
+
+ ret = dev_set_name(cdev_device, "%sc0", vblk->disk->disk_name);
+ if (ret)
+ goto err;
+
+ cdev_init(cdev, fops);
+ ret = cdev_device_add(cdev, cdev_device);
+ if (ret) {
+ put_device(cdev_device);
+ goto err;
+ }
+ return ret;
+
+err:
+ ida_free(&vd_chr_minor_ida, minor);
+ return ret;
+}
+
+static const struct file_operations virtblk_chr_fops = {
+ .owner = THIS_MODULE,
+};
+
static unsigned int virtblk_queue_depth;
module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
@@ -1456,7 +1516,7 @@ static int virtblk_probe(struct virtio_device *vdev)
goto out;
index = err;
- vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
+ vdev->priv = vblk = kzalloc(sizeof(*vblk), GFP_KERNEL);
if (!vblk) {
err = -ENOMEM;
goto out_free_index;
@@ -1544,6 +1604,10 @@ static int virtblk_probe(struct virtio_device *vdev)
if (err)
goto out_cleanup_disk;
+ err = virtblk_cdev_add(vblk, &virtblk_chr_fops);
+ if (err)
+ goto out_cleanup_disk;
+
return 0;
out_cleanup_disk:
@@ -1568,6 +1632,8 @@ static void virtblk_remove(struct virtio_device *vdev)
/* Make sure no work handler is accessing the device. */
flush_work(&vblk->config_work);
+ virtblk_cdev_del(&vblk->cdev, &vblk->cdev_device);
+
del_gendisk(vblk->disk);
blk_mq_free_tag_set(&vblk->tag_set);
@@ -1674,13 +1740,27 @@ static int __init virtio_blk_init(void)
goto out_destroy_workqueue;
}
+ error = alloc_chrdev_region(&vd_chr_devt, 0, VIRTBLK_MINORS,
+ "vblk-generic");
+ if (error < 0)
+ goto unregister_chrdev;
+
+ vd_chr_class = class_create("vblk-generic");
+ if (IS_ERR(vd_chr_class)) {
+ error = PTR_ERR(vd_chr_class);
+ goto unregister_chrdev;
+ }
+
error = register_virtio_driver(&virtio_blk);
if (error)
goto out_unregister_blkdev;
+
return 0;
out_unregister_blkdev:
unregister_blkdev(major, "virtblk");
+unregister_chrdev:
+ unregister_chrdev_region(vd_chr_devt, VIRTBLK_MINORS);
out_destroy_workqueue:
destroy_workqueue(virtblk_wq);
return error;
@@ -1690,7 +1770,9 @@ static void __exit virtio_blk_fini(void)
{
unregister_virtio_driver(&virtio_blk);
unregister_blkdev(major, "virtblk");
+ unregister_chrdev_region(vd_chr_devt, VIRTBLK_MINORS);
destroy_workqueue(virtblk_wq);
+ ida_destroy(&vd_chr_minor_ida);
}
module_init(virtio_blk_init);
module_exit(virtio_blk_fini);
--
2.43.5
* [PATCH v1 2/3] virtio-blk: add uring_cmd support for I/O passthru on chardev.
2024-12-18 9:24 [PATCH v1 0/3] virtio-blk: add io_uring passthrough support Ferry Meng
2024-12-18 9:24 ` [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support Ferry Meng
@ 2024-12-18 9:24 ` Ferry Meng
2024-12-30 8:00 ` Joseph Qi
2024-12-18 9:24 ` [PATCH v1 3/3] virtio-blk: add uring_cmd iopoll support Ferry Meng
2 siblings, 1 reply; 6+ messages in thread
From: Ferry Meng @ 2024-12-18 9:24 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang, linux-block, Jens Axboe,
virtualization
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Joseph Qi, Jeffle Xu, Ferry Meng
Add ->uring_cmd() support for the virtio-blk chardev (/dev/vdXc0).
According to the virtio spec, in addition to passing the 'hdr' info into
the kernel, we also need to pass the vaddr and data length of the 'iov'
required for the writev/readv ops.
Signed-off-by: Ferry Meng <[email protected]>
---
drivers/block/virtio_blk.c | 223 +++++++++++++++++++++++++++++++-
include/uapi/linux/virtio_blk.h | 16 +++
2 files changed, 235 insertions(+), 4 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 3487aaa67514..cd88cf939144 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -18,6 +18,9 @@
#include <linux/vmalloc.h>
#include <uapi/linux/virtio_ring.h>
#include <linux/cdev.h>
+#include <linux/io_uring/cmd.h>
+#include <linux/types.h>
+#include <linux/uio.h>
#define PART_BITS 4
#define VQ_NAME_LEN 16
@@ -54,6 +57,20 @@ static struct class *vd_chr_class;
static struct workqueue_struct *virtblk_wq;
+struct virtblk_uring_cmd_pdu {
+ struct request *req;
+ struct bio *bio;
+ int status;
+};
+
+struct virtblk_command {
+ struct virtio_blk_outhdr out_hdr;
+
+ __u64 data;
+ __u32 data_len;
+ __u32 flag;
+};
+
struct virtio_blk_vq {
struct virtqueue *vq;
spinlock_t lock;
@@ -122,6 +139,11 @@ struct virtblk_req {
struct scatterlist sg[];
};
+static void __user *virtblk_to_user_ptr(uintptr_t ptrval)
+{
+ return (void __user *)ptrval;
+}
+
static inline blk_status_t virtblk_result(u8 status)
{
switch (status) {
@@ -259,9 +281,6 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) && op_is_zone_mgmt(req_op(req)))
return BLK_STS_NOTSUPP;
- /* Set fields for all request types */
- vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
-
switch (req_op(req)) {
case REQ_OP_READ:
type = VIRTIO_BLK_T_IN;
@@ -309,9 +328,11 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
type = VIRTIO_BLK_T_ZONE_RESET_ALL;
break;
case REQ_OP_DRV_IN:
+ case REQ_OP_DRV_OUT:
/*
* Out header has already been prepared by the caller (virtblk_get_id()
- * or virtblk_submit_zone_report()), nothing to do here.
+ * virtblk_submit_zone_report() or io_uring passthrough cmd), nothing
+ * to do here.
*/
return 0;
default:
@@ -323,6 +344,7 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
vbr->in_hdr_len = in_hdr_len;
vbr->out_hdr.type = cpu_to_virtio32(vdev, type);
vbr->out_hdr.sector = cpu_to_virtio64(vdev, sector);
+ vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
if (type == VIRTIO_BLK_T_DISCARD || type == VIRTIO_BLK_T_WRITE_ZEROES ||
type == VIRTIO_BLK_T_SECURE_ERASE) {
@@ -832,6 +854,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
vbr = blk_mq_rq_to_pdu(req);
vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_GET_ID);
+ vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(req));
vbr->out_hdr.sector = 0;
err = blk_rq_map_kern(q, req, id_str, VIRTIO_BLK_ID_BYTES, GFP_KERNEL);
@@ -1250,6 +1273,197 @@ static const struct blk_mq_ops virtio_mq_ops = {
.poll = virtblk_poll,
};
+static inline struct virtblk_uring_cmd_pdu *virtblk_get_uring_cmd_pdu(
+ struct io_uring_cmd *ioucmd)
+{
+ return (struct virtblk_uring_cmd_pdu *)&ioucmd->pdu;
+}
+
+static void virtblk_uring_task_cb(struct io_uring_cmd *ioucmd,
+ unsigned int issue_flags)
+{
+ struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
+ struct virtblk_req *vbr = blk_mq_rq_to_pdu(pdu->req);
+ u64 result = 0;
+
+ if (pdu->bio)
+ blk_rq_unmap_user(pdu->bio);
+
+ /* currently result has no use, it should be zero as cqe->res */
+ io_uring_cmd_done(ioucmd, vbr->in_hdr.status, result, issue_flags);
+}
+
+static enum rq_end_io_ret virtblk_uring_cmd_end_io(struct request *req,
+ blk_status_t err)
+{
+ struct io_uring_cmd *ioucmd = req->end_io_data;
+ struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
+
+ /*
+ * For iopoll, complete it directly. Note that using the uring_cmd
+ * helper for this is safe only because we check blk_rq_is_poll().
+ * As that returns false if we're NOT on a polled queue, then it's
+ * safe to use the polled completion helper.
+ *
+ * Otherwise, move the completion to task work.
+ */
+ if (blk_rq_is_poll(req)) {
+ if (pdu->bio)
+ blk_rq_unmap_user(pdu->bio);
+ io_uring_cmd_iopoll_done(ioucmd, 0, pdu->status);
+ } else {
+ io_uring_cmd_do_in_task_lazy(ioucmd, virtblk_uring_task_cb);
+ }
+
+ return RQ_END_IO_FREE;
+}
+
+static struct virtblk_req *virtblk_req(struct request *req)
+{
+ return blk_mq_rq_to_pdu(req);
+}
+
+static inline enum req_op virtblk_req_op(const struct virtblk_uring_cmd *cmd)
+{
+ return (cmd->type & VIRTIO_BLK_T_OUT) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+}
+
+static struct request *virtblk_alloc_user_request(
+ struct request_queue *q, struct virtblk_command *cmd,
+ blk_opf_t rq_flags, blk_mq_req_flags_t blk_flags)
+{
+ struct request *req;
+
+ req = blk_mq_alloc_request(q, rq_flags, blk_flags);
+ if (IS_ERR(req))
+ return req;
+
+ req->rq_flags |= RQF_DONTPREP;
+ memcpy(&virtblk_req(req)->out_hdr, &cmd->out_hdr, sizeof(struct virtio_blk_outhdr));
+ return req;
+}
+
+static int virtblk_map_user_request(struct request *req, u64 ubuffer,
+ unsigned int bufflen, struct io_uring_cmd *ioucmd,
+ bool vec)
+{
+ struct request_queue *q = req->q;
+ struct virtio_blk *vblk = q->queuedata;
+ struct block_device *bdev = vblk ? vblk->disk->part0 : NULL;
+ struct bio *bio = NULL;
+ int ret;
+
+ if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
+ struct iov_iter iter;
+
+ /* fixedbufs is only for non-vectored io */
+ if (WARN_ON_ONCE(vec))
+ return -EINVAL;
+ ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
+ rq_data_dir(req), &iter, ioucmd);
+ if (ret < 0)
+ goto out;
+ ret = blk_rq_map_user_iov(q, req, NULL,
+ &iter, GFP_KERNEL);
+ } else {
+ ret = blk_rq_map_user_io(req, NULL,
+ virtblk_to_user_ptr(ubuffer),
+ bufflen, GFP_KERNEL, vec, 0,
+ 0, rq_data_dir(req));
+ }
+ if (ret)
+ goto out;
+
+ bio = req->bio;
+ if (bdev)
+ bio_set_dev(bio, bdev);
+ return 0;
+
+out:
+ blk_mq_free_request(req);
+ return ret;
+}
+
+static int virtblk_uring_cmd_io(struct virtio_blk *vblk,
+ struct io_uring_cmd *ioucmd, unsigned int issue_flags, bool vec)
+{
+ struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
+ const struct virtblk_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
+ struct request_queue *q = vblk->disk->queue;
+ struct virtblk_req *vbr;
+ struct virtblk_command d;
+ struct request *req;
+ blk_opf_t rq_flags = REQ_ALLOC_CACHE | virtblk_req_op(cmd);
+ blk_mq_req_flags_t blk_flags = 0;
+ int ret;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ d.out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, READ_ONCE(cmd->ioprio));
+ d.out_hdr.type = cpu_to_virtio32(vblk->vdev, READ_ONCE(cmd->type));
+ d.out_hdr.sector = cpu_to_virtio64(vblk->vdev, READ_ONCE(cmd->sector));
+ d.data = READ_ONCE(cmd->data);
+ d.data_len = READ_ONCE(cmd->data_len);
+
+ if (issue_flags & IO_URING_F_NONBLOCK) {
+ rq_flags |= REQ_NOWAIT;
+ blk_flags = BLK_MQ_REQ_NOWAIT;
+ }
+ if (issue_flags & IO_URING_F_IOPOLL)
+ rq_flags |= REQ_POLLED;
+
+ req = virtblk_alloc_user_request(q, &d, rq_flags, blk_flags);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ vbr = virtblk_req(req);
+ vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
+ if (d.data && d.data_len) {
+ ret = virtblk_map_user_request(req, d.data, d.data_len, ioucmd, vec);
+ if (ret)
+ return ret;
+ }
+
+ /* to free bio on completion, as req->bio will be null at that time */
+ pdu->bio = req->bio;
+ pdu->req = req;
+ req->end_io_data = ioucmd;
+ req->end_io = virtblk_uring_cmd_end_io;
+ blk_execute_rq_nowait(req, false);
+ return -EIOCBQUEUED;
+}
+
+
+static int virtblk_uring_cmd(struct virtio_blk *vblk, struct io_uring_cmd *ioucmd,
+ unsigned int issue_flags)
+{
+ int ret;
+
+ BUILD_BUG_ON(sizeof(struct virtblk_uring_cmd_pdu) > sizeof(ioucmd->pdu));
+
+ switch (ioucmd->cmd_op) {
+ case VIRTBLK_URING_CMD_IO:
+ ret = virtblk_uring_cmd_io(vblk, ioucmd, issue_flags, false);
+ break;
+ case VIRTBLK_URING_CMD_IO_VEC:
+ ret = virtblk_uring_cmd_io(vblk, ioucmd, issue_flags, true);
+ break;
+ default:
+ ret = -ENOTTY;
+ }
+
+ return ret;
+}
+
+static int virtblk_chr_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
+{
+ struct virtio_blk *vblk = container_of(file_inode(ioucmd->file)->i_cdev,
+ struct virtio_blk, cdev);
+
+ return virtblk_uring_cmd(vblk, ioucmd, issue_flags);
+}
+
static void virtblk_cdev_rel(struct device *dev)
{
ida_free(&vd_chr_minor_ida, MINOR(dev->devt));
@@ -1297,6 +1511,7 @@ static int virtblk_cdev_add(struct virtio_blk *vblk,
static const struct file_operations virtblk_chr_fops = {
.owner = THIS_MODULE,
+ .uring_cmd = virtblk_chr_uring_cmd,
};
static unsigned int virtblk_queue_depth;
diff --git a/include/uapi/linux/virtio_blk.h b/include/uapi/linux/virtio_blk.h
index 3744e4da1b2a..93b6e1b5b9a4 100644
--- a/include/uapi/linux/virtio_blk.h
+++ b/include/uapi/linux/virtio_blk.h
@@ -313,6 +313,22 @@ struct virtio_scsi_inhdr {
};
#endif /* !VIRTIO_BLK_NO_LEGACY */
+struct virtblk_uring_cmd {
+ /* VIRTIO_BLK_T* */
+ __u32 type;
+ /* io priority. */
+ __u32 ioprio;
+ /* Sector (ie. 512 byte offset) */
+ __u64 sector;
+
+ __u64 data;
+ __u32 data_len;
+ __u32 flag;
+};
+
+#define VIRTBLK_URING_CMD_IO 1
+#define VIRTBLK_URING_CMD_IO_VEC 2
+
/* And this is the final byte of the write scatter-gather list. */
#define VIRTIO_BLK_S_OK 0
#define VIRTIO_BLK_S_IOERR 1
--
2.43.5
* [PATCH v1 3/3] virtio-blk: add uring_cmd iopoll support.
2024-12-18 9:24 [PATCH v1 0/3] virtio-blk: add io_uring passthrough support Ferry Meng
2024-12-18 9:24 ` [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support Ferry Meng
2024-12-18 9:24 ` [PATCH v1 2/3] virtio-blk: add uring_cmd support for I/O passthru on chardev Ferry Meng
@ 2024-12-18 9:24 ` Ferry Meng
2 siblings, 0 replies; 6+ messages in thread
From: Ferry Meng @ 2024-12-18 9:24 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang, linux-block, Jens Axboe,
virtualization
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Joseph Qi, Jeffle Xu, Ferry Meng
Add uring_cmd iopoll support for virtblk: a ->uring_cmd_iopoll() hook
that is called during completion polling.
Signed-off-by: Ferry Meng <[email protected]>
---
drivers/block/virtio_blk.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index cd88cf939144..cd4c74e06107 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1464,6 +1464,18 @@ static int virtblk_chr_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue
return virtblk_uring_cmd(vblk, ioucmd, issue_flags);
}
+static int virtblk_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
+ struct io_comp_batch *iob,
+ unsigned int poll_flags)
+{
+ struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
+ struct request *req = pdu->req;
+
+ if (req && blk_rq_is_poll(req))
+ return blk_rq_poll(req, iob, poll_flags);
+ return 0;
+}
+
static void virtblk_cdev_rel(struct device *dev)
{
ida_free(&vd_chr_minor_ida, MINOR(dev->devt));
@@ -1512,6 +1524,7 @@ static int virtblk_cdev_add(struct virtio_blk *vblk,
static const struct file_operations virtblk_chr_fops = {
.owner = THIS_MODULE,
.uring_cmd = virtblk_chr_uring_cmd,
+ .uring_cmd_iopoll = virtblk_chr_uring_cmd_iopoll,
};
static unsigned int virtblk_queue_depth;
--
2.43.5
* Re: [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support.
2024-12-18 9:24 ` [PATCH v1 1/3] virtio-blk: add virtio-blk chardev support Ferry Meng
@ 2024-12-30 7:47 ` Joseph Qi
0 siblings, 0 replies; 6+ messages in thread
From: Joseph Qi @ 2024-12-30 7:47 UTC (permalink / raw)
To: Ferry Meng
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Jeffle Xu, Michael S. Tsirkin, Jason Wang, linux-block,
Jens Axboe, virtualization
On 2024/12/18 17:24, Ferry Meng wrote:
> Introduce a per-device character interface for each virtio-blk device,
> facilitating access to block devices through io_uring I/O passthrough.
>
> Previously, the virtio_blk struct was allocated with plain kmalloc()
> (GFP_KERNEL). For char device support, the embedded cdev kobject must be
> zero before initialization, so allocate the struct with kzalloc()
> (i.e. with __GFP_ZERO) instead.
>
> The character devices are named
>
> - /dev/vdXc0
>
> Currently, only one character interface is created per actual virtblk
> device, even if the disk has been partitioned.
>
> Signed-off-by: Ferry Meng <[email protected]>
> ---
> drivers/block/virtio_blk.c | 84 +++++++++++++++++++++++++++++++++++++-
> 1 file changed, 83 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 194417abc105..3487aaa67514 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -17,6 +17,7 @@
> #include <linux/numa.h>
> #include <linux/vmalloc.h>
> #include <uapi/linux/virtio_ring.h>
> +#include <linux/cdev.h>
>
> #define PART_BITS 4
> #define VQ_NAME_LEN 16
> @@ -25,6 +26,8 @@
> /* The maximum number of sg elements that fit into a virtqueue */
> #define VIRTIO_BLK_MAX_SG_ELEMS 32768
>
> +#define VIRTBLK_MINORS (1U << MINORBITS)
> +
> #ifdef CONFIG_ARCH_NO_SG_CHAIN
> #define VIRTIO_BLK_INLINE_SG_CNT 0
> #else
> @@ -45,6 +48,10 @@ MODULE_PARM_DESC(poll_queues, "The number of dedicated virtqueues for polling I/
> static int major;
> static DEFINE_IDA(vd_index_ida);
>
> +static DEFINE_IDA(vd_chr_minor_ida);
> +static dev_t vd_chr_devt;
> +static struct class *vd_chr_class;
> +
> static struct workqueue_struct *virtblk_wq;
>
> struct virtio_blk_vq {
> @@ -84,6 +91,10 @@ struct virtio_blk {
>
> /* For zoned device */
> unsigned int zone_sectors;
> +
> + /* For passthrough cmd */
> + struct cdev cdev;
> + struct device cdev_device;
> };
>
> struct virtblk_req {
> @@ -1239,6 +1250,55 @@ static const struct blk_mq_ops virtio_mq_ops = {
> .poll = virtblk_poll,
> };
>
> +static void virtblk_cdev_rel(struct device *dev)
> +{
> + ida_free(&vd_chr_minor_ida, MINOR(dev->devt));
> +}
> +
> +static void virtblk_cdev_del(struct cdev *cdev, struct device *cdev_device)
> +{
> + cdev_device_del(cdev, cdev_device);
> + put_device(cdev_device);
> +}
> +
> +static int virtblk_cdev_add(struct virtio_blk *vblk,
> + const struct file_operations *fops)
> +{
> + struct cdev *cdev = &vblk->cdev;
> + struct device *cdev_device = &vblk->cdev_device;
> + int minor, ret;
> +
> + minor = ida_alloc(&vd_chr_minor_ida, GFP_KERNEL);
> + if (minor < 0)
> + return minor;
> +
> + cdev_device->parent = &vblk->vdev->dev;
> + cdev_device->devt = MKDEV(MAJOR(vd_chr_devt), minor);
> + cdev_device->class = vd_chr_class;
> + cdev_device->release = virtblk_cdev_rel;
> + device_initialize(cdev_device);
> +
> + ret = dev_set_name(cdev_device, "%sc0", vblk->disk->disk_name);
> + if (ret)
> + goto err;
> +
> + cdev_init(cdev, fops);
> + ret = cdev_device_add(cdev, cdev_device);
> + if (ret) {
> + put_device(cdev_device);
> + goto err;
put_device() will call cdev_device->release(), which already frees the
minor from vd_chr_minor_ida, so the ida_free() at the err label becomes a
double free on this path.
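Something along these lines (untested sketch) would avoid that:

	cdev_init(cdev, fops);
	ret = cdev_device_add(cdev, cdev_device);
	if (ret) {
		/* release() frees the minor; don't fall through to err */
		put_device(cdev_device);
		return ret;
	}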
> + }
> + return ret;
> +
> +err:
> + ida_free(&vd_chr_minor_ida, minor);
> + return ret;
> +}
> +
> +static const struct file_operations virtblk_chr_fops = {
> + .owner = THIS_MODULE,
> +};
> +
> static unsigned int virtblk_queue_depth;
> module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
>
> @@ -1456,7 +1516,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> goto out;
> index = err;
>
> - vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
> + vdev->priv = vblk = kzalloc(sizeof(*vblk), GFP_KERNEL);
> if (!vblk) {
> err = -ENOMEM;
> goto out_free_index;
> @@ -1544,6 +1604,10 @@ static int virtblk_probe(struct virtio_device *vdev)
> if (err)
> goto out_cleanup_disk;
>
> + err = virtblk_cdev_add(vblk, &virtblk_chr_fops);
> + if (err)
> + goto out_cleanup_disk;
This error path is missing del_gendisk() for the disk that was already
added above.
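Untested sketch of what I mean:

	err = virtblk_cdev_add(vblk, &virtblk_chr_fops);
	if (err) {
		del_gendisk(vblk->disk);
		goto out_cleanup_disk;
	}

(or introduce a dedicated out_del_gendisk label)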
> +
> return 0;
>
> out_cleanup_disk:
> @@ -1568,6 +1632,8 @@ static void virtblk_remove(struct virtio_device *vdev)
> /* Make sure no work handler is accessing the device. */
> flush_work(&vblk->config_work);
>
> + virtblk_cdev_del(&vblk->cdev, &vblk->cdev_device);
> +
> del_gendisk(vblk->disk);
> blk_mq_free_tag_set(&vblk->tag_set);
>
> @@ -1674,13 +1740,27 @@ static int __init virtio_blk_init(void)
> goto out_destroy_workqueue;
> }
>
> + error = alloc_chrdev_region(&vd_chr_devt, 0, VIRTBLK_MINORS,
> + "vblk-generic");
> + if (error < 0)
> + goto unregister_chrdev;
On failure here, the blkdev registered above should be unregistered;
jumping to unregister_chrdev skips that (and frees a chrdev region that
was never allocated).
> +
> + vd_chr_class = class_create("vblk-generic");
> + if (IS_ERR(vd_chr_class)) {
> + error = PTR_ERR(vd_chr_class);
> + goto unregister_chrdev;
> + }
> +
> error = register_virtio_driver(&virtio_blk);
> if (error)
> goto out_unregister_blkdev;
You've missed destroying vd_chr_class.
> +
> return 0;
>
> out_unregister_blkdev:
> unregister_blkdev(major, "virtblk");
> +unregister_chrdev:
> + unregister_chrdev_region(vd_chr_devt, VIRTBLK_MINORS);
The out labels should be re-ordered, e.g. move this up.
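For example (sketch only, label names illustrative):

	error = register_virtio_driver(&virtio_blk);
	if (error)
		goto out_destroy_class;

	return 0;

out_destroy_class:
	class_destroy(vd_chr_class);
out_unregister_chrdev:
	unregister_chrdev_region(vd_chr_devt, VIRTBLK_MINORS);
out_unregister_blkdev:
	unregister_blkdev(major, "virtblk");
out_destroy_workqueue:
	destroy_workqueue(virtblk_wq);
	return error;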
> out_destroy_workqueue:
> destroy_workqueue(virtblk_wq);
> return error;
> @@ -1690,7 +1770,9 @@ static void __exit virtio_blk_fini(void)
> {
> unregister_virtio_driver(&virtio_blk);
> unregister_blkdev(major, "virtblk");
Also missed destroying vd_chr_class.
Thanks,
Joseph
> + unregister_chrdev_region(vd_chr_devt, VIRTBLK_MINORS);
> destroy_workqueue(virtblk_wq);
> + ida_destroy(&vd_chr_minor_ida);
> }
> module_init(virtio_blk_init);
> module_exit(virtio_blk_fini);
* Re: [PATCH v1 2/3] virtio-blk: add uring_cmd support for I/O passthru on chardev.
2024-12-18 9:24 ` [PATCH v1 2/3] virtio-blk: add uring_cmd support for I/O passthru on chardev Ferry Meng
@ 2024-12-30 8:00 ` Joseph Qi
0 siblings, 0 replies; 6+ messages in thread
From: Joseph Qi @ 2024-12-30 8:00 UTC (permalink / raw)
To: Ferry Meng, Michael S. Tsirkin, Jason Wang, linux-block,
Jens Axboe, virtualization
Cc: linux-kernel, io-uring, Stefan Hajnoczi, Christoph Hellwig,
Jeffle Xu
On 2024/12/18 17:24, Ferry Meng wrote:
> Add ->uring_cmd() support for the virtio-blk chardev (/dev/vdXc0).
> According to the virtio spec, in addition to passing the 'hdr' info into
> the kernel, we also need to pass the vaddr and data length of the 'iov'
> required for the writev/readv ops.
>
> Signed-off-by: Ferry Meng <[email protected]>
> ---
> drivers/block/virtio_blk.c | 223 +++++++++++++++++++++++++++++++-
> include/uapi/linux/virtio_blk.h | 16 +++
> 2 files changed, 235 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 3487aaa67514..cd88cf939144 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -18,6 +18,9 @@
> #include <linux/vmalloc.h>
> #include <uapi/linux/virtio_ring.h>
> #include <linux/cdev.h>
> +#include <linux/io_uring/cmd.h>
> +#include <linux/types.h>
> +#include <linux/uio.h>
>
> #define PART_BITS 4
> #define VQ_NAME_LEN 16
> @@ -54,6 +57,20 @@ static struct class *vd_chr_class;
>
> static struct workqueue_struct *virtblk_wq;
>
> +struct virtblk_uring_cmd_pdu {
> + struct request *req;
> + struct bio *bio;
> + int status;
> +};
> +
> +struct virtblk_command {
> + struct virtio_blk_outhdr out_hdr;
> +
> + __u64 data;
> + __u32 data_len;
> + __u32 flag;
> +};
> +
> struct virtio_blk_vq {
> struct virtqueue *vq;
> spinlock_t lock;
> @@ -122,6 +139,11 @@ struct virtblk_req {
> struct scatterlist sg[];
> };
>
> +static void __user *virtblk_to_user_ptr(uintptr_t ptrval)
> +{
Refer to nvme_to_user_ptr(), the logic for compat syscall is missing.
> + return (void __user *)ptrval;
> +}
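For reference, the nvme helper looks roughly like this (quoting
drivers/nvme/host/ioctl.c from memory, so double-check):

	static void __user *nvme_to_user_ptr(uintptr_t ptrval)
	{
		if (in_compat_syscall())
			ptrval = (compat_uptr_t)ptrval;
		return (void __user *)ptrval;
	}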
> +
> static inline blk_status_t virtblk_result(u8 status)
> {
> switch (status) {
> @@ -259,9 +281,6 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
> if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) && op_is_zone_mgmt(req_op(req)))
> return BLK_STS_NOTSUPP;
>
> - /* Set fields for all request types */
> - vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
> -
> switch (req_op(req)) {
> case REQ_OP_READ:
> type = VIRTIO_BLK_T_IN;
> @@ -309,9 +328,11 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
> type = VIRTIO_BLK_T_ZONE_RESET_ALL;
> break;
> case REQ_OP_DRV_IN:
> + case REQ_OP_DRV_OUT:
> /*
> * Out header has already been prepared by the caller (virtblk_get_id()
> - * or virtblk_submit_zone_report()), nothing to do here.
> + * virtblk_submit_zone_report() or io_uring passthrough cmd), nothing
> + * to do here.
> */
> return 0;
> default:
> @@ -323,6 +344,7 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
> vbr->in_hdr_len = in_hdr_len;
> vbr->out_hdr.type = cpu_to_virtio32(vdev, type);
> vbr->out_hdr.sector = cpu_to_virtio64(vdev, sector);
> + vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
>
> if (type == VIRTIO_BLK_T_DISCARD || type == VIRTIO_BLK_T_WRITE_ZEROES ||
> type == VIRTIO_BLK_T_SECURE_ERASE) {
> @@ -832,6 +854,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
> vbr = blk_mq_rq_to_pdu(req);
> vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
> vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_GET_ID);
> + vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(req));
> vbr->out_hdr.sector = 0;
>
> err = blk_rq_map_kern(q, req, id_str, VIRTIO_BLK_ID_BYTES, GFP_KERNEL);
> @@ -1250,6 +1273,197 @@ static const struct blk_mq_ops virtio_mq_ops = {
> .poll = virtblk_poll,
> };
>
> +static inline struct virtblk_uring_cmd_pdu *virtblk_get_uring_cmd_pdu(
> + struct io_uring_cmd *ioucmd)
> +{
> + return (struct virtblk_uring_cmd_pdu *)&ioucmd->pdu;
> +}
> +
> +static void virtblk_uring_task_cb(struct io_uring_cmd *ioucmd,
> + unsigned int issue_flags)
> +{
> + struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
> + struct virtblk_req *vbr = blk_mq_rq_to_pdu(pdu->req);
> + u64 result = 0;
> +
> + if (pdu->bio)
> + blk_rq_unmap_user(pdu->bio);
> +
> + /* currently result has no use, it should be zero as cqe->res */
> + io_uring_cmd_done(ioucmd, vbr->in_hdr.status, result, issue_flags);
> +}
> +
> +static enum rq_end_io_ret virtblk_uring_cmd_end_io(struct request *req,
> + blk_status_t err)
> +{
> + struct io_uring_cmd *ioucmd = req->end_io_data;
> + struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
> +
> + /*
> + * For iopoll, complete it directly. Note that using the uring_cmd
> + * helper for this is safe only because we check blk_rq_is_poll().
> + * As that returns false if we're NOT on a polled queue, then it's
> + * safe to use the polled completion helper.
> + *
> + * Otherwise, move the completion to task work.
> + */
> + if (blk_rq_is_poll(req)) {
> + if (pdu->bio)
> + blk_rq_unmap_user(pdu->bio);
> + io_uring_cmd_iopoll_done(ioucmd, 0, pdu->status);
> + } else {
> + io_uring_cmd_do_in_task_lazy(ioucmd, virtblk_uring_task_cb);
> + }
> +
> + return RQ_END_IO_FREE;
> +}
> +
> +static struct virtblk_req *virtblk_req(struct request *req)
> +{
> + return blk_mq_rq_to_pdu(req);
> +}
I don't think this helper is necessary; you've already open-coded
blk_mq_rq_to_pdu() in other places.
> +
> +static inline enum req_op virtblk_req_op(const struct virtblk_uring_cmd *cmd)
> +{
> + return (cmd->type & VIRTIO_BLK_T_OUT) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
> +}
> +
> +static struct request *virtblk_alloc_user_request(
> + struct request_queue *q, struct virtblk_command *cmd,
> + blk_opf_t rq_flags, blk_mq_req_flags_t blk_flags)
> +{
> + struct request *req;
> +
> + req = blk_mq_alloc_request(q, rq_flags, blk_flags);
> + if (IS_ERR(req))
> + return req;
> +
> + req->rq_flags |= RQF_DONTPREP;
Do we have to do some other initialization? e.g. REQ_POLLED.
> + memcpy(&virtblk_req(req)->out_hdr, &cmd->out_hdr, sizeof(struct virtio_blk_outhdr));
> + return req;
> +}
> +
> +static int virtblk_map_user_request(struct request *req, u64 ubuffer,
> + unsigned int bufflen, struct io_uring_cmd *ioucmd,
> + bool vec)
> +{
> + struct request_queue *q = req->q;
> + struct virtio_blk *vblk = q->queuedata;
> + struct block_device *bdev = vblk ? vblk->disk->part0 : NULL;
> + struct bio *bio = NULL;
> + int ret;
> +
> + if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
> + struct iov_iter iter;
> +
> + /* fixedbufs is only for non-vectored io */
> + if (WARN_ON_ONCE(vec))
> + return -EINVAL;
> + ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
> + rq_data_dir(req), &iter, ioucmd);
> + if (ret < 0)
> + goto out;
> + ret = blk_rq_map_user_iov(q, req, NULL,
> + &iter, GFP_KERNEL);
> + } else {
> + ret = blk_rq_map_user_io(req, NULL,
> + virtblk_to_user_ptr(ubuffer),
> + bufflen, GFP_KERNEL, vec, 0,
> + 0, rq_data_dir(req));
> + }
> + if (ret)
> + goto out;
> +
> + bio = req->bio;
> + if (bdev)
> + bio_set_dev(bio, bdev);
> + return 0;
> +
> +out:
> + blk_mq_free_request(req);
> + return ret;
> +}
> +
> +static int virtblk_uring_cmd_io(struct virtio_blk *vblk,
> + struct io_uring_cmd *ioucmd, unsigned int issue_flags, bool vec)
> +{
> + struct virtblk_uring_cmd_pdu *pdu = virtblk_get_uring_cmd_pdu(ioucmd);
> + const struct virtblk_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
> + struct request_queue *q = vblk->disk->queue;
> + struct virtblk_req *vbr;
> + struct virtblk_command d;
Or use 'c' for command?
> + struct request *req;
> + blk_opf_t rq_flags = REQ_ALLOC_CACHE | virtblk_req_op(cmd);
> + blk_mq_req_flags_t blk_flags = 0;
> + int ret;
> +
> + if (!capable(CAP_SYS_ADMIN))
> + return -EACCES;
> +
> + d.out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, READ_ONCE(cmd->ioprio));
> + d.out_hdr.type = cpu_to_virtio32(vblk->vdev, READ_ONCE(cmd->type));
> + d.out_hdr.sector = cpu_to_virtio64(vblk->vdev, READ_ONCE(cmd->sector));
> + d.data = READ_ONCE(cmd->data);
> + d.data_len = READ_ONCE(cmd->data_len);
> +
> + if (issue_flags & IO_URING_F_NONBLOCK) {
> + rq_flags |= REQ_NOWAIT;
> + blk_flags = BLK_MQ_REQ_NOWAIT;
> + }
> + if (issue_flags & IO_URING_F_IOPOLL)
> + rq_flags |= REQ_POLLED;
> +
> + req = virtblk_alloc_user_request(q, &d, rq_flags, blk_flags);
> + if (IS_ERR(req))
> + return PTR_ERR(req);
> +
> + vbr = virtblk_req(req);
> + vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
> + if (d.data && d.data_len) {
> + ret = virtblk_map_user_request(req, d.data, d.data_len, ioucmd, vec);
> + if (ret)
> + return ret;
> + }
> +
> + /* to free bio on completion, as req->bio will be null at that time */
> + pdu->bio = req->bio;
> + pdu->req = req;
> + req->end_io_data = ioucmd;
> + req->end_io = virtblk_uring_cmd_end_io;
> + blk_execute_rq_nowait(req, false);
> + return -EIOCBQUEUED;
> +}
> +
> +
> +static int virtblk_uring_cmd(struct virtio_blk *vblk, struct io_uring_cmd *ioucmd,
> + unsigned int issue_flags)
> +{
> + int ret;
> +
> + BUILD_BUG_ON(sizeof(struct virtblk_uring_cmd_pdu) > sizeof(ioucmd->pdu));
io_uring passthrough requires big SQE/CQE support, so a check for that is
warranted here.
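Something like nvme's guard would do (a sketch mirroring
nvme_uring_cmd_checks()):

	/* virtblk passthrough requires big SQE/CQE support */
	if ((issue_flags & (IO_URING_F_SQE128 | IO_URING_F_CQE32)) !=
	    (IO_URING_F_SQE128 | IO_URING_F_CQE32))
		return -EOPNOTSUPP;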
Thanks,
Joseph
> +
> + switch (ioucmd->cmd_op) {
> + case VIRTBLK_URING_CMD_IO:
> + ret = virtblk_uring_cmd_io(vblk, ioucmd, issue_flags, false);
> + break;
> + case VIRTBLK_URING_CMD_IO_VEC:
> + ret = virtblk_uring_cmd_io(vblk, ioucmd, issue_flags, true);
> + break;
> + default:
> + ret = -ENOTTY;
> + }
> +
> + return ret;
> +}
> +
> +static int virtblk_chr_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
> +{
> + struct virtio_blk *vblk = container_of(file_inode(ioucmd->file)->i_cdev,
> + struct virtio_blk, cdev);
> +
> + return virtblk_uring_cmd(vblk, ioucmd, issue_flags);
> +}
> +
> static void virtblk_cdev_rel(struct device *dev)
> {
> ida_free(&vd_chr_minor_ida, MINOR(dev->devt));
> @@ -1297,6 +1511,7 @@ static int virtblk_cdev_add(struct virtio_blk *vblk,
>
> static const struct file_operations virtblk_chr_fops = {
> .owner = THIS_MODULE,
> + .uring_cmd = virtblk_chr_uring_cmd,
> };
>
> static unsigned int virtblk_queue_depth;
> diff --git a/include/uapi/linux/virtio_blk.h b/include/uapi/linux/virtio_blk.h
> index 3744e4da1b2a..93b6e1b5b9a4 100644
> --- a/include/uapi/linux/virtio_blk.h
> +++ b/include/uapi/linux/virtio_blk.h
> @@ -313,6 +313,22 @@ struct virtio_scsi_inhdr {
> };
> #endif /* !VIRTIO_BLK_NO_LEGACY */
>
> +struct virtblk_uring_cmd {
> + /* VIRTIO_BLK_T* */
> + __u32 type;
> + /* io priority. */
> + __u32 ioprio;
> + /* Sector (ie. 512 byte offset) */
> + __u64 sector;
> +
> + __u64 data;
> + __u32 data_len;
> + __u32 flag;
> +};
> +
> +#define VIRTBLK_URING_CMD_IO 1
> +#define VIRTBLK_URING_CMD_IO_VEC 2
> +
> /* And this is the final byte of the write scatter-gather list. */
> #define VIRTIO_BLK_S_OK 0
> #define VIRTIO_BLK_S_IOERR 1