* [PATCH 1/2] block: add request polling helper
From: Keith Busch @ 2023-05-30 17:23 UTC
To: linux-block, io-uring, linux-nvme, hch, axboe
Cc: sagi, joshi.k, Keith Busch

From: Keith Busch <[email protected]>

This will be used by drivers that allocate polling requests. Its
interface does not require a bio, so it can skip the overhead
associated with polling one.

Signed-off-by: Keith Busch <[email protected]>
---
 block/blk-mq.c         | 29 ++++++++++++++++++++++++++---
 include/linux/blk-mq.h |  2 ++
 2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f6dad0886a2fa..3c12c476e3a5c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4740,10 +4740,9 @@ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
 }
 EXPORT_SYMBOL_GPL(blk_mq_update_nr_hw_queues);
 
-int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
-		unsigned int flags)
+static int blk_hctx_poll(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+			 struct io_comp_batch *iob, unsigned int flags)
 {
-	struct blk_mq_hw_ctx *hctx = blk_qc_to_hctx(q, cookie);
 	long state = get_current_state();
 	int ret;
 
@@ -4768,6 +4767,30 @@ int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *
 	return 0;
 }
 
+int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
+		unsigned int flags)
+{
+	return blk_hctx_poll(q, blk_qc_to_hctx(q, cookie), iob, flags);
+}
+
+int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
+		unsigned int poll_flags)
+{
+	struct request_queue *q = rq->q;
+	int ret;
+
+	if (!blk_rq_is_poll(rq))
+		return 0;
+	if (!percpu_ref_tryget(&q->q_usage_counter))
+		return 0;
+
+	ret = blk_hctx_poll(q, rq->mq_hctx, iob, poll_flags);
+	blk_queue_exit(q);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(blk_rq_poll);
+
 unsigned int blk_mq_rq_cpu(struct request *rq)
 {
 	return rq->mq_ctx->cpu;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 06caacd77ed66..579818fa1f91d 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -722,6 +722,8 @@ int blk_mq_alloc_sq_tag_set(struct blk_mq_tag_set *set,
 void blk_mq_free_tag_set(struct blk_mq_tag_set *set);
 
 void blk_mq_free_request(struct request *rq);
+int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
+		unsigned int poll_flags);
 
 bool blk_mq_queue_inflight(struct request_queue *q);
 
-- 
2.34.1
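A usage note, not part of the patch above: the sketch below shows the shape of what a caller gets from the new helper — busy-polling a passthrough request it has already submitted, with no bio and no blk_qc_t cookie involved. The function name is made up, and it assumes the request was allocated with REQ_POLLED on a queue that actually has poll queues, and that its end_io handler keeps the request alive (returns RQ_END_IO_NONE) while signalling a completion.

#include <linux/blk-mq.h>
#include <linux/completion.h>
#include <linux/sched.h>

/*
 * Illustrative sketch only: spin on the request's own polled hctx until
 * the end_io handler signals @done.  If allocation fell back to an
 * interrupt-driven hctx, the caller would wait_for_completion() instead.
 */
static void example_poll_for_completion(struct request *rq,
					struct completion *done)
{
	if (!blk_rq_is_poll(rq))
		return;

	do {
		blk_rq_poll(rq, NULL, 0);	/* no bio, no cookie-to-hctx lookup */
		cond_resched();
	} while (!completion_done(done));
}

Patch 2/2 drives the same helper from the io_uring iopoll path rather than a blocking loop, but the entry point is identical: hand it the request, and it polls that request's own hardware queue.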
* [PATCH 2/2] nvme: improved uring polling
From: Keith Busch @ 2023-05-30 17:23 UTC
To: linux-block, io-uring, linux-nvme, hch, axboe
Cc: sagi, joshi.k, Keith Busch

From: Keith Busch <[email protected]>

Drivers can now poll requests directly, so use that. We just need to
ensure the driver's request was allocated from a polled hctx, so a
driver-private flag is added to the io_uring_cmd flags.

The first advantage is that unshared and multipath namespaces can use
the same polling callback, and multipath is guaranteed to poll the same
queue the command was submitted on. Previously, multipath polling might
check a different path and poll the wrong queue.

The other advantage is that we no longer need a bio payload in order to
poll, allowing commands like 'flush' and 'write zeroes' to be submitted
on the same high-priority queue as read and write commands.

Request-based polling also skips the unnecessary bio overhead and the
xarray hctx lookup, since we already have the request.

Signed-off-by: Keith Busch <[email protected]>
---
 drivers/nvme/host/ioctl.c     | 68 +++++++++--------------------------
 drivers/nvme/host/multipath.c |  2 +-
 drivers/nvme/host/nvme.h      |  2 --
 include/uapi/linux/io_uring.h |  2 ++
 4 files changed, 20 insertions(+), 54 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index d24ea2e051564..3fa9a50433f18 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -505,7 +505,6 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 {
 	struct io_uring_cmd *ioucmd = req->end_io_data;
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
-	void *cookie = READ_ONCE(ioucmd->cookie);
 
 	req->bio = pdu->bio;
 	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
@@ -518,9 +517,10 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 	 * For iopoll, complete it directly.
 	 * Otherwise, move the completion to task work.
 	 */
-	if (cookie != NULL && blk_rq_is_poll(req))
+	if (blk_rq_is_poll(req)) {
+		WRITE_ONCE(ioucmd->cookie, NULL);
 		nvme_uring_task_cb(ioucmd, IO_URING_F_UNLOCKED);
-	else
+	} else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
 
 	return RQ_END_IO_FREE;
@@ -531,7 +531,6 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
 {
 	struct io_uring_cmd *ioucmd = req->end_io_data;
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
-	void *cookie = READ_ONCE(ioucmd->cookie);
 
 	req->bio = pdu->bio;
 	pdu->req = req;
@@ -540,9 +539,10 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
 	 * For iopoll, complete it directly.
 	 * Otherwise, move the completion to task work.
 	 */
-	if (cookie != NULL && blk_rq_is_poll(req))
+	if (blk_rq_is_poll(req)) {
+		WRITE_ONCE(ioucmd->cookie, NULL);
 		nvme_uring_task_meta_cb(ioucmd, IO_URING_F_UNLOCKED);
-	else
+	} else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_meta_cb);
 
 	return RQ_END_IO_NONE;
@@ -599,7 +599,6 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	if (issue_flags & IO_URING_F_IOPOLL)
 		rq_flags |= REQ_POLLED;
 
-retry:
 	req = nvme_alloc_user_request(q, &c, rq_flags, blk_flags);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
@@ -613,17 +612,11 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			return ret;
 	}
 
-	if (issue_flags & IO_URING_F_IOPOLL && rq_flags & REQ_POLLED) {
-		if (unlikely(!req->bio)) {
-			/* we can't poll this, so alloc regular req instead */
-			blk_mq_free_request(req);
-			rq_flags &= ~REQ_POLLED;
-			goto retry;
-		} else {
-			WRITE_ONCE(ioucmd->cookie, req->bio);
-			req->bio->bi_opf |= REQ_POLLED;
-		}
+	if (blk_rq_is_poll(req)) {
+		ioucmd->flags |= IORING_URING_CMD_POLLED;
+		WRITE_ONCE(ioucmd->cookie, req);
 	}
+
 	/* to free bio on completion, as req->bio will be null at that time */
 	pdu->bio = req->bio;
 	pdu->meta_len = d.metadata_len;
@@ -782,18 +775,16 @@ int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
 		struct io_comp_batch *iob,
 		unsigned int poll_flags)
 {
-	struct bio *bio;
+	struct request *req;
 	int ret = 0;
-	struct nvme_ns *ns;
-	struct request_queue *q;
+
+	if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
+		return 0;
 
 	rcu_read_lock();
-	bio = READ_ONCE(ioucmd->cookie);
-	ns = container_of(file_inode(ioucmd->file)->i_cdev,
-			struct nvme_ns, cdev);
-	q = ns->queue;
-	if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio && bio->bi_bdev)
-		ret = bio_poll(bio, iob, poll_flags);
+	req = READ_ONCE(ioucmd->cookie);
+	if (req && blk_rq_is_poll(req))
+		ret = blk_rq_poll(req, iob, poll_flags);
 	rcu_read_unlock();
 	return ret;
 }
@@ -885,31 +876,6 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
 	srcu_read_unlock(&head->srcu, srcu_idx);
 	return ret;
 }
-
-int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
-		struct io_comp_batch *iob,
-		unsigned int poll_flags)
-{
-	struct cdev *cdev = file_inode(ioucmd->file)->i_cdev;
-	struct nvme_ns_head *head = container_of(cdev, struct nvme_ns_head, cdev);
-	int srcu_idx = srcu_read_lock(&head->srcu);
-	struct nvme_ns *ns = nvme_find_path(head);
-	struct bio *bio;
-	int ret = 0;
-	struct request_queue *q;
-
-	if (ns) {
-		rcu_read_lock();
-		bio = READ_ONCE(ioucmd->cookie);
-		q = ns->queue;
-		if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio
-				&& bio->bi_bdev)
-			ret = bio_poll(bio, iob, poll_flags);
-		rcu_read_unlock();
-	}
-	srcu_read_unlock(&head->srcu, srcu_idx);
-	return ret;
-}
 #endif /* CONFIG_NVME_MULTIPATH */
 
 int nvme_dev_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 9171452e2f6d4..f17be1c72f4de 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -470,7 +470,7 @@ static const struct file_operations nvme_ns_head_chr_fops = {
 	.unlocked_ioctl	= nvme_ns_head_chr_ioctl,
 	.compat_ioctl	= compat_ptr_ioctl,
 	.uring_cmd	= nvme_ns_head_chr_uring_cmd,
-	.uring_cmd_iopoll = nvme_ns_head_chr_uring_cmd_iopoll,
+	.uring_cmd_iopoll = nvme_ns_chr_uring_cmd_iopoll,
 };
 
 static int nvme_add_ns_head_cdev(struct nvme_ns_head *head)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1e..ca4ea89333660 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -847,8 +847,6 @@ long nvme_dev_ioctl(struct file *file, unsigned int cmd,
 		unsigned long arg);
 int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
 		struct io_comp_batch *iob, unsigned int poll_flags);
-int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
-		struct io_comp_batch *iob, unsigned int poll_flags);
 int nvme_ns_chr_uring_cmd(struct io_uring_cmd *ioucmd,
 		unsigned int issue_flags);
 int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 0716cb17e4360..f8d6ffe78073e 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -232,8 +232,10 @@ enum io_uring_op {
  * sqe->uring_cmd_flags
  * IORING_URING_CMD_FIXED	use registered buffer; pass this flag
  *				along with setting sqe->buf_index.
+ * IORING_URING_CMD_POLLED	driver use only
  */
 #define IORING_URING_CMD_FIXED	(1U << 0)
+#define IORING_URING_CMD_POLLED	(1U << 31)
 
 
 /*
-- 
2.34.1
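For anyone wiring up another uring_cmd driver the same way, the pattern above reduces to three touch points, sketched below. This is a distillation for readability, not new kernel code; the example_* names are placeholders, and only the io_uring_cmd cookie/flags fields, IORING_URING_CMD_POLLED, and the blk_rq_* helpers come from this series.

#include <linux/blk-mq.h>
#include <linux/io_uring.h>
#include <linux/rcupdate.h>

/* 1) Submission: if the request landed on a polled hctx, remember it. */
static void example_arm_iopoll(struct io_uring_cmd *ioucmd, struct request *req)
{
	if (blk_rq_is_poll(req)) {
		ioucmd->flags |= IORING_URING_CMD_POLLED;
		WRITE_ONCE(ioucmd->cookie, req);
	}
}

/* 2) End_io: clear the cookie before completing the command inline. */
static void example_disarm_iopoll(struct io_uring_cmd *ioucmd)
{
	WRITE_ONCE(ioucmd->cookie, NULL);
}

/* 3) ->uring_cmd_iopoll: poll the stashed request, if there is one. */
static int example_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
				    struct io_comp_batch *iob,
				    unsigned int poll_flags)
{
	struct request *req;
	int ret = 0;

	if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
		return 0;

	rcu_read_lock();
	req = READ_ONCE(ioucmd->cookie);
	if (req && blk_rq_is_poll(req))
		ret = blk_rq_poll(req, iob, poll_flags);
	rcu_read_unlock();
	return ret;
}

Note where the new flag lives: it is defined in the uapi header but marked "driver use only" and parked at the top bit, well away from the flags userspace is expected to pass in sqe->uring_cmd_flags.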
* Re: [PATCH 2/2] nvme: improved uring polling
From: Kanchan Joshi @ 2023-05-31 10:10 UTC
To: Keith Busch
Cc: linux-block, io-uring, linux-nvme, hch, axboe, sagi, Keith Busch

On Tue, May 30, 2023 at 10:23:43AM -0700, Keith Busch wrote:
>From: Keith Busch <[email protected]>

This may need to be wired up against the code after
https://lore.kernel.org/linux-nvme/[email protected]/

Looks good otherwise.
Reviewed-by: Kanchan Joshi <[email protected]>
* Re: [PATCH 2/2] nvme: improved uring polling
From: Christoph Hellwig @ 2023-05-31 13:02 UTC
To: Keith Busch
Cc: linux-block, io-uring, linux-nvme, hch, axboe, sagi, joshi.k, Keith Busch

Looks good to me, but I'll wait for the rebase for a formal review.
* Re: [PATCH 2/2] nvme: improved uring polling
From: Sagi Grimberg @ 2023-06-05 23:04 UTC
To: Keith Busch, linux-block, io-uring, linux-nvme, hch, axboe
Cc: joshi.k, Keith Busch

Looks nice,
Reviewed-by: Sagi Grimberg <[email protected]>

I'll look again after the rebase.
* Re: [PATCH 1/2] block: add request polling helper
From: Kanchan Joshi @ 2023-05-31 9:45 UTC
To: Keith Busch
Cc: linux-block, io-uring, linux-nvme, hch, axboe, sagi, Keith Busch

On Tue, May 30, 2023 at 10:23:42AM -0700, Keith Busch wrote:
>From: Keith Busch <[email protected]>
>
>This will be used by drivers that allocate polling requests. Its
>interface does not require a bio, so it can skip the overhead
>associated with polling one.
>
>Signed-off-by: Keith Busch <[email protected]>

Looks good.
Reviewed-by: Kanchan Joshi <[email protected]>
* Re: [PATCH 1/2] block: add request polling helper
From: Christoph Hellwig @ 2023-05-31 13:01 UTC
To: Keith Busch
Cc: linux-block, io-uring, linux-nvme, hch, axboe, sagi, joshi.k, Keith Busch

> +int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,

It would be nice to fix the overly long line while you're at it.

> +		unsigned int flags)
> +{
> +	return blk_hctx_poll(q, blk_qc_to_hctx(q, cookie), iob, flags);
> +}

But looking at the two callers of blk_mq_poll, shouldn't one use
rq->mq_hctx to get the hctx anyway instead of doing repeated
blk_qc_to_hctx in the polling loop?  We could then just open code
blk_qc_to_hctx in the remaining one.

The rest looks good to me.
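To make the suggestion concrete, one possible reading — an assumption about the reviewer's intent, not a confirmed follow-up: the synchronous passthrough polling loop in blk-mq.c already holds the request, so it could call the hctx-based helper directly instead of translating a cookie on every iteration, leaving bio_poll() as the only blk_qc_to_hctx() user, which could then be open coded. A sketch against blk-mq.c as of this series, assuming blk_rq_poll_completion() is still that second caller:

/* Hypothetical rework of the synchronous passthrough poll loop in blk-mq.c. */
static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
{
	do {
		/* was: blk_mq_poll(rq->q, blk_rq_to_qc(rq), NULL, 0); */
		blk_hctx_poll(rq->q, rq->mq_hctx, NULL, 0);
		cond_resched();
	} while (!completion_done(wait));
}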
* Re: [PATCH 1/2] block: add request polling helper
From: Sagi Grimberg @ 2023-06-05 23:03 UTC
To: Keith Busch, linux-block, io-uring, linux-nvme, hch, axboe
Cc: joshi.k, Keith Busch

Reviewed-by: Sagi Grimberg <[email protected]>