From: Max Gurtovoy <[email protected]>
To: Jens Axboe <[email protected]>, <[email protected]>,
	<[email protected]>
Cc: Hannes Reinecke <[email protected]>
Subject: Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Thu, 16 Dec 2021 15:02:24 +0200
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>


On 12/15/2021 6:24 PM, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.
>
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
>
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.
>
> Reviewed-by: Hannes Reinecke <[email protected]>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
>   drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
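
Just to spell out how I read the contract from patch 1/4 (this is not the
actual blk-mq code, only a rough sketch of the caller side; plug->mq_list
and the exact fallback flow are my assumption):

	if (q->mq_ops->queue_rqs) {
		/* driver submits whatever it can prep ... */
		q->mq_ops->queue_rqs(&plug->mq_list);
		/* ... and leaves the rest on the list */
		if (rq_list_empty(plug->mq_list))
			return;
	}
	/* anything still on the list goes through the normal queue_rq path */

So any request the driver leaves behind (prep failure etc.) is retried
individually by the block layer, as the commit message says.
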
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 6be6b1ab4285..197aa45ef7ef 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>   	return BLK_STS_OK;
>   }
>   
> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
> +{
> +	spin_lock(&nvmeq->sq_lock);
> +	while (!rq_list_empty(*rqlist)) {
> +		struct request *req = rq_list_pop(rqlist);
> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +
> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
> +			nvmeq->sq_tail = 0;
> +	}
> +	nvme_write_sq_db(nvmeq, true);
> +	spin_unlock(&nvmeq->sq_lock);
> +}
> +
> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
> +{
> +	/*
> +	 * We should not need to do this, but we're still using this to
> +	 * ensure we can drain requests on a dying queue.
> +	 */
> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
> +		return false;
> +	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
> +		return false;
> +
> +	req->mq_hctx->tags->rqs[req->tag] = req;
> +	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
> +}
> +
> +static void nvme_queue_rqs(struct request **rqlist)
> +{
> +	struct request *req = rq_list_peek(rqlist), *prev = NULL;
> +	struct request *requeue_list = NULL;
> +
> +	do {
> +		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
> +
> +		if (!nvme_prep_rq_batch(nvmeq, req)) {
> +			/* detach 'req' and add to remainder list */
> +			if (prev)
> +				prev->rq_next = req->rq_next;
> +			rq_list_add(&requeue_list, req);
> +		} else {
> +			prev = req;
> +		}
> +
> +		req = rq_list_next(req);
> +		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
> +			/* detach rest of list, and submit */
> +			prev->rq_next = NULL;

If req == NULL and prev == NULL we'll get a NULL dereference at prev->rq_next here.

I think this can happen in the first iteration, since prev is still NULL if the first request fails nvme_prep_rq_batch().

Correct me if I'm wrong..
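
Maybe something like the below would avoid it (completely untested, just a
sketch of where I'd put the guard). If prev is still NULL at that point
there is nothing prepared to submit, so we can skip nvme_submit_cmds() and
only move the list head forward:

		req = rq_list_next(req);
		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
			if (prev) {
				/* detach rest of list, and submit */
				prev->rq_next = NULL;
				nvme_submit_cmds(nvmeq, rqlist);
			}
			*rqlist = req;
		}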

> +			nvme_submit_cmds(nvmeq, rqlist);
> +			*rqlist = req;
> +		}
> +	} while (req);
> +
> +	*rqlist = requeue_list;
> +}
> +
>   static __always_inline void nvme_pci_unmap_rq(struct request *req)
>   {
>   	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> @@ -1678,6 +1738,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
>   
>   static const struct blk_mq_ops nvme_mq_ops = {
>   	.queue_rq	= nvme_queue_rq,
> +	.queue_rqs	= nvme_queue_rqs,
>   	.complete	= nvme_pci_complete_rq,
>   	.commit_rqs	= nvme_commit_rqs,
>   	.init_hctx	= nvme_init_hctx,

