From: Ming Lei <[email protected]>
To: Kanchan Joshi <[email protected]>
Cc: [email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected]
Subject: Re: [PATCH v4 2/5] block: wire-up support for passthrough plugging
Date: Thu, 5 May 2022 22:21:15 +0800
Message-ID: <YnPdW7T8JVRsNeno@T590>
In-Reply-To: <[email protected]>

On Thu, May 05, 2022 at 11:36:13AM +0530, Kanchan Joshi wrote:
> From: Jens Axboe <[email protected]>
> 
> Add support for plugging in the passthrough path. When plugging is
> enabled, requests are added to a plug instead of being dispatched
> directly to the driver. When the plug is finished, the whole batch is
> dispatched via ->queue_rqs, which turns out to be more efficient than
> dispatching one request at a time via ->queue_rq.
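For context, ->queue_rqs hands the driver the whole plugged list in a
single call, so per-batch work such as ringing the doorbell can happen
once rather than per request. A minimal sketch of a consumer
(foo_queue_one() and foo_ring_doorbell() are hypothetical driver
helpers; rq_list_pop() is the real list helper):

	static void foo_queue_rqs(struct request **rqlist)
	{
		struct request *rq;

		/* submit each plugged request to the hardware queue */
		while ((rq = rq_list_pop(rqlist)))
			foo_queue_one(rq);

		/* notify the device once for the whole batch */
		foo_ring_doorbell();
	}

->queue_rq, by contrast, is invoked, and the device notified, once per
request.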
> 
> Signed-off-by: Jens Axboe <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> ---
>  block/blk-mq.c | 73 +++++++++++++++++++++++++++-----------------------
>  1 file changed, 39 insertions(+), 34 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 84d749511f55..2cf011b57cf9 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2340,6 +2340,40 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>  	blk_mq_hctx_mark_pending(hctx, ctx);
>  }
>  
> +/*
> + * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
> + * queues. This is important for md arrays to benefit from merging
> + * requests.
> + */
> +static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
> +{
> +	if (plug->multiple_queues)
> +		return BLK_MAX_REQUEST_COUNT * 2;
> +	return BLK_MAX_REQUEST_COUNT;
> +}
> +
> +static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
> +{
> +	struct request *last = rq_list_peek(&plug->mq_list);
> +
> +	if (!plug->rq_count) {
> +		trace_block_plug(rq->q);
> +	} else if (plug->rq_count >= blk_plug_max_rq_count(plug) ||
> +		   (!blk_queue_nomerges(rq->q) &&
> +		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
> +		blk_mq_flush_plug_list(plug, false);
> +		trace_block_plug(rq->q);
> +	}
> +
> +	if (!plug->multiple_queues && last && last->q != rq->q)
> +		plug->multiple_queues = true;
> +	if (!plug->has_elevator && (rq->rq_flags & RQF_ELV))
> +		plug->has_elevator = true;
> +	rq->rq_next = NULL;
> +	rq_list_add(&plug->mq_list, rq);
> +	plug->rq_count++;
> +}
> +
>  /**
>   * blk_mq_request_bypass_insert - Insert a request at dispatch list.
>   * @rq: Pointer to request to be inserted.
> @@ -2353,7 +2387,12 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
>  				  bool run_queue)
>  {
>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> +	struct blk_plug *plug = current->plug;
>  
> +	if (plug) {
> +		blk_add_rq_to_plug(plug, rq);
> +		return;
> +	}

This approach looks a bit fragile.

blk_mq_request_bypass_insert() is also called when dispatching I/O
requests, e.g. from blk_insert_cloned_request(); with this change such
a request may end up being inserted into the scheduler from
blk_mq_flush_plug_list().
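
Roughly the path I have in mind (a hand-drawn sketch; intermediate
calls elided):

	blk_insert_cloned_request()		/* dispatch, e.g. dm-mpath */
	  ...
	    blk_mq_request_bypass_insert()
	      blk_add_rq_to_plug()		/* with this patch applied */

	/* later, when the plug is flushed: */
	blk_mq_flush_plug_list()
	  ...
	    blk_mq_sched_insert_requests()	/* rq lands in the elevator */

So the "bypass" insert no longer bypasses the scheduler.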

Another issue is in blk_execute_rq(): the request may sit on the plug
list before we poll for its completion, and then hang forever.
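
To spell the hang out (a rough sketch of the synchronous polled path,
not the exact source):

	blk_execute_rq()			/* caller holds an active plug */
	  blk_execute_rq_nowait()
	    ...
	      blk_mq_request_bypass_insert()
	        blk_add_rq_to_plug()		/* rq parked on the plug list */
	  /*
	   * Poll for completion: the request was never issued to the
	   * driver, and nothing on the polling side flushes the plug,
	   * so the completion never arrives.
	   */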

Just wondering: why not add the passthrough request to the plug
explicitly in blk_execute_rq_nowait()?
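Something like the below, I mean (an untested sketch on top of this
series, assuming blk_add_rq_to_plug() is made visible here; the
blk_mq_request_bypass_insert() hunk above would then be dropped):

	void blk_execute_rq_nowait(struct request *rq, bool at_head,
				   rq_end_io_fn *done)
	{
		WARN_ON(irqs_disabled());
		WARN_ON(!blk_rq_is_passthrough(rq));

		rq->end_io = done;

		blk_account_io_start(rq);

		/*
		 * rq is known to be a passthrough request here, so plug
		 * it directly instead of hooking the generic bypass
		 * insert path.
		 */
		if (current->plug)
			blk_add_rq_to_plug(current->plug, rq);
		else
			blk_mq_sched_insert_request(rq, at_head, true, false);
	}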


Thanks, 
Ming


Thread overview: 45+ messages

2022-05-05  6:06 ` [PATCH v4 0/5] io_uring passthrough for nvme Kanchan Joshi
2022-05-05  6:06     ` [PATCH v4 1/5] fs,io_uring: add infrastructure for uring-cmd Kanchan Joshi
2022-05-05 12:52       ` Jens Axboe
2022-05-05 13:48         ` Ming Lei
2022-05-05 13:54           ` Jens Axboe
2022-05-05 13:29       ` Christoph Hellwig
2022-05-05 16:17       ` Jens Axboe
2022-05-05 17:04         ` Jens Axboe
2022-05-06  7:12         ` Kanchan Joshi
2022-05-10 14:23         ` Kanchan Joshi
2022-05-10 14:35           ` Jens Axboe
2022-05-05  6:06     ` [PATCH v4 2/5] block: wire-up support for passthrough plugging Kanchan Joshi
2022-05-05 14:21       ` Ming Lei [this message]
2022-05-05  6:06     ` [PATCH v4 3/5] nvme: refactor nvme_submit_user_cmd() Kanchan Joshi
2022-05-05 13:30       ` Christoph Hellwig
2022-05-05 18:37       ` Clay Mayers
2022-05-05 19:03         ` Jens Axboe
2022-05-05 19:11           ` Jens Axboe
2022-05-05 19:30             ` Clay Mayers
2022-05-05 19:31               ` Jens Axboe
2022-05-05 19:50                 ` hch
2022-05-05 20:44                   ` Jens Axboe
2022-05-06  5:56                     ` hch
2022-05-05  6:06     ` [PATCH v4 4/5] nvme: wire-up uring-cmd support for io-passthru on char-device Kanchan Joshi
2022-05-05 13:33       ` Christoph Hellwig
2022-05-05 13:38       ` Jens Axboe
2022-05-05 13:42         ` Christoph Hellwig
2022-05-05 13:50           ` Jens Axboe
2022-05-05 17:23             ` Jens Axboe
2022-05-06  8:28               ` Christoph Hellwig
2022-05-06 13:37                 ` Jens Axboe
2022-05-06 14:50                   ` Christoph Hellwig
2022-05-06 14:57                     ` Jens Axboe
2022-05-07  5:03                       ` Christoph Hellwig
2022-05-07 12:53                         ` Jens Axboe
2022-05-09  6:00                           ` Christoph Hellwig
2022-05-09 12:52                             ` Jens Axboe
2022-05-05  6:06     ` [PATCH v4 5/5] nvme: add vectored-io support for uring-cmd Kanchan Joshi
2022-05-05 18:20   ` [PATCH v4 0/5] io_uring passthrough for nvme Jens Axboe
2022-05-05 18:29     ` Jens Axboe
2022-05-06  6:42       ` Kanchan Joshi
2022-05-06 13:14         ` Jens Axboe
2022-05-10  7:20     ` Christoph Hellwig
2022-05-10 12:29       ` Jens Axboe
2022-05-10 14:21         ` Kanchan Joshi
