From: Jens Axboe <[email protected]>
To: Max Gurtovoy <[email protected]>,
	Christoph Hellwig <[email protected]>
Cc: [email protected], [email protected],
	Hannes Reinecke <[email protected]>, Oren Duer <[email protected]>
Subject: Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Mon, 20 Dec 2021 07:19:01 -0700	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 12/20/21 3:11 AM, Max Gurtovoy wrote:
> 
> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So we need to decide whether to open-code it or use the helper function.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>> Yes agree, that's been my stance too :-)
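As a reference point, here is a minimal sketch of the loop above using the patch 2
helper instead of the open-coded copy. It assumes nvme_sq_copy_cmd() takes the queue
and a command pointer, and that the doorbell is written once for the whole batch via
nvme_write_sq_db():

	spin_lock(&nvmeq->sq_lock);
	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);
		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

		/* copy the command into the SQ and advance/wrap the tail */
		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
	}
	/* one doorbell write for the whole batch */
	nvme_write_sq_db(nvmeq, true);
	spin_unlock(&nvmeq->sq_lock);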
>>>>>>>>>>>>
>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>> the performance degradation measured on the first try was a measurement
>>>>>>>>>>>>>> error?
>>>>>>>>>>>>> Giving 1 dbr for a batch of N commands sounds like a good idea. Also for the RDMA host.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But how do you moderate it? What is the batch_sz <--> time_to_wait
>>>>>>>>>>>>> algorithm?
>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>> I'm saying that you can wait too long for batch_max_count and it
>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>
>>>>>>>>>>> So it's better to limit the block layer to wait for whichever comes
>>>>>>>>>>> first: x usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>
>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>> will result in a plug flush to begin with.
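To make the cap concrete, a simplified sketch of the plug-side check; plug_flush()
is a placeholder, not the real flush helper, and the actual kernel code has extra
conditions:

	/* before adding a new request to the plug list */
	if (plug->rq_count >= BLK_MAX_REQUEST_COUNT)
		plug_flush(plug);		/* placeholder: issue what we have */

	rq_list_add(&plug->mq_list, rq);	/* queue the new request */
	plug->rq_count++;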
>>>>>>>>> I'm not familiar with the plug code yet. I hope to get to it soon.
>>>>>>>>>
>>>>>>>>> My concern is: if the user application submits only 28 requests, will
>>>>>>>>> you then wait forever? Or for a very long time?
>>>>>>>>>
>>>>>>>>> I guess not, but I'm asking how you know how much to batch and when to
>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>> requests are issued.
>>>>>>> So if I'm running fio with --iodepth=28, what will the plug do? Send
>>>>>>> batches of 28? Or 1 by 1?
>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>> one at a time, then you'll get one at a time further down too. Hence
>>>>>> the batching is directly driven by what the application is already
>>>>>> doing.
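As a hedged userspace illustration (liburing, with error handling and buffer setup
omitted; fd and bufs are placeholders): queueing 28 SQEs and then calling submit once
means all 28 travel down as a single batch, whereas submitting after every SQE gives
batches of one.

	struct io_uring ring;

	io_uring_queue_init(64, &ring, 0);

	/* queue 28 reads, then submit them as one batch */
	for (int i = 0; i < 28; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_read(sqe, fd, bufs[i], 4096, i * 4096ULL);
	}
	io_uring_submit(&ring);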
>>>>> I see. Thanks for the explanation.
>>>>>
>>>>> So it works only for io_uring-based applications?
>>>> It's only enabled for io_uring right now, but it's generically available
>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>> other spots that currently use blk_start_plug() and have an idea of how
>>>> many IOs will be submitted.
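For a kernel-side caller, the shape is roughly the following sketch; submit_one(),
iocbs and nr_ios are placeholders for whatever the submitter already has:

	struct blk_plug plug;

	blk_start_plug(&plug);
	for (i = 0; i < nr_ios; i++)
		submit_one(iocbs[i]);	/* requests collect in the plug list */
	blk_finish_plug(&plug);		/* flush; can go via ->queue_rqs() */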
>>> Can you please share an example application (or is it fio patches) that
>>> can submit batches? The same one that was used to test this patchset is
>>> fine too.
>>>
>>> I would like to test it with our NVMe SNAP controllers and also to
>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>> You should just be able to use iodepth_batch with fio. For my peak
>> testing, I use t/io_uring from the fio repo. By default, it'll run at its
>> default QD and do batches of 32 for complete and submit. You can just run:
>>
>> t/io_uring <dev or file>
>>
>> maybe adding -p0 for IRQ-driven rather than polled IO.
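With fio itself, something along these lines should exercise the batched submit path
(an untested sketch; adjust the device for your setup):

  fio --name=batch --ioengine=io_uring --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --filename=/dev/nvme0n1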
> 
> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA,
> but it was never called with either the t/io_uring test or fio with the
> iodepth_batch=32 flag and the io_uring engine.
> 
> Any idea what might be the issue ?
> 
> I installed fio from source.

The two main restrictions right now are a scheduler and shared tags; are
you using either of those?
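In case it helps while debugging that, the driver-side hookup is just another
blk_mq_ops entry. A rough, untested sketch; nvme_rdma_queue_rqs() and rdma_issue_one()
are hypothetical names, not existing driver functions:

static void nvme_rdma_queue_rqs(struct request **rqlist)
{
	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);

		rdma_issue_one(req);	/* post the per-request work */
	}
	/* then ring/update the doorbell once for the whole batch */
}

static const struct blk_mq_ops nvme_rdma_mq_ops = {
	.queue_rq	= nvme_rdma_queue_rq,
	.queue_rqs	= nvme_rdma_queue_rqs,
	/* ... */
};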

-- 
Jens Axboe


Thread overview: 50+ messages
2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-20 20:36   ` Keith Busch
2021-12-20 20:47     ` Jens Axboe
2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-16 12:17   ` Max Gurtovoy
2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
2021-12-16  9:02   ` Christoph Hellwig
2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-15 17:29   ` Keith Busch
2021-12-15 20:27     ` Jens Axboe
2021-12-16  9:08   ` Christoph Hellwig
2021-12-16 13:06     ` Max Gurtovoy
2021-12-16 15:48       ` Jens Axboe
2021-12-16 16:00         ` Max Gurtovoy
2021-12-16 16:05           ` Jens Axboe
2021-12-16 16:19             ` Max Gurtovoy
2021-12-16 16:25               ` Jens Axboe
2021-12-16 16:34                 ` Max Gurtovoy
2021-12-16 16:36                   ` Jens Axboe
2021-12-16 16:57                     ` Max Gurtovoy
2021-12-16 17:16                       ` Jens Axboe
2021-12-19 12:14                         ` Max Gurtovoy
2021-12-19 14:48                           ` Jens Axboe
2021-12-20 10:11                             ` Max Gurtovoy
2021-12-20 14:19                               ` Jens Axboe [this message]
2021-12-20 14:25                                 ` Jens Axboe
2021-12-20 15:29                                 ` Max Gurtovoy
2021-12-20 16:34                                   ` Jens Axboe
2021-12-20 18:48                                     ` Max Gurtovoy
2021-12-20 18:58                                       ` Jens Axboe
2021-12-21 10:20                                         ` Max Gurtovoy
2021-12-21 15:23                                           ` Jens Axboe
2021-12-21 15:29                                             ` Max Gurtovoy
2021-12-21 15:33                                               ` Jens Axboe
2021-12-21 16:08                                                 ` Max Gurtovoy
2021-12-16 15:45     ` Jens Axboe
2021-12-16 16:15       ` Christoph Hellwig
2021-12-16 16:27         ` Jens Axboe
2021-12-16 16:30           ` Christoph Hellwig
2021-12-16 16:36             ` Jens Axboe
2021-12-16 13:02   ` Max Gurtovoy
2021-12-16 15:59     ` Jens Axboe
2021-12-16 16:06       ` Max Gurtovoy
2021-12-16 16:09         ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2021-12-16 16:05 [PATCHSET v4 0/4] Add support for list issue Jens Axboe
2021-12-16 16:05 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-16 16:38 [PATCHSET v5 0/4] Add support for list issue Jens Axboe
2021-12-16 16:39 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-16 17:53   ` Christoph Hellwig