From: Jens Axboe <[email protected]>
To: Kanchan Joshi <[email protected]>
Cc: Kanchan Joshi <[email protected]>,
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], [email protected]
Subject: Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF Topic] Non-block IO
Date: Tue, 11 Apr 2023 20:12:44 -0600
Message-ID: <[email protected]>
In-Reply-To: <CA+1E3rKrXOOBEaRb4pfE29wmhRP-fcUcSwQ4gobKGRxMGyS8jg@mail.gmail.com>

On 4/11/23 5:28 PM, Kanchan Joshi wrote:
> On Wed, Apr 12, 2023 at 4:23 AM Jens Axboe <[email protected]> wrote:
>>
>> On 4/11/23 4:48 PM, Kanchan Joshi wrote:
>>>>> 4. Direct NVMe queues - will there be interest in having io_uring
>>>>> managed NVMe queues?  Sort of a new ring, for which I/O is destaged from
>>>>> io_uring SQE to NVMe SQE without having to go through intermediate
>>>>> constructs (i.e., bio/request). Hopefully, that can further amp up the
>>>>> efficiency of IO.
>>>>
>>>> This is interesting, and I've pondered something like that before too. I
>>>> think it's worth investigating and hacking up a prototype. I recently
>>>> had one user of IOPOLL assume that setting up a ring with IOPOLL would
>>>> automatically create a polled queue on the driver side and that is what
>>>> would be used for IO. And while that's not how it currently works, it
>>>> definitely does make sense and we could make some things faster like
>>>> that. It would also potentially make it easier to enable the
>>>> cancelation referenced in #1 above, if it's restricted to the queue(s)
>>>> that the ring "owns".
>>>
>>> So I am looking at prototyping it, exclusively for the polled-io case.
>>> And for that, is there already a way to ensure that there are no
>>> concurrent submissions to this ring (set with IORING_SETUP_IOPOLL
>>> flag)?
>>> That will generally be the case (submissions happen under the
>>> uring_lock mutex), but a submission may still get punted to io-wq
>>> worker(s), which do not take that mutex.
>>> So the original task and a worker may end up doing concurrent submissions.
>>
>> io-wq may indeed get in your way. But I think for something like this,
>> you'd never want to punt to io-wq to begin with. If userspace is managing
>> the queue, then by definition you cannot run out of tags.
> 
> Unfortunately, we have lifetime differences between io_uring and NVMe.
> An NVMe tag remains valid/occupied until completion (there is no nice
> sq->head to look at to decide when it is free).
> An io_uring sqe, by contrast, can be reused much earlier, i.e., just
> after submission. So tag shortage is possible.

The sqe cannot be the tag; the tag has to be generated separately. It
doesn't make sense to tie the sqe and tag together, as one is consumed
in order and the other is not.
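
As a rough sketch (hypothetical code, not anything in io_uring or the
NVMe driver today), a ring-owned queue could draw its command IDs from a
small allocator whose slots are only released on completion, completely
decoupled from SQ ring head movement:

/* Hypothetical per-queue tag pool: a tag lives from submission until the
 * NVMe completion arrives, independent of when the sqe slot is reused. */
#include <stdint.h>

struct tag_pool {
	uint64_t bitmap;		/* bit set == tag in flight */
};

/* Grab a free tag at submission time; returns -1 when all 64 tags are in
 * flight, in which case the submission is failed rather than punted. */
static int tag_alloc(struct tag_pool *p)
{
	if (p->bitmap == ~0ULL)
		return -1;
	int tag = __builtin_ctzll(~p->bitmap);
	p->bitmap |= 1ULL << tag;
	return tag;
}

/* Release only when the completion for this command ID is reaped. */
static void tag_free(struct tag_pool *p, int tag)
{
	p->bitmap &= ~(1ULL << tag);
}

Completions can then land in any order without dictating when an sqe
slot becomes reusable.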

>> If there are
>> other conditions for this kind of request that may run into out-of-memory
>> conditions, then the error just needs to be returned.
> 
> I see, and IOSQE_ASYNC can also be flagged as an error/not-supported.

Yep!
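
For completeness, here is a minimal userspace sketch of that restricted
mode (assuming liburing; the fd and buffer are placeholders): the ring is
set up with IORING_SETUP_IOPOLL, IOSQE_ASYNC is simply never set, and any
rejected submission shows up as a negative return from io_uring_submit()
or in cqe->res.

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	static char buf[4096] __attribute__((aligned(4096)));
	int ret;

	/* Polled IO: submissions stay on this task, nothing is punted. */
	ret = io_uring_queue_init(64, &ring, IORING_SETUP_IOPOLL);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	sqe = io_uring_get_sqe(&ring);
	/* fd 0 is a placeholder; a real test would use an O_DIRECT fd on a
	 * device that supports polled IO. */
	io_uring_prep_read(sqe, 0, buf, sizeof(buf), 0);
	/* Deliberately no IOSQE_ASYNC: in the scheme discussed above it
	 * would be rejected instead of being punted to io-wq. */

	ret = io_uring_submit(&ring);
	if (ret < 0)
		fprintf(stderr, "submit: %d\n", ret);

	/* For an IOPOLL ring this reaps by polling the queue. */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (ret == 0) {
		if (cqe->res < 0)
			fprintf(stderr, "read: %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	return 0;
}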

-- 
Jens Axboe


Thread overview: 19+ messages
2023-02-10 18:00 ` [LSF/MM/BPF ATTEND][LSF/MM/BPF Topic] Non-block IO Kanchan Joshi
2023-02-10 18:18   ` Bart Van Assche
2023-02-10 19:34     ` Kanchan Joshi
2023-02-13 20:24       ` Bart Van Assche
2023-02-10 19:47     ` Jens Axboe
2023-02-14 10:33     ` John Garry
2023-02-10 19:53   ` Jens Axboe
2023-02-13 11:54     ` Sagi Grimberg
2023-04-11 22:48     ` Kanchan Joshi
2023-04-11 22:53       ` Jens Axboe
2023-04-11 23:28         ` Kanchan Joshi
2023-04-12  2:12           ` Jens Axboe [this message]
2023-04-12  2:33       ` Ming Lei
2023-04-12 13:26         ` Kanchan Joshi
2023-04-12 13:47           ` Ming Lei
2023-02-10 20:07   ` Clay Mayers
2023-02-11  3:33   ` Ming Lei
2023-02-11 12:06   ` Hannes Reinecke
2023-02-28 16:05   ` John Meneghini
