From: Ming Lei <[email protected]>
To: Kevin Wolf <[email protected]>
Cc: Jens Axboe <[email protected]>,
	[email protected], [email protected],
	Pavel Begunkov <[email protected]>,
	[email protected]
Subject: Re: [PATCH 5/9] io_uring: support SQE group
Date: Mon, 29 Apr 2024 11:34:42 +0800	[thread overview]
Message-ID: <Zi8VUmNWaEAS6nyE@fedora> (raw)
In-Reply-To: <[email protected]>

On Fri, Apr 26, 2024 at 07:05:17PM +0200, Kevin Wolf wrote:
> > On 26.04.2024 at 09:53, Ming Lei wrote:
> > On Thu, Apr 25, 2024 at 11:27:32AM +0200, Kevin Wolf wrote:
> > > > On 24.04.2024 at 03:39, Ming Lei wrote:
> > > > On Tue, Apr 23, 2024 at 03:08:55PM +0200, Kevin Wolf wrote:
> > > > > When I first read this patch, I wondered if it wouldn't make sense to
> > > > > allow linking a group with subsequent requests, e.g. first having a few
> > > > > requests that run in parallel and once all of them have completed
> > > > > continue with the next linked one sequentially.
> > > > > 
> > > > > For SQE bundles, you reused the LINK flag, which doesn't easily allow
> > > > > this. Ming's patch uses a new flag for groups, so the interface would be
> > > > > more obvious, you simply set the LINK flag on the last member of the
> > > > > group (or on the leader, doesn't really matter). Of course, this doesn't
> > > > > mean it has to be implemented now, but there is a clear way forward if
> > > > > it's wanted.
> > > > 
> > > > Reusing LINK for bundles breaks existing link chains (a BUNDLE linked
> > > > into an existing link chain), so I think it may not work.
> > > 
> > > You can always extend things *somehow*, but it wouldn't fit very
> > > naturally. That's why I feel your approach on this detail is a little
> > > better.
> > 
> > Linking a group in the traditional way is a real use case; please see
> > ublk-nbd's zero-copy implementation.
> > 
> > https://github.com/ublk-org/ublksrv/blob/group-provide-buf/nbd/tgt_nbd.cpp
> 
> I'm not sure what you're trying to argue; I agreed with you twice?
> 
> I don't think Jens's bundles break existing users of links because the
> special meaning is only triggered with IORING_OP_BUNDLE. But obviously
> they don't allow linking a bundle with something else after it, which
> feels more limiting than necessary.

OK.

> 
> > > > The link rule is explicit for an sqe group:
> > > > 
> > > > - only the group leader can set the link flag, and it applies to the
> > > > whole group: the next sqe in the link chain won't be started until the
> > > > previously linked sqe group has completed
> > > > 
> > > > - the link flag can't be set on group members
> > > > 
> > > > Also, an sqe group doesn't restrict async for either the group leader
> > > > or the members.
> > > > 
> > > > sqe group vs link & async is covered in the last liburing test code.
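
To make the rule concrete, here is a minimal liburing-style sketch of the
intended submission pattern for the multiple-copy case (one read fanning out
into two writes, then an fsync linked after the group). The group marking is
a placeholder: the real flag name and encoding come from this series' ext
flags patches and are not existing liburing API, so sqe_mark_group() below is
only a stub; IOSQE_IO_LINK and the prep/submit helpers are real liburing
calls. Error checking is omitted.

#include <liburing.h>

/*
 * Placeholder for the group flag added by this series: the real name, value
 * and SQE field are defined by the "support user sqe ext flags" and
 * "support SQE group" patches, so this stub only marks intent.
 */
static void sqe_mark_group(struct io_uring_sqe *sqe)
{
        (void)sqe;      /* would set the group bit in the sqe's ext flags */
}

static int submit_group_example(struct io_uring *ring, int src, int dst0,
                                int dst1, void *buf, unsigned len)
{
        struct io_uring_sqe *sqe;

        /* Group leader: the read. Only the leader may carry IOSQE_IO_LINK,
         * and the link applies to the group as a whole. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, src, buf, len, 0);
        sqe_mark_group(sqe);
        io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

        /* Members: started only after the leader has completed, then run in
         * parallel; they must not set IOSQE_IO_LINK themselves. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, dst0, buf, len, 0);
        sqe_mark_group(sqe);

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, dst1, buf, len, 0);
        sqe_mark_group(sqe);

        /* Next request in the link chain: not started until the whole group
         * (leader plus both writes) has completed. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_fsync(sqe, dst0, 0);

        return io_uring_submit(ring);
}
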
> > > 
> > > Oh right, I didn't actually notice that you already implement what I
> > > proposed!
> > > 
> > > I was expecting the flag on the last SQE and I saw in the code that this
> > > isn't allowed, but I completely missed your comment that explicitly
> > > states that it's the group leader that gets the link flag. Of course,
> > > this is just as good.
> > > 
> > > > > The part that looks a bit arbitrary in Ming's patch is that the group
> > > > > leader is always completed before the rest starts. It makes perfect
> > > > > sense in the context that this series is really after (enabling zero
> > > > > copy for ublk), but it doesn't really allow the case you mention in the
> > > > > SQE bundle commit message, running everything in parallel and getting a
> > > > > single CQE for the whole group.
> > > > 
> > > > I think it should be easy to cover bundles this way, for example by
> > > > adding one new op IORING_OP_BUNDLE as Jens did, and implementing the
> > > > single CQE for the whole group/bundle.
> > > 
> > > This requires an extra SQE compared to just creating the group with
> > > flags, but I suppose this is not a big problem. An alternative might be
> > > sending the CQE for the group leader only after the whole group has
> > > completed if we're okay with userspace never knowing when the leader
> > > itself completed.
> > > 
> > > However, assuming an IORING_OP_BUNDLE command, if this command only
> > > completes after the whole group, doesn't that conflict with the
> > 
> > Here, completion means committing the CQE to the userspace ring.
> > 
> > > principle that all other commands are only started after the first one
> > > has completed?
> > 
> > I meant that IORING_OP_BUNDLE is the group leader, and the first one is
> > the leader.
> > 
> > The member requests won't be started until the leader has completed, and
> > here completion means that the request has been completed by the subsystem
> > (FS, network, ...), so there isn't a conflict. But yes, we need to
> > describe the ideas/terms more carefully.
> 
> Is there precedent for requests that are completed, but don't result in
> a CQE immediately? But yes, it's the same as I had in mind above when I
> was talking about completing the leader only after the whole group has
> completed.
> 
> > > Maybe we shouldn't wait for the whole group leader request to complete,
> > > but just give the group leader a chance to prepare the group before all
> > > requests in the group (including the leader itself) are run in parallel.
> > > Maybe io_issue_sqe() could just start the rest of the group somewhere
> > > after calling def->issue() for the leader. Then you can't prepare the
> > > group buffer asynchronously, but I don't think this is needed, right?
> > 
> > That isn't true: if the leader request is a network RX, we need to wait
> > until the recv is done before the following member requests can be
> > started to consume the received data.
> > 
> > It's the same as the multiple-copy example in the last patch.
> 
> I don't see a group kernel buffer in the last patch at all? It seems to
> use userspace buffers. In which case the read doesn't have to be part of
> the group at all: You can have a read and link that with a group of
> writes. Then you have the clear semantics of link = sequential,
> group = parallel again.
> 
> But let's assume that the read actually did provide a group buffer.
> 
> What this example showed me is that grouping requests for parallel
> submission is logically independent from grouping requests for sharing a
> buffer. For full flexibility, they would probably have to be separate
> concepts. You could then have the same setup as before (read linked to a
> group of writes), but still share a group kernel buffer for the whole
> sequence.

The kernel buffer lifetime is aligned with the whole group's lifetime; that is
why the leader can't be moved out of the group. Otherwise, the two kinds of
groups would actually be the same. Either way, lots of things are in common,
such as how to handle the link for the whole group, and the implementation
should share most of the code, along with the group concept.

> 
> However, it's not clear whether that full flexibility is needed, and it
> would probably complicate the implementation a bit.
> 
> > > Your example with one read followed by multiple writes would then have
> > > to be written slightly differently: First the read outside of the group,
> > > linked to a group of writes. I honestly think this makes more sense as
> > > an interface, too, because then links are for sequential things and
> > > groups are (only) for parallel things. This feels clearer than having
> > > both a sequential and a parallel element in groups.
> > 
> > A group also implements a 1:N dependency, in which N members depend on a
> > single group leader, while there isn't any dependency among the members
> > themselves. That is something current io_uring is missing.
> 
> Dependencies are currently expressed with links, which is why I felt
> that it would be good to use them in this case, too. Groups that only
> include parallel requests and can be part of a link chain even provide
> N:M dependencies, so are even more powerful than the fixed 1:N of your
> groups.
> 
> The only thing that doesn't work as nicely then is sharing the buffer as
> long as it's not treated as a separate concept.

Right, the kernel buffer lifetime is really a hard problem; aligning it
with the group lifetime simplifies the zero-copy problem a lot.

> 
> > > > > I suppose you could hack around the sequential nature of the first
> > > > > request by using an extra NOP as the group leader - which isn't any
> > > > > worse than having an IORING_OP_BUNDLE really, just looks a bit odd - but
> > > > > the group completion would still be missing. (Of course, removing the
> > > > > sequential first operation would mean that ublk wouldn't have the buffer
> > > > > ready any more when the other requests try to use it, so that would
> > > > > defeat the purpose of the series...)
> > > > > 
> > > > > I wonder if we can still combine both approaches and create some
> > > > > generally useful infrastructure and not something where it's visible
> > > > > that it was designed mostly for ublk's special case and other use cases
> > > > > just happened to be enabled as a side effect.
> > > > 
> > > > sqe group is actually a generic interface; please see the multiple-copy
> > > > (copying one range of a file to multiple destinations in a single
> > > > syscall) example in the last patch
> > > 
> > > Yes, that's an example that happens to work well with the model that you
> > > derived from ublk.
> > 
> > It's not only for ublk and device zero copy; there is also the multiple-copy example.
> 
> This is what I replied to. Yes, it's an example where the model works
> fine. This is not evidence that the model is as generic as it could be,
> just that it's an example that fits it.
> 
> > > If you have the opposite case, reading a buffer that is spread across
> > > multiple files and then writing it to one target (i.e. first step
> > > parallel, second step sequential), you can't represent this well
> > > currently. You could work around it by having a NOP leader, but that's
> > > not very elegant.
> > 
> > Yeah, linking the group (nop & reads) with the following write does
> > work for the above copy case, :-)
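
For reference, here is a sketch of that workaround, under the same placeholder
assumption as the earlier sketch (sqe_mark_group() is a stub standing in for
this series' group flag, not final API): a NOP leader opens the group and
carries the link flag, the reads run in parallel as members, and the single
write is linked after the group. Error checking is omitted.

#include <liburing.h>

/* Placeholder again: stands in for this series' group ext flag. */
static void sqe_mark_group(struct io_uring_sqe *sqe)
{
        (void)sqe;      /* would set the group bit in the sqe's ext flags */
}

/* N parallel reads gather into one buffer; the write is linked after the
 * group and starts only once every read has completed. */
static int gather_then_write(struct io_uring *ring, const int *src_fds,
                             unsigned nr_src, int dst, char *buf,
                             unsigned chunk)
{
        struct io_uring_sqe *sqe;
        unsigned i;

        /* NOP leader: it only opens the group and carries the link flag. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_nop(sqe);
        sqe_mark_group(sqe);
        io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

        /* Members: one read per source, all running in parallel. */
        for (i = 0; i < nr_src; i++) {
                sqe = io_uring_get_sqe(ring);
                io_uring_prep_read(sqe, src_fds[i], buf + i * chunk, chunk, 0);
                sqe_mark_group(sqe);
        }

        /* Linked after the group: not started until all reads are done. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, dst, buf, nr_src * chunk, 0);

        return io_uring_submit(ring);
}
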
> > 
> > > 
> > > This asymmetry suggests that it's not the perfect interface yet.
> > 
> > The 1:N dependency requires the asymmetry, and nothing in this world is
> > perfect, :-) But we can try to make it better.
> 
> The asymmetry doesn't contribute anything to the 1:N dependency. As
> discussed above, normal links combined with fully parallel (and
> therefore symmetrical) groups provide this functionality, too.
> 
> The only real reason I see for justifying it is the group kernel buffer.

Yes, but that is also exactly what the N members depend on.

> 
> > > If the whole group runs in parallel instead, including the leader, then
> > > both examples become symmetrical. You have a group for the parallel I/O
> > > and a linked single request for the other operation.
> > > 
> > > Or if both steps are parallel, you can just have two linked groups.
> > 
> > I think sqe group can easily be extended this way with one new flag if
> > there is such a real use case. We can still use the leader's link flag for
> > the same purpose, and only advance the link chain once the whole group
> > is done.
> > 
> > Then the sqe group just degrades to a plain linked group without the 1:N
> > dependency, and the leader is only there to provide the group's link flag.
> > That looks like it can be well defined and documented, and could be useful
> > too, IMO.
> 
> If you're willing to introduce a second flag, then I'd consider using
> that flag to define buffer sharing groups independently of groups for
> parallel execution.
> 
> I think it's usually preferable to build the semantics you need by
> combining flags that provide independent building blocks with a
> straightforward meaning than having a single complex building block and
> then flags that modify the way the complex concept works.

This way is fine.

> 
> > > > and it can support generic device zero copy: any device-internal
> > > > buffer can be linked with io_uring operations in this way, which can't
> > > > be done with traditional splice/pipe.
> > > 
> > > Is this actually implemented or is it just a potential direction for the
> > > future?
> > 
> > It is a potential direction, since sqe group & provide buffer provide a
> > generic framework for exporting a device-internal buffer for further
> > consumption in a zero-copy & non-mmap way.
> 
> I see. This contributes a bit to the impression that much of the design
> is driven by ublk alone, because it's the only thing that seems to make
> use of group buffers so far.
> 
> Anyway, I'm just throwing in a few thoughts and ideas from outside. In
> the end, Jens and you need to agree on something.

Your thoughts and ideas are really helpful, thanks!


Thanks,
Ming


Thread overview: 38+ messages
2024-04-08  1:03 [RFC PATCH 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
2024-04-08  1:03 ` [PATCH 1/9] io_uring: net: don't check sqe->__pad2[0] for send zc Ming Lei
2024-04-08  1:03 ` [PATCH 2/9] io_uring: support user sqe ext flags Ming Lei
2024-04-22 18:16   ` Jens Axboe
2024-04-23 13:57     ` Ming Lei
2024-04-29 15:24       ` Pavel Begunkov
2024-04-30  3:43         ` Ming Lei
2024-04-30 12:00           ` Pavel Begunkov
2024-04-30 12:56             ` Ming Lei
2024-04-30 14:10               ` Pavel Begunkov
2024-04-30 15:46                 ` Ming Lei
2024-05-02 14:22                   ` Pavel Begunkov
2024-05-04  1:19                     ` Ming Lei
2024-04-08  1:03 ` [PATCH 3/9] io_uring: add helper for filling cqes in __io_submit_flush_completions() Ming Lei
2024-04-08  1:03 ` [PATCH 4/9] io_uring: add one output argument to io_submit_sqe Ming Lei
2024-04-08  1:03 ` [PATCH 5/9] io_uring: support SQE group Ming Lei
2024-04-22 18:27   ` Jens Axboe
2024-04-23 13:08     ` Kevin Wolf
2024-04-24  1:39       ` Ming Lei
2024-04-25  9:27         ` Kevin Wolf
2024-04-26  7:53           ` Ming Lei
2024-04-26 17:05             ` Kevin Wolf
2024-04-29  3:34               ` Ming Lei [this message]
2024-04-29 15:48         ` Pavel Begunkov
2024-04-30  3:07           ` Ming Lei
2024-04-29 15:32       ` Pavel Begunkov
2024-04-30  3:03         ` Ming Lei
2024-04-30 12:27           ` Pavel Begunkov
2024-04-30 15:00             ` Ming Lei
2024-05-02 14:09               ` Pavel Begunkov
2024-05-04  1:56                 ` Ming Lei
2024-05-02 14:28               ` Pavel Begunkov
2024-04-24  0:46     ` Ming Lei
2024-04-08  1:03 ` [PATCH 6/9] io_uring: support providing sqe group buffer Ming Lei
2024-04-08  1:03 ` [PATCH 7/9] io_uring/uring_cmd: support provide group kernel buffer Ming Lei
2024-04-08  1:03 ` [PATCH 8/9] ublk: support provide io buffer Ming Lei
2024-04-08  1:03 ` [RFC PATCH 9/9] liburing: support sqe ext_flags & sqe group Ming Lei
2024-04-19  0:55 ` [RFC PATCH 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
