From: Ming Lei <[email protected]>
To: Ziyang Zhang <[email protected]>
Cc: Pavel Begunkov <[email protected]>,
	Miklos Szeredi <[email protected]>,
	Bernd Schubert <[email protected]>, Jens Axboe <[email protected]>,
	Xiaoguang Wang <[email protected]>,
	[email protected], [email protected],
	[email protected]
Subject: Re: [PATCH V3 00/16] io_uring/ublk: add IORING_OP_FUSED_CMD
Date: Wed, 29 Mar 2023 16:52:16 +0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On Wed, Mar 29, 2023 at 02:57:38PM +0800, Ziyang Zhang wrote:
> On 2023/3/28 08:53, Ming Lei wrote:
> > Hi Ziyang,
> > 
> > On Tue, Mar 21, 2023 at 05:17:56PM +0800, Ziyang Zhang wrote:
> >> On 2023/3/19 00:23, Pavel Begunkov wrote:
> >>> On 3/16/23 03:13, Xiaoguang Wang wrote:
> >>>>> Add IORING_OP_FUSED_CMD, a special URING_CMD which requires SQE128.
> >>>>> The 1st SQE (master) is a 64-byte URING_CMD, and the 2nd 64-byte
> >>>>> SQE (slave) is another normal 64-byte OP. Any OP which needs to
> >>>>> support slave OPs must set io_issue_defs[op].fused_slave to 1, and
> >>>>> its ->issue() can then retrieve/import the buffer from the master
> >>>>> request's fused_cmd_kbuf. The slave OP is actually submitted from the
> >>>>> kernel; part of this idea comes from Xiaoguang's ublk ebpf patchset,
> >>>>> but this patchset submits the slave OP just like a normal OP issued
> >>>>> from userspace, that is, SQE order is kept and batch handling is
> >>>>> done too.
> >>>> Thanks for this great work, it seems that we're now in the right direction
> >>>> to support ublk zero copy. I believe this feature will improve io throughput
> >>>> greatly and reduce ublk's cpu resource usage.
> >>>>
> >>>> I have gone through your 2nd patch and have a few small concerns:
> >>>> Say we have one ublk loop target device, but it has 4 backend files,
> >>>> every file carries 25% of the device capacity, and it's implemented in
> >>>> a striped way; then for every io request, the current implementation
> >>>> will need to issue 4 fused_cmds, right? 4 slave sqes are necessary, but
> >>>> it would be better to have just one master sqe, so I wonder whether we
> >>>> can have another method. The key point is to let io_uring support
> >>>> registering various kernel memory objects, which come from the kernel,
> >>>> such as ITER_BVEC or ITER_KVEC. So how about the actions below:
> >>>> 1. Add a new infrastructure in io_uring which supports registering
> >>>> various kernel memory objects; it could be maintained in an xarray, and
> >>>> every memory object in it would have a unique id. This registration
> >>>> could be done in a ublk uring cmd, with io_uring offering the
> >>>> registration interface.
> >>>> 2. Then any sqe can use these memory objects freely, so long as it
> >>>> passes the above unique id in the sqe properly.
> >>>> The above are just rough ideas, for your reference.
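To make the registration idea above concrete, here is a rough kernel-side
sketch of what such an xarray-backed registry could look like. All names
(io_kobj, io_register_kobj) are invented for illustration; nothing like this
exists in io_uring today, and the lifetime question raised later in this
thread is left open:

/* Invented-for-illustration sketch of the registration idea above: keep
 * registered kernel memory objects (e.g. ITER_BVEC views) in an xarray
 * and hand back a unique id that later sqes could reference. */
#include <linux/xarray.h>
#include <linux/uio.h>
#include <linux/slab.h>
#include <linux/refcount.h>

struct io_kobj {
	struct iov_iter iter;	/* ITER_BVEC/ITER_KVEC view of the kernel buffer */
	refcount_t refs;	/* buffer lifetime still has to be solved */
};

static DEFINE_XARRAY_ALLOC(io_kobj_xa);

/* register a kernel memory object, return a unique id (>= 0) or -errno */
static int io_register_kobj(const struct iov_iter *iter)
{
	struct io_kobj *kobj;
	u32 id;
	int ret;

	kobj = kzalloc(sizeof(*kobj), GFP_KERNEL);
	if (!kobj)
		return -ENOMEM;

	kobj->iter = *iter;
	refcount_set(&kobj->refs, 1);

	ret = xa_alloc(&io_kobj_xa, &id, kobj, xa_limit_31b, GFP_KERNEL);
	if (ret) {
		kfree(kobj);
		return ret;
	}
	return id;
}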
> >>>
> >>> It hints precisely at what I proposed a bit earlier, which makes me
> >>> not alone in thinking that it's a good idea to have a design allowing
> >>> 1) multiple ops to use a buffer and 2) not limiting it to one single
> >>> submission, because userspace might want to preprocess a part
> >>> of the data, multiplex it or, on the opposite, divide it. I was mostly
> >>> coming from non-ublk cases, and one example would be such a zc recv:
> >>> parsing the app-level headers and redirecting the rest of the data
> >>> somewhere.
> >>>
> >>> I haven't had a chance to work on it but will return to it in
> >>> a week. The discussion was here:
> >>>
> >>> https://lore.kernel.org/all/[email protected]/
> >>>
> >>
> >> Hi Pavel and all,
> >>
> >> I think it is a good idea to register some kernel objects (such as a bvec)
> >> in io_uring and return a cookie (such as a buf_idx) for READ/WRITE/SEND/RECV sqes.
> >> There are ways to register a user's buffers, such as IORING_OP_PROVIDE_BUFFERS
> >> and IORING_REGISTER_PBUF_RING, but there is no way to register a kernel buffer (bvec).
> >>
> >> I do not think reusing splice is a good idea, because splice would have to run in
> >> io-wq. If we have a big sq depth there may be lots of io-wq workers, and lots of
> >> context switches may lower IO performance, especially for small IO sizes.
> > 
> > Agree. Not only is it hard for splice to guarantee correctness of the buffer
> > lifetime, it is also much less efficient and would support the feature in a very
> > ugly way, not to mention that Linus objects to extending splice wrt. the buffer
> > direction issue; see the reasoning in my document:
> > 
> > https://github.com/ming1/linux/blob/my_v6.3-io_uring_fuse_cmd_v4/Documentation/block/ublk.rst#zero-copy
> > 
> >>
> >> Here are some rough ideas:
> >> (1) design a new OPCODE such as IORING_REGISTER_KOBJ to register kernel objects
> >>     in io_uring, or
> >> (2) reuse uring-cmd. We can send a uring-cmd to the driver (the opcode may be
> >>     CMD_REGISTER_KBUF) and let the driver call io_uring_provide_kbuf() to register
> >>     the kbuf. io_uring_provide_kbuf() would be a new function provided by io_uring
> >>     for drivers.
> >> (3) let the driver call io_uring_provide_kbuf() directly. For ublk, this function
> >>     would be called before io_uring_cmd_done().
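A rough driver-side sketch of idea (3): io_uring_provide_kbuf() does not exist,
so its name, arguments and return convention are purely hypothetical, taken from
the proposal above; io_uring_cmd_done() is the existing completion helper:

/* Hypothetical sketch of idea (3): io_uring_provide_kbuf() is assumed from
 * the proposal above, including its return of a registration id. */
#include <linux/blk-mq.h>
#include <linux/io_uring.h>
#include <uapi/linux/ublk_cmd.h>

static void ublk_provide_kbuf_and_complete(struct io_uring_cmd *cmd,
					   struct request *rq,
					   unsigned int issue_flags)
{
	/* register the request's bvec pages with io_uring and get back an
	 * id that later sqes could reference (assumed semantics) */
	int kbuf_id = io_uring_provide_kbuf(cmd, rq);

	/* complete the fetch command; userspace would learn kbuf_id via the
	 * cqe's second result field (assumed convention) */
	io_uring_cmd_done(cmd, UBLK_IO_RES_OK, kbuf_id, issue_flags);
}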
> > 
> > Can you explain a bit which use cases you are trying to address by
> > registering a kernel io buffer that has no userspace mapping?
> 
> Hi Ming,
> 
> Sorry, there is no specific use case. In our product, we have to calculate a checksum
> or compress data before sending IO to the remote backend. So Xiaoguang's eBPF might
> be the final solution... :) But I'd rather start here...

If checksum calculation and compression are done in userspace, the current zero
copy can't help you, because the fused command is only for sharing the ublk client
io buffer with io_uring OPs. And userspace has to rely on a data copy
for checksum & compression.

ebpf could help you, but that is still one big project, and I'm not sure whether
a current prog is allowed to get a kernel mapping of the pages and read/write
via that mapping.

> 
> I think you, Pavel and I all have the same basic idea: register the kernel object
> (bvec) first, then incoming sqes can use it. But I think fused-cmd is too specific
> (too much of a hack) to ublk, so other users of io_uring may not benefit from it.

The fused command is actually a generic interface:

1) It creates a relationship between the primary command and secondary requests.
The current interface does support setting up a 1:N relationship; it just
needs multiple secondary reqs to follow the primary command. If you
think following SQEs isn't flexible enough, you can still send multiple fused
requests with the same primary cmd to relax the usage of following SQEs.

2) Based on that relationship, lots of things can be done; sharing a
buffer is just one function, and it could be another kind of resource sharing.
The 'sharing' can be implemented in a plugin way, such as passing
uring_command flags to specify which kind of plugin is used.

I have re-organized the code in my local repo in this way.
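As a concrete illustration of 1), here is a minimal userspace sketch of the SQE
layout from the cover letter: the ring is set up with IORING_SETUP_SQE128, the
first 64 bytes of the SQE carry the primary (master) uring_cmd and the second 64
bytes carry the secondary (slave) OP. IORING_OP_FUSED_CMD and the exact cmd
payload / buffer-import conventions come from this (unmerged) patchset, so treat
the details as assumptions rather than upstream API:

/* Sketch only: assumes the patchset's uapi header, which defines
 * IORING_OP_FUSED_CMD; the secondary-SQE conventions are simplified. */
#include <liburing.h>
#include <errno.h>
#include <string.h>

static int queue_fused_write(struct io_uring *ring, int ublk_char_fd,
			     int backing_fd, __u64 off, __u32 len)
{
	/* one 128-byte SQE (ring created with IORING_SETUP_SQE128) */
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_sqe *secondary;

	if (!sqe)
		return -EBUSY;
	memset(sqe, 0, 2 * sizeof(*sqe));

	/* first 64 bytes: the primary (master) command against the device
	 * that owns the io buffer; sqe->cmd[] would carry the ublk io
	 * command (q_id/tag etc.) */
	sqe->opcode = IORING_OP_FUSED_CMD;
	sqe->fd = ublk_char_fd;

	/* second 64 bytes: the secondary (slave) OP, a normal 64-byte SQE
	 * whose ->issue() imports the buffer from the primary's
	 * fused_cmd_kbuf instead of taking a user address */
	secondary = (struct io_uring_sqe *)((char *)sqe + 64);
	secondary->opcode = IORING_OP_WRITE;
	secondary->fd = backing_fd;
	secondary->off = off;
	secondary->len = len;

	return io_uring_submit(ring);
}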


> What if we design a general way that allows io_uring to register kernel objects
> (such as a bvec), just like IORING_OP_PROVIDE_BUFFERS or IORING_REGISTER_PBUF_RING?
> Pavel said that registration would replace the fused master cmd, and I think so too.

The buffer belongs to the device, not to the io_uring context. The registration
isn't necessary, and I'm not sure it is doable:

1) userspace has no mapping of the buffer, so it can't use the buffer; you can't
calculate a checksum or compress data via this registration

2) you just want to use the registered id to build the relationship between the
primary command and secondary OPs, but the fused command can do that (see above);
and because we want to solve the buffer lifetime easily, the fused command has
the same lifetime as the buffer reference

3) I'm not sure the buffer registration is doable:

- only 1 sqe flag is left, so how do we distinguish a normal fixed buffer
  from this kind of registration?

- the buffer belongs to the device; if you register it from userspace, you
  have to unregister it from userspace, since only userspace knows
  when the buffer isn't needed. Then the buffer lifetime crosses
  multiple OPs; what happens if userspace is killed before unregistration?

So what is your real requirement for the buffer registration? I believe the
fused command can solve building the relationship between requests (primary cmd
vs. secondary requests), which seems to be your only concern about buffer
registration.

> 
> > 
> > The buffer (the request buffer, represented by a bvec) is just bvecs; basically
> > only physical pages are available, and userspace has no mapping (virtual address)
> > of this buffer and can't read/write it, so I don't think it makes sense
> > to register the buffer somewhere for userspace, does it?
> 
> Userspace does not touch these registered kernel bvecs, it only references their id.
> For example, we could set "sqe->kobj_id" so this sqe can import the bvec as its
> RW buffer, just like IORING_OP_PROVIDE_BUFFERS.
>
> There is a limitation on fused-cmd: the secondary sqe has to be primary+1 or be linked.
> But with the registration way, we allow multiple OPs to reference the kernel bvecs.

The interface in V5 actually starts to support a 1:N relation between the primary cmd
and secondary requests, but only implements 1:1 so far. It isn't hard to do 1:N.

Actually, you can reach the same purpose by sending multiple fused requests with the
same primary req, and there shouldn't be any performance effect, since the primary
command handling is pretty thin (passing a buffer reference).

> However
> we have to deal with buffer ownership/lifetime carefully.

That is one fundamental problem. If the buffer is allowed to cross multiple
OPs, it can be hard to solve the lifetime issue, not to mention that it is less
efficient to add an extra buffer un-registration to the fast io path.

> 
> > 
> > That said, the buffer should only be used by the kernel, such as by normal
> > io_uring OPs. It is basically invisible to userspace,
> > 
> > However, Xiaoguang's BPF might be a perfect supplement here[1], such as:
> >
> > - add a generic io_uring BPF OP, which can run a specified registered BPF
> > program by passing its bpf_prog_id
> >
> > - link this BPF OP as a slave request of the fused command; then the ebpf prog
> > can do whatever it wants with the kernel pages, if kernel mapping & buffer
> > read/write are allowed for an ebpf prog, and the results can be returned to
> > userspace via bpf map(s)
> 
> In Xiaoguang's ublk-eBPF design, we almost entirely avoid userspace code/logic while
> handling ublk io, so mixing fused-cmd with ublk-eBPF may be a bad idea.

What I meant is to add a generic io_uring ebpf OP; that isn't ublk-dedicated ebpf.

The generic io_uring ebpf OP would be for supporting encryption, checksum, simple
packet parsing, that sort of thing, because the bvec buffer doesn't have a userspace
mapping and we want to avoid copying data to userspace for calculating checksums,
encryption, ...
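For reference, the kernel side of such a checksum needs no userspace mapping at
all: the bvec pages can be walked through a temporary kernel mapping with
standard helpers, roughly as below. The open question is only the surrounding
BPF/OP plumbing, not this loop:

/* Walk a block request's bvecs and checksum the data through a temporary
 * kernel mapping; no userspace mapping of the pages is needed. */
#include <linux/blk-mq.h>
#include <linux/bio.h>
#include <linux/crc32.h>
#include <linux/highmem.h>

static u32 req_crc32(struct request *rq)
{
	struct req_iterator iter;
	struct bio_vec bv;
	u32 crc = ~0;

	rq_for_each_segment(bv, rq, iter) {
		void *p = kmap_local_page(bv.bv_page);

		crc = crc32_le(crc, p + bv.bv_offset, bv.bv_len);
		kunmap_local(p);
	}
	return ~crc;
}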

> 
> > 
> > - then userspace can decide how to handle the result from the bpf map(s), such as
> > submitting another fused command to handle IO with part of the kernel buffer.
> >
> > Also, the buffer is an io buffer and its lifetime is pretty short; register/
> > unregister introduces unnecessary cost in the fast io path for any approach.
> 
> I'm not sure the io buffer has a short lifetime in our product. :P In our product,
> we can first issue a very big request with a big io buffer. Then the backend
> can parse & split it into pieces and distribute each piece to a specific socket_fd
> representing a storage node. This big io buffer may have a long lifetime.

'Short' just means it is in the fast io path, unlike an io_uring fixed buffer,
which needs to be registered just once. IO handling is really fast, otherwise it
wouldn't be necessary to consider zero copy at all.

So we do care about the performance effect of any unnecessary operation (such
as buffer unregistration).


Thanks,
Ming


