From: Ming Lei <[email protected]>
To: Ziyang Zhang <[email protected]>
Cc: [email protected],
Miklos Szeredi <[email protected]>,
Xiaoguang Wang <[email protected]>,
Bernd Schubert <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected], Stefan Hajnoczi <[email protected]>,
[email protected], Jens Axboe <[email protected]>,
Dan Williams <[email protected]>,
[email protected]
Subject: Re: [PATCH V5 16/16] block: ublk_drv: apply io_uring FUSED_CMD for supporting zero copy
Date: Wed, 29 Mar 2023 18:52:06 +0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On Wed, Mar 29, 2023 at 06:01:16PM +0800, Ziyang Zhang wrote:
> On 2023/3/29 17:00, Ming Lei wrote:
> > On Wed, Mar 29, 2023 at 10:57:53AM +0800, Ziyang Zhang wrote:
> >> On 2023/3/28 23:09, Ming Lei wrote:
> >>> Apply io_uring fused command for supporting zero copy:
> >>>
> >>
> >> [...]
> >>
> >>>
> >>> @@ -1374,7 +1533,12 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> >>> if (!ubq || ub_cmd->q_id != ubq->q_id)
> >>> goto out;
> >>>
> >>> - if (ubq->ubq_daemon && ubq->ubq_daemon != current)
> >>> + /*
> >>> + * The fused command reads the io buffer data structure only, so it
> >>> + * is fine to be issued from other context.
> >>> + */
> >>> + if ((ubq->ubq_daemon && ubq->ubq_daemon != current) &&
> >>> + (cmd_op != UBLK_IO_FUSED_SUBMIT_IO))
> >>> goto out;
> >>>
> >>
> >> Hi Ming,
> >>
> >> What is your use case where the fused io_uring cmd is issued from another
> >> thread? I think it is good practice to operate one io_uring instance in
> >> one thread only.
> >
> > So far we limit io commands to being issued from the queue context,
> > which is still not friendly from the userspace viewpoint. The reason is
> > that we can't get an io_uring exit notification, and ublk's use case is
> > very special since a queued io command may never be completed,
>
> OK, so UBLK_IO_FUSED_SUBMIT_IO is guaranteed to be completed because it is
> not queued. FETCH_REQ and COMMIT_AND_FETCH are queued io commands and may
> never be completed, so they have to be issued from the ubq_daemon. Right?
Yeah, any io command has to be issued from the ubq daemon context.
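
Just to illustrate the queue-context rule (this sketch is not from the
patchset; it roughly follows the existing ublk uapi in
include/uapi/linux/ublk_cmd.h, and io_buf()/bytes_done(), the char device
fd and the error handling are all placeholders):

#include <string.h>
#include <liburing.h>
#include <linux/ublk_cmd.h>

/* placeholders: per-tag io buffer address and completed byte count */
extern __u64 io_buf(__u16 tag);
extern __s32 bytes_done(__u16 tag);

/*
 * Issue one io command (FETCH_REQ or COMMIT_AND_FETCH_REQ) for 'tag'.
 * The ring has to be created with IORING_SETUP_SQE128 so that the
 * ublksrv_io_cmd payload fits into the SQE's cmd area.
 */
static void queue_io_cmd(struct io_uring *ring, int cdev_fd, __u16 q_id,
			 __u16 tag, __u32 cmd_op, __u64 buf, __s32 result)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct ublksrv_io_cmd *cmd;

	memset(sqe, 0, sizeof(*sqe));
	cmd = (struct ublksrv_io_cmd *)sqe->cmd;

	sqe->opcode    = IORING_OP_URING_CMD;
	sqe->fd        = cdev_fd;	/* /dev/ublkcN */
	sqe->cmd_op    = cmd_op;
	sqe->user_data = tag;

	cmd->q_id   = q_id;
	cmd->tag    = tag;
	cmd->addr   = buf;		/* per-tag io buffer */
	cmd->result = result;		/* bytes handled, for COMMIT_AND_FETCH */
}

/* the ubq daemon: the only context allowed to issue io commands */
static void ubq_daemon(struct io_uring *ring, int cdev_fd, __u16 q_id,
		       int depth)
{
	struct io_uring_cqe *cqe;
	int tag;

	for (tag = 0; tag < depth; tag++)
		queue_io_cmd(ring, cdev_fd, q_id, tag,
			     UBLK_IO_FETCH_REQ, io_buf(tag), 0);
	io_uring_submit(ring);

	for (;;) {
		io_uring_wait_cqe(ring, &cqe);
		tag = cqe->user_data;
		io_uring_cqe_seen(ring, cqe);

		/* ... serve the block request for 'tag' here ... */

		queue_io_cmd(ring, cdev_fd, q_id, tag,
			     UBLK_IO_COMMIT_AND_FETCH_REQ,
			     io_buf(tag), bytes_done(tag));
		io_uring_submit(ring);
	}
}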
>
> BTW, maybe NEED_GET_DATA can be issued from other context...
So far it won't be supported.

As I mentioned in the link, if io_uring can provide an io_uring exit
callback, we may relax this limit.
>
> > see:
> >
> > https://lore.kernel.org/linux-fsdevel/[email protected]/
> >
> > I remember that people raised concern about this implementation.
> >
> > But for normal IO, it can be issued from io-wq simply because of a
> > link (dependency) or whatever, and userspace is still allowed to submit
> > io from another pthread via the same io_uring ctx.
>
> Yes, we can submit to the same ctx from different pthreads, but a lock may be required.
Right.
> IMO, users may just choose the ubq_daemon as the only submitter.
At least any io command has to be issued from the ubq daemon for now, but
normal io can be issued from any context.
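
For example (purely illustrative; 'ring_lock' is not something from ublksrv,
and liburing does not serialize SQ access by itself), a worker pthread
sharing the daemon's ring could do:

#include <pthread.h>
#include <liburing.h>

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Normal IO submitted from a non-daemon pthread; every thread that
 * touches the SQ (the ubq daemon included) has to take the same lock.
 */
static void submit_from_worker(struct io_uring *ring, int fd, void *buf,
			       unsigned len, __u64 off)
{
	struct io_uring_sqe *sqe;

	pthread_mutex_lock(&ring_lock);
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, off);
	io_uring_submit(ring);
	pthread_mutex_unlock(&ring_lock);
}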
Thanks,
Ming
Thread overview: 24+ messages
2023-03-28 15:09 [PATCH V5 00/16] io_uring/ublk: add IORING_OP_FUSED_CMD Ming Lei
2023-03-28 15:09 ` [PATCH V5 01/16] io_uring: increase io_kiocb->flags into 64bit Ming Lei
2023-03-28 15:09 ` [PATCH V5 02/16] io_uring: add IORING_OP_FUSED_CMD Ming Lei
2023-03-28 17:33 ` kernel test robot
2023-03-28 15:09 ` [PATCH V5 03/16] io_uring: support normal SQE for fused command Ming Lei
2023-03-28 15:09 ` [PATCH V5 04/16] io_uring: support OP_READ/OP_WRITE for fused secondary request Ming Lei
2023-03-28 15:09 ` [PATCH V5 05/16] io_uring: support OP_SEND_ZC/OP_RECV " Ming Lei
2023-03-28 15:09 ` [PATCH V5 06/16] block: ublk_drv: add common exit handling Ming Lei
2023-03-28 15:09 ` [PATCH V5 07/16] block: ublk_drv: don't consider flush request in map/unmap io Ming Lei
2023-03-28 15:09 ` [PATCH V5 08/16] block: ublk_drv: add two helpers to clean up map/unmap request Ming Lei
2023-03-28 15:09 ` [PATCH V5 09/16] block: ublk_drv: clean up several helpers Ming Lei
2023-03-28 15:09 ` [PATCH V5 10/16] block: ublk_drv: cleanup 'struct ublk_map_data' Ming Lei
2023-03-28 15:09 ` [PATCH V5 11/16] block: ublk_drv: cleanup ublk_copy_user_pages Ming Lei
2023-03-28 15:09 ` [PATCH V5 12/16] block: ublk_drv: grab request reference when the request is handled by userspace Ming Lei
2023-03-28 15:09 ` [PATCH V5 13/16] block: ublk_drv: support to copy any part of request pages Ming Lei
2023-03-28 15:09 ` [PATCH V5 14/16] block: ublk_drv: add read()/write() support for ublk char device Ming Lei
2023-03-28 15:09 ` [PATCH V5 15/16] block: ublk_drv: don't check buffer in case of zero copy Ming Lei
2023-03-28 15:09 ` [PATCH V5 16/16] block: ublk_drv: apply io_uring FUSED_CMD for supporting " Ming Lei
2023-03-29 2:57 ` Ziyang Zhang
2023-03-29 9:00 ` Ming Lei
2023-03-29 10:01 ` Ziyang Zhang
2023-03-29 10:52 ` Ming Lei [this message]
2023-04-03 8:38 ` Ziyang Zhang
2023-04-03 9:22 ` Ming Lei