From: Noah Goldstein <[email protected]>
To: Pavel Begunkov <[email protected]>
Cc: "open list:IO_URING" <[email protected]>,
Jens Axboe <[email protected]>
Subject: Re: [PATCH 8/8] io_uring: rearrange io_read()/write()
Date: Sat, 16 Oct 2021 17:52:19 -0500 [thread overview]
Message-ID: <CAFUsyfKyRnXhcxOVfSAxeyKsQqGXJ7PdDYw3TXC3H+q_yp5LMA@mail.gmail.com> (raw)
In-Reply-To: <2c2536c5896d70994de76e387ea09a0402173a3f.1634144845.git.asml.silence@gmail.com>
On Thu, Oct 14, 2021 at 10:13 AM Pavel Begunkov <[email protected]> wrote:
>
> Combine force_nonblock branches (which the compiler already optimises),
> flip branches so the hottest/most common path comes first, e.g. as with
> the non-on-stack iov setup, and add extra likely/unlikely attributions
> for error paths.
>
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
> fs/io_uring.c | 75 +++++++++++++++++++++++++--------------------------
> 1 file changed, 37 insertions(+), 38 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index f9af54b10238..8bbbe7ccad54 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -3395,7 +3395,7 @@ static bool io_rw_should_retry(struct io_kiocb *req)
>
> static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
> {
> - if (req->file->f_op->read_iter)
> + if (likely(req->file->f_op->read_iter))
> return call_read_iter(req->file, &req->rw.kiocb, iter);
> else if (req->file->f_op->read)
> return loop_rw_iter(READ, req, iter);
> @@ -3411,14 +3411,18 @@ static bool need_read_all(struct io_kiocb *req)
>
> static int io_read(struct io_kiocb *req, unsigned int issue_flags)
> {
> - struct io_rw_state __s, *s;
> + struct io_rw_state __s, *s = &__s;
> struct iovec *iovec;
> struct kiocb *kiocb = &req->rw.kiocb;
> bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
> struct io_async_rw *rw;
> ssize_t ret, ret2;
>
> - if (req_has_async_data(req)) {
> + if (!req_has_async_data(req)) {
> + ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
> + if (unlikely(ret < 0))
> + return ret;
> + } else {
> rw = req->async_data;
> s = &rw->s;
> /*
> @@ -3428,24 +3432,19 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
> */
> iov_iter_restore(&s->iter, &s->iter_state);
> iovec = NULL;
> - } else {
> - s = &__s;
> - ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
> - if (unlikely(ret < 0))
> - return ret;
> }
> req->result = iov_iter_count(&s->iter);
>
> - /* Ensure we clear previously set non-block flag */
> - if (!force_nonblock)
> - kiocb->ki_flags &= ~IOCB_NOWAIT;
> - else
> + if (force_nonblock) {
> + /* If the file doesn't support async, just async punt */
> + if (unlikely(!io_file_supports_nowait(req, READ))) {
> + ret = io_setup_async_rw(req, iovec, s, true);
> + return ret ?: -EAGAIN;
> + }
> kiocb->ki_flags |= IOCB_NOWAIT;
> -
> - /* If the file doesn't support async, just async punt */
> - if (force_nonblock && !io_file_supports_nowait(req, READ)) {
> - ret = io_setup_async_rw(req, iovec, s, true);
> - return ret ?: -EAGAIN;
> + } else {
> + /* Ensure we clear previously set non-block flag */
> + kiocb->ki_flags &= ~IOCB_NOWAIT;
> }
>
> ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
> @@ -3541,40 +3540,40 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>
> static int io_write(struct io_kiocb *req, unsigned int issue_flags)
> {
> - struct io_rw_state __s, *s;
> - struct io_async_rw *rw;
> + struct io_rw_state __s, *s = &__s;
> struct iovec *iovec;
> struct kiocb *kiocb = &req->rw.kiocb;
> bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
> ssize_t ret, ret2;
>
> - if (req_has_async_data(req)) {
> - rw = req->async_data;
> - s = &rw->s;
> - iov_iter_restore(&s->iter, &s->iter_state);
> - iovec = NULL;
> - } else {
> - s = &__s;
> + if (!req_has_async_data(req)) {
> ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
> if (unlikely(ret < 0))
> return ret;
> + } else {
> + struct io_async_rw *rw = req->async_data;
> +
> + s = &rw->s;
> + iov_iter_restore(&s->iter, &s->iter_state);
> + iovec = NULL;
> }
> req->result = iov_iter_count(&s->iter);
>
> - /* Ensure we clear previously set non-block flag */
> - if (!force_nonblock)
> - kiocb->ki_flags &= ~IOCB_NOWAIT;
> - else
> - kiocb->ki_flags |= IOCB_NOWAIT;
> + if (force_nonblock) {
> + /* If the file doesn't support async, just async punt */
> + if (unlikely(!io_file_supports_nowait(req, WRITE)))
> + goto copy_iov;
>
> - /* If the file doesn't support async, just async punt */
> - if (force_nonblock && !io_file_supports_nowait(req, WRITE))
> - goto copy_iov;
> + /* file path doesn't support NOWAIT for non-direct_IO */
> + if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
You can drop this 'force_nonblock' here, no? The enclosing
`if (force_nonblock)` already guarantees it's true.
> + (req->flags & REQ_F_ISREG))
> + goto copy_iov;
>
> - /* file path doesn't support NOWAIT for non-direct_IO */
> - if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
> - (req->flags & REQ_F_ISREG))
> - goto copy_iov;
> + kiocb->ki_flags |= IOCB_NOWAIT;
> + } else {
> + /* Ensure we clear previously set non-block flag */
> + kiocb->ki_flags &= ~IOCB_NOWAIT;
> + }
>
> ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
> if (unlikely(ret))
...
What about swapping the order of the conditions below?

    if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)

The ret2 check, a compare against a value already in a register, will
almost certainly be cheaper than the two pointer dereferences.
> --
> 2.33.0
>
Thread overview: 14+ messages
2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
2021-10-14 15:10 ` [PATCH 1/8] io_uring: consistent typing for issue_flags Pavel Begunkov
2021-10-14 15:10 ` [PATCH 2/8] io_uring: prioritise read success path over fails Pavel Begunkov
2021-10-14 15:10 ` [PATCH 3/8] io_uring: optimise rw comletion handlers Pavel Begunkov
2021-10-14 15:10 ` [PATCH 4/8] io_uring: encapsulate rw state Pavel Begunkov
2021-10-18 6:06 ` Hao Xu
2021-10-14 15:10 ` [PATCH 5/8] io_uring: optimise read/write iov state storing Pavel Begunkov
2021-10-14 15:10 ` [PATCH 6/8] io_uring: optimise io_import_iovec nonblock passing Pavel Begunkov
2021-10-14 15:10 ` [PATCH 7/8] io_uring: clean up io_import_iovec Pavel Begunkov
2021-10-14 15:10 ` [PATCH 8/8] io_uring: rearrange io_read()/write() Pavel Begunkov
2021-10-16 22:52 ` Noah Goldstein [this message]
2021-10-16 23:25 ` Pavel Begunkov
2021-10-17 1:35 ` Noah Goldstein
2021-10-14 18:17 ` [PATCH for-next 0/8] read/write cleanup Jens Axboe