From: Caleb Sander Mateos <csander@purestorage.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: io-uring@vger.kernel.org
Subject: Re: [PATCH 2/4] io_uring: add struct io_cold_def->sqe_copy() method
Date: Fri, 6 Jun 2025 10:36:48 -0700
Message-ID: <CADUfDZq1LaxzeuTDqMjF0H8L5cC36-dqZRhBYEsGQDjZFrZycw@mail.gmail.com>
In-Reply-To: <20250605194728.145287-3-axboe@kernel.dk>

On Thu, Jun 5, 2025 at 12:47 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> Will be called by the core of io_uring if inline issue is not going
> to be tried for a request. Opcodes can define this handler to defer
> copying of SQE data that should remain stable.
>
> Called with IO_URING_F_INLINE set if this is an inline issue, and that
> flag NOT set if it's an out-of-line call. The handler can use this to
> determine if it's still safe to copy the SQE. The core should always
> guarantee that it will be safe, but by having this flag available the
> handler is able to check and fail.
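
To make sure I'm reading the contract right, a handler would look
roughly like this (io_foo and its fields are invented for
illustration, and this ignores SQE128 for brevity; patch 4 adds the
real uring_cmd implementation):

static int io_foo_sqe_copy(struct io_kiocb *req,
			   const struct io_uring_sqe *sqe,
			   unsigned int issue_flags)
{
	struct io_foo *foo = io_kiocb_to_cmd(req, struct io_foo);

	/* out-of-line call: the SQE memory is no longer guaranteed stable */
	if (!(issue_flags & IO_URING_F_INLINE))
		return -EFAULT;
	memcpy(foo->sqe_copy, sqe, sizeof(*sqe));
	foo->sqe = foo->sqe_copy;
	return 0;
}
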
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  io_uring/io_uring.c | 32 ++++++++++++++++++++++++--------
>  io_uring/opdef.h    |  1 +
>  2 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 079a95e1bd82..fdf23e81c4ff 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -147,7 +147,7 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
>                                          bool cancel_all,
>                                          bool is_sqpoll_thread);
>
> -static void io_queue_sqe(struct io_kiocb *req);
> +static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe);
>  static void __io_req_caches_free(struct io_ring_ctx *ctx);
>
>  static __read_mostly DEFINE_STATIC_KEY_FALSE(io_key_has_sqarray);
> @@ -1377,7 +1377,7 @@ void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw)
>         else if (req->flags & REQ_F_FORCE_ASYNC)
>                 io_queue_iowq(req);
>         else
> -               io_queue_sqe(req);
> +               io_queue_sqe(req, NULL);

Passing NULL here is a bit weird. As I mentioned on patch 1, I think
it would make more sense to consider this task work path a
"non-inline" issue.

>  }
>
>  void io_req_task_queue_fail(struct io_kiocb *req, int ret)
> @@ -1935,14 +1935,30 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd)
>         return file;
>  }
>
> -static void io_queue_async(struct io_kiocb *req, int ret)
> +static int io_req_sqe_copy(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> +                          unsigned int issue_flags)
> +{
> +       const struct io_cold_def *def = &io_cold_defs[req->opcode];
> +
> +       if (!def->sqe_copy)
> +               return 0;
> +       return def->sqe_copy(req, sqe, issue_flags);
> +}
> +
> +static void io_queue_async(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> +                          int ret)
>         __must_hold(&req->ctx->uring_lock)
>  {
>         if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
> +fail:
>                 io_req_defer_failed(req, ret);
>                 return;
>         }
>
> +       ret = io_req_sqe_copy(req, sqe, 0);
> +       if (unlikely(ret))
> +               goto fail;

It seems possible to avoid the goto by just adding "|| unlikely(ret =
io_req_sqe_copy(req, sqe, 0))" to the if condition above. But the
control flow isn't super convoluted with the goto, so I don't feel
strongly.
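
Spelled out, that would be something like (untested):

	if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT) ||
	    unlikely(ret = io_req_sqe_copy(req, sqe, 0))) {
		io_req_defer_failed(req, ret);
		return;
	}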

> +
>         switch (io_arm_poll_handler(req, 0)) {
>         case IO_APOLL_READY:
>                 io_kbuf_recycle(req, 0);
> @@ -1957,7 +1973,7 @@ static void io_queue_async(struct io_kiocb *req, int ret)
>         }
>  }
>
> -static inline void io_queue_sqe(struct io_kiocb *req)
> +static inline void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>         __must_hold(&req->ctx->uring_lock)
>  {
>         int ret;
> @@ -1970,7 +1986,7 @@ static inline void io_queue_sqe(struct io_kiocb *req)
>          * doesn't support non-blocking read/write attempts
>          */
>         if (unlikely(ret))
> -               io_queue_async(req, ret);
> +               io_queue_async(req, sqe, ret);
>  }
>
>  static void io_queue_sqe_fallback(struct io_kiocb *req)
> @@ -2200,7 +2216,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>                 link->last = req;
>
>                 if (req->flags & IO_REQ_LINK_FLAGS)
> -                       return 0;
> +                       return io_req_sqe_copy(req, sqe, IO_URING_F_INLINE);

This only copies the SQE for the middle reqs in a linked chain. Don't
we need to copy it for the last req too? I would call
io_req_sqe_copy() unconditionally before the req->flags &
IO_REQ_LINK_FLAGS check.
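
i.e. something along these lines (sketch, assuming a local ret):

	ret = io_req_sqe_copy(req, sqe, IO_URING_F_INLINE);
	if (unlikely(ret))
		return ret;

	if (req->flags & IO_REQ_LINK_FLAGS)
		return 0;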

>                 /* last request of the link, flush it */
>                 req = link->head;
>                 link->head = NULL;
> @@ -2216,10 +2232,10 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>  fallback:
>                         io_queue_sqe_fallback(req);
>                 }
> -               return 0;
> +               return io_req_sqe_copy(req, sqe, IO_URING_F_INLINE);

The first req in a linked chain hits this code path too, but it
doesn't usually need its SQE copied since it will still be issued
synchronously by default. (The io_queue_sqe_fallback() to handle
unterminated links in io_submit_state_end() might be an exception.)
But I am fine with copying the SQE for the first req too if it keeps
the code simpler.

If the first req in a chain also has REQ_F_FORCE_ASYNC | REQ_F_FAIL
set, it will actually reach this io_req_sqe_copy() twice (the second
time from the goto fallback path). I think io_req_sqe_copy() can just
move into the fallback block?
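
Roughly this, if I'm reading the structure right (sketch only, error
handling hand-waved and ret assumed local):

	} else if (unlikely(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
fallback:
		ret = io_req_sqe_copy(req, sqe, IO_URING_F_INLINE);
		if (unlikely(ret))
			return ret;
		io_queue_sqe_fallback(req);
		return 0;
	}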

>         }
>
> -       io_queue_sqe(req);
> +       io_queue_sqe(req, sqe);
>         return 0;
>  }
>
> diff --git a/io_uring/opdef.h b/io_uring/opdef.h
> index 719a52104abe..71bfaa3c8afd 100644
> --- a/io_uring/opdef.h
> +++ b/io_uring/opdef.h
> @@ -38,6 +38,7 @@ struct io_issue_def {
>  struct io_cold_def {
>         const char              *name;
>
> +       int (*sqe_copy)(struct io_kiocb *, const struct io_uring_sqe *, unsigned int issue_flags);

nit: this line is a tad long
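
e.g.:

	int (*sqe_copy)(struct io_kiocb *, const struct io_uring_sqe *,
			unsigned int issue_flags);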

Best,
Caleb

>         void (*cleanup)(struct io_kiocb *);
>         void (*fail)(struct io_kiocb *);
>  };
> --
> 2.49.0
>
