From: Jens Axboe <axboe@kernel.dk>
To: Caleb Sander Mateos <csander@purestorage.com>
Cc: io-uring@vger.kernel.org
Subject: Re: [PATCH 2/4] io_uring: add struct io_cold_def->sqe_copy() method
Date: Sat, 7 Jun 2025 05:16:39 -0600
Message-ID: <950a9907-6820-4c2f-9901-8454b418e884@kernel.dk>
In-Reply-To: <CADUfDZq45a9K9SHEeTTFU5vpbbkFtOhjpW_ovAiV_Y-Xbdy=uA@mail.gmail.com>
On 6/6/25 6:50 PM, Caleb Sander Mateos wrote:
> On Fri, Jun 6, 2025 at 2:56 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Will be called by the core of io_uring, if inline issue is not going
>> to be tried for a request. Opcodes can define this handler to defer
>> copying of SQE data that should remain stable.
>>
>> Only called if IO_URING_F_INLINE is set. If it isn't set, there's a
>> bug in the core handling of this, and -EFAULT is returned to
>> terminate the request, along with a WARN_ON_ONCE(). This isn't
>> expected to ever trigger, and down the line the check can be removed.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>> io_uring/io_uring.c | 25 ++++++++++++++++++++++---
>> io_uring/opdef.h | 1 +
>> 2 files changed, 23 insertions(+), 3 deletions(-)
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index 0f9f6a173e66..9799a31a2b29 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1935,14 +1935,31 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd)
>> return file;
>> }
>>
>> -static void io_queue_async(struct io_kiocb *req, int ret)
>> +static int io_req_sqe_copy(struct io_kiocb *req, unsigned int issue_flags)
>> +{
>> + const struct io_cold_def *def = &io_cold_defs[req->opcode];
>> +
>> + if (!def->sqe_copy)
>> + return 0;
>> + if (WARN_ON_ONCE(!(issue_flags & IO_URING_F_INLINE)))
>
> I'm pretty confident that every initial async path under
> io_submit_sqe() will call io_req_sqe_copy(). But I'm not positive that
> io_req_sqe_copy() won't get called *additional* times from non-inline
> contexts. One example scenario:
> - io_submit_sqe() calls io_queue_sqe()
> - io_issue_sqe() returns -EAGAIN, so io_queue_sqe() calls io_queue_async()
> - io_queue_async() calls io_req_sqe_copy() in inline context
> - io_queue_async() calls io_arm_poll_handler(), which returns
> IO_APOLL_READY, so io_req_task_queue() is called
> - Some other I/O to the file (possibly on a different task) clears the
> ready poll events
> - io_req_task_submit() calls io_queue_sqe() in task work context
> - io_issue_sqe() returns -EAGAIN again, so io_queue_async() is called
> - io_queue_async() calls io_req_sqe_copy() a second time in non-inline
> (task work) context
>
> If this is indeed possible, then I think we may need to relax this
> check so it only verifies that IO_URING_F_INLINE is set *the first
> time* io_req_sqe_copy() is called for a given req. (Or just remove the
> IO_URING_F_INLINE check entirely.)
Yes, the check is a bit eager indeed. I've added a flag for this, so
that we only go through the IO_URING_F_INLINE check and ->sqe_copy()
callback once.
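
Something like this, totally untested (and the REQ_F_SQE_COPIED flag
name below is just a placeholder, the actual name may differ):

static int io_req_sqe_copy(struct io_kiocb *req, unsigned int issue_flags)
{
	const struct io_cold_def *def = &io_cold_defs[req->opcode];

	/* only run the inline check and copy once per request */
	if (req->flags & REQ_F_SQE_COPIED)
		return 0;
	req->flags |= REQ_F_SQE_COPIED;
	if (!def->sqe_copy)
		return 0;
	if (WARN_ON_ONCE(!(issue_flags & IO_URING_F_INLINE)))
		return -EFAULT;
	def->sqe_copy(req);
	return 0;
}
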
>> + return -EFAULT;
>> + def->sqe_copy(req);
>> + return 0;
>> +}
>> +
>> +static void io_queue_async(struct io_kiocb *req, unsigned int issue_flags, int ret)
>> __must_hold(&req->ctx->uring_lock)
>> {
>> if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
>> +fail:
>> io_req_defer_failed(req, ret);
>> return;
>> }
>>
>> + ret = io_req_sqe_copy(req, issue_flags);
>> + if (unlikely(ret))
>> + goto fail;
>> +
>> switch (io_arm_poll_handler(req, 0)) {
>> case IO_APOLL_READY:
>> io_kbuf_recycle(req, 0);
>> @@ -1971,7 +1988,7 @@ static inline void io_queue_sqe(struct io_kiocb *req, unsigned int extra_flags)
>> * doesn't support non-blocking read/write attempts
>> */
>> if (unlikely(ret))
>> - io_queue_async(req, ret);
>> + io_queue_async(req, issue_flags, ret);
>> }
>>
>> static void io_queue_sqe_fallback(struct io_kiocb *req)
>> @@ -1986,6 +2003,8 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
>> req->flags |= REQ_F_LINK;
>> io_req_defer_failed(req, req->cqe.res);
>> } else {
>> + /* can't fail with IO_URING_F_INLINE */
>> + io_req_sqe_copy(req, IO_URING_F_INLINE);
>> if (unlikely(req->ctx->drain_active))
>> io_drain_req(req);
>> else
>> @@ -2201,7 +2220,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> link->last = req;
>>
>> if (req->flags & IO_REQ_LINK_FLAGS)
>> - return 0;
>> + return io_req_sqe_copy(req, IO_URING_F_INLINE);
>
> I still think this misses the last req in a linked chain, which will
> be issued async but won't have IO_REQ_LINK_FLAGS set. Am I missing
> something?
Indeed, we need to do the copy before that section, so the last request
in a link chain gets it as well. Fixed that up too, thanks.
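
The link branch now does the copy for every request appended to the
chain, roughly like this (untested, surrounding context trimmed):

	if (unlikely(link->head)) {
		/* any member of a link chain may get issued async, copy now */
		io_req_sqe_copy(req, IO_URING_F_INLINE);
		link->last->link = req;
		link->last = req;

		if (req->flags & IO_REQ_LINK_FLAGS)
			return 0;
		/* tail of the chain, the head gets flushed below */
		...
	}
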
--
Jens Axboe