From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>,
David Wei <[email protected]>,
[email protected]
Subject: Re: [PATCH next v1 2/2] io_uring: limit local tw done
Date: Wed, 20 Nov 2024 18:12:12 -0700
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 11/20/24 4:56 PM, Pavel Begunkov wrote:
> On 11/20/24 22:14, David Wei wrote:
>> Instead of eagerly running all available local tw, limit the amount of
>> local tw done to the maximum of IO_LOCAL_TW_DEFAULT_MAX (20) and wait_nr.
>> The value of 20 was chosen as a reasonable heuristic that allows enough
>> work batching while still keeping latency down.
>>
>> Add a retry_llist that maintains a list of local tw that couldn't be
>> run within the limit. No synchronisation is needed since it is only
>> modified within the task context.
>>
>> Signed-off-by: David Wei <[email protected]>
>> ---
>>  include/linux/io_uring_types.h |  1 +
>>  io_uring/io_uring.c            | 43 +++++++++++++++++++++++++---------
>>  io_uring/io_uring.h            |  2 +-
>>  3 files changed, 34 insertions(+), 12 deletions(-)
>>
>> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
>> index 593c10a02144..011860ade268 100644
>> --- a/include/linux/io_uring_types.h
>> +++ b/include/linux/io_uring_types.h
>> @@ -336,6 +336,7 @@ struct io_ring_ctx {
>>  	 */
>>  	struct {
>>  		struct llist_head	work_llist;
>> +		struct llist_head	retry_llist;
>
> Fwiw, it probably doesn't matter, but this doesn't even need to be
> atomic: it's queued and spliced while holding ->uring_lock, and the
> pending check is also synchronised since there is only one possible
> task doing it.
>
>>  	unsigned long		check_cq;
>>  	atomic_t		cq_wait_nr;
>>  	atomic_t		cq_timeouts;
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index 83bf041d2648..c3a7d0197636 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -121,6 +121,7 @@
> ...
>>  static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
>>  			       int min_events)
>>  {
>>  	struct llist_node *node;
>>  	unsigned int loops = 0;
>> -	int ret = 0;
>> +	int ret, limit;
>>  
>>  	if (WARN_ON_ONCE(ctx->submitter_task != current))
>>  		return -EEXIST;
>>  	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
>>  		atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
>> +	limit = max(IO_LOCAL_TW_DEFAULT_MAX, min_events);
>>  again:
>> +	ret = __io_run_local_work_loop(&ctx->retry_llist.first, ts, limit);
>> +	if (ctx->retry_llist.first)
>> +		goto retry_done;
>> +
>>  	/*
>>  	 * llists are in reverse order, flip it back the right way before
>>  	 * running the pending items.
>>  	 */
>>  	node = llist_reverse_order(llist_del_all(&ctx->work_llist));
>> -	while (node) {
>> -		struct llist_node *next = node->next;
>> -		struct io_kiocb *req = container_of(node, struct io_kiocb,
>> -						    io_task_work.node);
>> -		INDIRECT_CALL_2(req->io_task_work.func,
>> -				io_poll_task_func, io_req_rw_complete,
>> -				req, ts);
>> -		ret++;
>> -		node = next;
>> -	}
>> +	ret = __io_run_local_work_loop(&node, ts, ret);
>
> One thing that is not so nice is that we now have this handling and
> these checks in the hot path, and __io_run_local_work_loop() most
> likely gets uninlined.
I don't think that really matters; the call itself is pretty light. The
main overhead in this function is not the function call, it's reordering
the requests and touching their cachelines.
I think it's pretty light as-is and actually looks pretty good. It's
also similar to how sqpoll chews through longer task_work lines in
chunks, and it's arguably a mistake that we allow huge depths of this
when we can avoid it with deferred task_work.
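
To make the capping concrete, here is a rough userspace sketch of the
idea (made-up names and plain pointers, not the actual kernel llist
code): run at most a fixed budget of items per pass, and let anything
left over sit on a retry list that only the draining task ever touches.

#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int id;
};

/* Run up to 'budget' items from *list, return the unused budget. */
static int run_capped(struct node **list, int budget)
{
	while (*list && budget > 0) {
		struct node *req = *list;

		*list = req->next;
		printf("running tw %d\n", req->id); /* stand-in for the tw callback */
		free(req);
		budget--;
	}
	return budget;
}

int main(void)
{
	struct node *work = NULL, *retry = NULL;
	int budget = 20; /* mirrors IO_LOCAL_TW_DEFAULT_MAX in the patch */
	int i;

	for (i = 0; i < 30; i++) {
		struct node *n = malloc(sizeof(*n));

		n->id = i;
		n->next = work; /* head insertion, same shape as llist_add() */
		work = n;
	}
	/* As in the patch, leftovers from the previous pass run first. */
	budget = run_capped(&retry, budget);
	budget = run_capped(&work, budget);
	retry = work; /* here 10 items are deferred to the next pass */
	printf("%s left over\n", retry ? "work" : "nothing");
	return 0;
}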
> I wonder, can we just requeue it via task_work again? We could even add
> a variant that efficiently adds a whole list instead of a single entry,
> i.e. local_task_work_add(head, tail, ...);
I think that can only work if we change work_llist to be a regular list
with regular locking. Otherwise it's a bit of a mess, with the list
getting reordered and extra cycles then spent potentially reordering all
the entries again.
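
For reference, the reordering cost looks like this: llist-style queueing
is LIFO, so a drained batch has to be reversed before it can run in
submission order, and pushing leftovers back one at a time flips them
yet again. A userspace sketch (assumed names, not the kernel llist API):

#include <stdio.h>

struct node {
	struct node *next;
	int id;
};

/* LIFO push, the same shape as llist_add() */
static struct node *push(struct node *head, struct node *n)
{
	n->next = head;
	return n;
}

static struct node *reverse(struct node *head)
{
	struct node *prev = NULL;

	while (head) {
		struct node *next = head->next;

		head->next = prev;
		prev = head;
		head = next;
	}
	return prev;
}

int main(void)
{
	struct node n[3] = { { NULL, 0 }, { NULL, 1 }, { NULL, 2 } };
	struct node *list = NULL, *requeued = NULL;
	struct node *one, *two, *p;
	int i;

	for (i = 0; i < 3; i++)
		list = push(list, &n[i]); /* queue 0, 1, 2 */
	list = reverse(list); /* flip back to submission order: 0, 1, 2 */

	/* Say only item 0 ran, and items 1 and 2 get requeued: */
	one = list->next;
	two = one->next;
	requeued = push(requeued, one);
	requeued = push(requeued, two);

	for (p = requeued; p; p = p->next)
		printf("%d ", p->id); /* prints "2 1": reversed again */
	printf("\n");
	return 0;
}

That second flip is the extra reordering referred to above; keeping the
leftovers on a separate single-owner retry list sidesteps it.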
> I'm also curious: what's the use case you've got that is hitting this
> problem?
I'll let David answer that one, but some task_work can take a while to
run, e.g. if it's not just posting a completion.
--
Jens Axboe