From: Pavel Begunkov <[email protected]>
To: David Wei <[email protected]>, [email protected]
Cc: Jens Axboe <[email protected]>
Subject: Re: [PATCH next v1 2/2] io_uring: limit local tw done
Date: Wed, 20 Nov 2024 23:56:15 +0000
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 11/20/24 22:14, David Wei wrote:
> Instead of eagerly running all available local tw, limit the amount of
> local tw done to the maximum of IO_LOCAL_TW_DEFAULT_MAX (20) and
> wait_nr. The value of 20 is chosen as a reasonable heuristic that
> allows enough work batching while still keeping latency down.
> 
> Add a retry_llist that maintains a list of local tw that couldn't be
> done in time. No synchronisation is needed since it is only modified
> within the task context.
> 
> Signed-off-by: David Wei <[email protected]>
> ---
>   include/linux/io_uring_types.h |  1 +
>   io_uring/io_uring.c            | 43 +++++++++++++++++++++++++---------
>   io_uring/io_uring.h            |  2 +-
>   3 files changed, 34 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> index 593c10a02144..011860ade268 100644
> --- a/include/linux/io_uring_types.h
> +++ b/include/linux/io_uring_types.h
> @@ -336,6 +336,7 @@ struct io_ring_ctx {
>   	 */
>   	struct {
>   		struct llist_head	work_llist;
> +		struct llist_head	retry_llist;

Fwiw, it probably doesn't matter, but it doesn't even need to be
atomic: it's queued and spliced while holding ->uring_lock, and the
pending check is also synchronised since there is only one possible
task doing it.
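
Something like the below would do, just to sketch the idea (the field
and helper names are made up, not from the patch):

/*
 * Non-atomic variant: all queueing/splicing happens with ->uring_lock
 * held and only the submitter task looks at it, so a plain pointer
 * works instead of an atomic llist_head.
 */
struct llist_node	*retry_list;	/* in struct io_ring_ctx */

static void io_requeue_local_retry(struct io_ring_ctx *ctx,
				   struct llist_node *leftover)
	__must_hold(&ctx->uring_lock)
{
	/* single writer under the lock, a plain store is enough */
	ctx->retry_list = leftover;
}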

>   		unsigned long		check_cq;
>   		atomic_t		cq_wait_nr;
>   		atomic_t		cq_timeouts;
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 83bf041d2648..c3a7d0197636 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -121,6 +121,7 @@
...
>   static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
>   			       int min_events)
>   {
>   	struct llist_node *node;
>   	unsigned int loops = 0;
> -	int ret = 0;
> +	int ret, limit;
>   
>   	if (WARN_ON_ONCE(ctx->submitter_task != current))
>   		return -EEXIST;
>   	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
>   		atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
> +	limit = max(IO_LOCAL_TW_DEFAULT_MAX, min_events);
>   again:
> +	ret = __io_run_local_work_loop(&ctx->retry_llist.first, ts, limit);
> +	if (ctx->retry_llist.first)
> +		goto retry_done;
> +
>   	/*
>   	 * llists are in reverse order, flip it back the right way before
>   	 * running the pending items.
>   	 */
>   	node = llist_reverse_order(llist_del_all(&ctx->work_llist));
> -	while (node) {
> -		struct llist_node *next = node->next;
> -		struct io_kiocb *req = container_of(node, struct io_kiocb,
> -						    io_task_work.node);
> -		INDIRECT_CALL_2(req->io_task_work.func,
> -				io_poll_task_func, io_req_rw_complete,
> -				req, ts);
> -		ret++;
> -		node = next;
> -	}
> +	ret = __io_run_local_work_loop(&node, ts, ret);

One thing that is not so nice is that we now have this extra handling
and checking in the hot path, and __io_run_local_work_loop() most
likely gets uninlined.
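
For reference, from the hunk above the helper presumably looks roughly
like this, walking up to `events` entries and returning the unused
budget (a reconstruction for context, not necessarily the exact patch
code):

static int __io_run_local_work_loop(struct llist_node **node,
				    struct io_tw_state *ts,
				    int events)
{
	while (*node) {
		struct llist_node *next = (*node)->next;
		struct io_kiocb *req = container_of(*node, struct io_kiocb,
						    io_task_work.node);

		INDIRECT_CALL_2(req->io_task_work.func,
				io_poll_task_func, io_req_rw_complete,
				req, ts);
		*node = next;
		if (--events <= 0)
			break;
	}
	/* remaining budget; the caller can pass it to the next call */
	return events;
}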

I wonder, can we just requeue it via task_work again? We can even
add a variant efficiently adding a list instead of a single entry,
i.e. local_task_work_add(head, tail, ...);
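
Roughly, and completely untested, something like:

/*
 * Splice a whole chain of leftover entries back onto ->work_llist in
 * one go. Note the list gets reversed before running, so ordering
 * against newly queued entries would need some care, and a real
 * version would also have to handle the cq_wait_nr wakeup the way
 * io_req_local_work_add() does.
 */
static void io_local_task_work_add_list(struct io_ring_ctx *ctx,
					struct llist_node *first,
					struct llist_node *last)
{
	if (llist_add_batch(first, last, &ctx->work_llist)) {
		/* list was empty, waiters may need waking */
	}
}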

I'm also curious: what's the use case you've got that is hitting
this problem?

-- 
Pavel Begunkov
