From: Pavel Begunkov <[email protected]>
To: Bui Quang Minh <[email protected]>, [email protected]
Cc: Jens Axboe <[email protected]>, [email protected]
Subject: Re: [RFC PATCH 2/2] io_uring/io-wq: try to batch multiple free work
Date: Fri, 21 Feb 2025 12:44:56 +0000
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 2/21/25 04:19, Bui Quang Minh wrote:
> Currently, in case we don't use IORING_SETUP_DEFER_TASKRUN, the io
> worker needs to add a task work each time it frees a work item. This
> creates contention on tctx->task_list. With this commit, the io worker
> queues freed work on a local list and flushes the whole batch in one
> call once the number of entries on the local list exceeds
> IO_REQ_ALLOC_BATCH.

I see no relation to IO_REQ_ALLOC_BATCH; that should be
a separate macro.
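
Something like the below, purely as an illustration (the name is made up):

	/* dedicated knob for the io-wq free-work batch size */
	#define IO_WQ_FREE_WORK_BATCH	32

and then that can be used in place of IO_REQ_ALLOC_BATCH below.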

> Signed-off-by: Bui Quang Minh <[email protected]>
> ---
>   io_uring/io-wq.c    | 62 +++++++++++++++++++++++++++++++++++++++++++--
>   io_uring/io-wq.h    |  4 ++-
>   io_uring/io_uring.c | 23 ++++++++++++++---
>   io_uring/io_uring.h |  6 ++++-
>   4 files changed, 87 insertions(+), 8 deletions(-)
> 
> diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
> index 5d0928f37471..096711707db9 100644
> --- a/io_uring/io-wq.c
> +++ b/io_uring/io-wq.c
...
> @@ -601,7 +622,41 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
>   			wq->do_work(work);
>   			io_assign_current_work(worker, NULL);
>   
> -			linked = wq->free_work(work);
> +			/*
> +			 * All requests in free list must have the same
> +			 * io_ring_ctx.
> +			 */
> +			if (last_added_ctx && last_added_ctx != req->ctx) {
> +				flush_req_free_list(&free_list, tail);
> +				tail = NULL;
> +				last_added_ctx = NULL;
> +				free_req = 0;
> +			}
> +
> +			/*
> +			 * Try to batch free work when
> +			 * !IORING_SETUP_DEFER_TASKRUN to reduce contention
> +			 * on tctx->task_list.
> +			 */
> +			if (req->ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> +				linked = wq->free_work(work, NULL, NULL);
> +			else
> +				linked = wq->free_work(work, &free_list, &did_free);

The problem here is that io-wq is blocking, and hence you lock up the
resources of already completed requests for who knows how long. In the
case of unbound requests (see IO_WQ_ACCT_UNBOUND) it's indefinite, and it
absolutely cannot be used without some kind of a timer. But even in the
case of bound work, it can be pretty long.

Maybe, for bound requests, it could target N as it does here, but also
read jiffies between requests and flush if it has been too long. In the
worst case the total delay would then be the last request's execution
time + DT. But even then it feels wrong, especially with filesystems
sometimes not even honouring NOWAIT.
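
Roughly along these lines, as a sketch only (IO_WQ_FREE_FLUSH_JIFFIES and
last_flush are made-up names here; the flush helper and counters are the
ones from the patch):

	/* inside the io_worker_handle_work() loop, after each request */
	if (free_req &&
	    (free_req == IO_REQ_ALLOC_BATCH ||
	     time_after(jiffies, last_flush + IO_WQ_FREE_FLUSH_JIFFIES))) {
		flush_req_free_list(&free_list, tail);
		tail = NULL;
		last_added_ctx = NULL;
		free_req = 0;
		last_flush = jiffies;
	}

with last_flush initialised to jiffies before the loop and
IO_WQ_FREE_FLUSH_JIFFIES being some small bound, e.g. HZ / 100.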

The question is, why do you force requests into the worker pool with the
IOSQE_ASYNC flag? It's generally not recommended, and the name of the
flag is confusing; it should have been something more like
"WORKER_OFFLOAD".
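
For reference, this is roughly what that looks like from userspace with
liburing (minimal sketch; the ring, fd, buf and len are assumed to be set
up elsewhere):

	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

	io_uring_prep_read(sqe, fd, buf, len, 0);
	/* skip the inline issue attempt, go straight to an io-wq worker */
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);
	io_uring_submit(&ring);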

> +
> +			if (did_free) {
> +				if (!tail)
> +					tail = free_list.first;
> +
> +				last_added_ctx = req->ctx;
> +				free_req++;
> +				if (free_req == IO_REQ_ALLOC_BATCH) {
> +					flush_req_free_list(&free_list, tail);
> +					tail = NULL;
> +					last_added_ctx = NULL;
> +					free_req = 0;
> +				}
> +			}
> +
>   			work = next_hashed;
>   			if (!work && linked && !io_wq_is_hashed(linked)) {
>   				work = linked;
> @@ -626,6 +681,9 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
>   			break;
>   		raw_spin_lock(&acct->lock);
>   	} while (1);
> +
> +	if (free_list.first)
> +		flush_req_free_list(&free_list, tail);
>   }
>   
...

-- 
Pavel Begunkov



Thread overview: 8+ messages
2025-02-21  4:19 [RFC PATCH 0/2] Batch free work in io-wq Bui Quang Minh
2025-02-21  4:19 ` [RFC PATCH 1/2] io_uring: make io_req_normal_work_add accept a list of requests Bui Quang Minh
2025-02-21  4:19 ` [RFC PATCH 2/2] io_uring/io-wq: try to batch multiple free work Bui Quang Minh
2025-02-21 12:44   ` Pavel Begunkov [this message]
2025-02-21 14:45     ` Bui Quang Minh
2025-02-21 14:52     ` Bui Quang Minh
2025-02-21 15:28       ` Pavel Begunkov
2025-02-21 15:41         ` Pavel Begunkov
