From: Pavel Begunkov <[email protected]>
To: Marcelo Diop-Gonzalez <[email protected]>, [email protected]
Cc: [email protected]
Subject: Re: [PATCH v3 1/1] io_uring: flush timeouts that should already have expired
Date: Thu, 14 Jan 2021 21:40:01 +0000
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 14/01/2021 15:50, Marcelo Diop-Gonzalez wrote:
> Right now io_flush_timeouts() checks if the current number of events
> is equal to ->timeout.target_seq, but this will miss some timeouts if
> there have been more than 1 event added since the last time they were
> flushed (possible in io_submit_flush_completions(), for example). Fix
> it by recording the last sequence at which timeouts were flushed so
> that the number of events seen can be compared to the number of events
> needed without overflow.
Looks good, but there is a little change I'd ask you to make (see
below). In the meanwhile I'll test it, so the patch is on the fast track.
>
> Signed-off-by: Marcelo Diop-Gonzalez <[email protected]>
> ---
> fs/io_uring.c | 29 +++++++++++++++++++++++++----
> 1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 372be9caf340..71d8fa0733ad 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -354,6 +354,7 @@ struct io_ring_ctx {
> unsigned cq_entries;
> unsigned cq_mask;
> atomic_t cq_timeouts;
> + unsigned cq_last_tm_flush;
> unsigned long cq_check_overflow;
> struct wait_queue_head cq_wait;
> struct fasync_struct *cq_fasync;
> @@ -1639,19 +1640,36 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
>
> static void io_flush_timeouts(struct io_ring_ctx *ctx)
> {
> - while (!list_empty(&ctx->timeout_list)) {
> + u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
This assignment should go after the list_empty() check -- because of
the atomic part the compiler can't reshuffle them itself, so as written
we'd always pay for the atomic_read() even when the list is empty. See
the sketch below the quoted lines.
> +
> + if (list_empty(&ctx->timeout_list))
> + return;
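I.e. something like this (untested, just to show the ordering):

	static void io_flush_timeouts(struct io_ring_ctx *ctx)
	{
		u32 seq;

		if (list_empty(&ctx->timeout_list))
			return;

		seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);

		/* ... walk the list and flush expired timeouts as in the patch ... */
	}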
[...]
> static void io_commit_cqring(struct io_ring_ctx *ctx)
> @@ -5837,6 +5855,9 @@ static int io_timeout(struct io_kiocb *req)
> tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
> req->timeout.target_seq = tail + off;
>
> + /* Update the last seq here in case io_flush_timeouts() hasn't */
> + ctx->cq_last_tm_flush = tail;
Have to note that this is OK only because we don't mix submissions and
completions, so io_timeout() should never fall under the same
completion_lock section as a CQ commit. Otherwise, some future locked
version of io_timeout() could cut off a part of the current flush
window (i.e. this [last, cur] range) by advancing cq_last_tm_flush past
timeouts that were armed in it but not yet flushed.
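To illustrate the window-cutting with made-up numbers (hypothetical,
using the simplified names from the sketch above):

	/*
	 * last == 100, a flush is in flight with cur == 110, and a
	 * timeout armed at target_seq == 105 is still on the list.
	 * If a locked io_timeout() set cq_last_tm_flush = 110 mid-flush:
	 *
	 *	events_needed = 105 - 110	-> 0xfffffffb
	 *	events_got    = 110 - 110	-> 0
	 *
	 * got < needed, so the timeout at 105 would never be flushed.
	 */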
--
Pavel Begunkov