public inbox for [email protected]
From: Marcelo Diop-Gonzalez <[email protected]>
To: Pavel Begunkov <[email protected]>
Cc: [email protected], [email protected]
Subject: Re: [PATCH v2 1/2] io_uring: only increment ->cq_timeouts along with ->cached_cq_tail
Date: Mon, 4 Jan 2021 11:49:52 -0500	[thread overview]
Message-ID: <CA+saATWskd9u9iSt6G-FaXmzw=n2osAasxBWGCEup2esbZE1XQ@mail.gmail.com>
In-Reply-To: <[email protected]>

Yeah, I agree this one is kind of ugly... I'll try to think of a different way.

-Marcelo

On Sat, Jan 2, 2021 at 3:07 PM Pavel Begunkov <[email protected]> wrote:
>
> On 19/12/2020 19:15, Marcelo Diop-Gonzalez wrote:
> > The quantity ->cached_cq_tail - ->cq_timeouts is used to tell how many
> > non-timeout events have happened, but this subtraction could overflow
> > if ->cq_timeouts is incremented more times than ->cached_cq_tail.
> > This may be unlikely, but it can currently happen if a timeout event
> > overflows the cqring: in that case io_get_cqring() doesn't increment
> > ->cached_cq_tail, but ->cq_timeouts is still incremented by the
> > caller. Fix it by incrementing ->cq_timeouts inside io_get_cqring().
> >
> > Signed-off-by: Marcelo Diop-Gonzalez <[email protected]>
> > ---
> >  fs/io_uring.c | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/fs/io_uring.c b/fs/io_uring.c
> > index f3690dfdd564..f394bf358022 100644
> > --- a/fs/io_uring.c
> > +++ b/fs/io_uring.c
> > @@ -1582,8 +1582,6 @@ static void io_kill_timeout(struct io_kiocb *req)
> >
> >       ret = hrtimer_try_to_cancel(&io->timer);
> >       if (ret != -1) {
> > -             atomic_set(&req->ctx->cq_timeouts,
> > -                     atomic_read(&req->ctx->cq_timeouts) + 1);
> >               list_del_init(&req->timeout.list);
> >               io_cqring_fill_event(req, 0);
> >               io_put_req_deferred(req, 1);
> > @@ -1664,7 +1662,7 @@ static inline bool io_sqring_full(struct io_ring_ctx *ctx)
> >       return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == r->sq_ring_entries;
> >  }
> >
> > -static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
> > +static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx, u8 opcode)
> >  {
> >       struct io_rings *rings = ctx->rings;
> >       unsigned tail;
> > @@ -1679,6 +1677,10 @@ static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
> >               return NULL;
> >
> >       ctx->cached_cq_tail++;
> > +     if (opcode == IORING_OP_TIMEOUT)
> > +             atomic_set(&ctx->cq_timeouts,
> > +                        atomic_read(&ctx->cq_timeouts) + 1);
> > +
>
> I don't think I like it. The function is pretty hot, so I wouldn't want
> that extra burden just for timeouts, which should be cold enough,
> especially with the new timeout CQ waits. Also, passing the opcode in
> here is awkward and not great abstraction-wise.
>
> >       return &rings->cqes[tail & ctx->cq_mask];
> >  }
> >
> > @@ -1728,7 +1730,7 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
> >               if (!io_match_task(req, tsk, files))
> >                       continue;
> >
> > -             cqe = io_get_cqring(ctx);
> > +             cqe = io_get_cqring(ctx, req->opcode);
> >               if (!cqe && !force)
> >                       break;
> >
> > @@ -1776,7 +1778,7 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
> >        * submission (by quite a lot). Increment the overflow count in
> >        * the ring.
> >        */
> > -     cqe = io_get_cqring(ctx);
> > +     cqe = io_get_cqring(ctx, req->opcode);
> >       if (likely(cqe)) {
> >               WRITE_ONCE(cqe->user_data, req->user_data);
> >               WRITE_ONCE(cqe->res, res);
> > @@ -5618,8 +5620,6 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
> >
> >       spin_lock_irqsave(&ctx->completion_lock, flags);
> >       list_del_init(&req->timeout.list);
> > -     atomic_set(&req->ctx->cq_timeouts,
> > -             atomic_read(&req->ctx->cq_timeouts) + 1);
> >
> >       io_cqring_fill_event(req, -ETIME);
> >       io_commit_cqring(ctx);
> >
>
> --
> Pavel Begunkov


Thread overview: 16+ messages
2020-12-19 19:15 [PATCH v2 0/2] io_uring: fix skipping of old timeout events Marcelo Diop-Gonzalez
2020-12-19 19:15 ` [PATCH v2 1/2] io_uring: only increment ->cq_timeouts along with ->cached_cq_tail Marcelo Diop-Gonzalez
2021-01-02 20:03   ` Pavel Begunkov
2021-01-04 16:49     ` Marcelo Diop-Gonzalez [this message]
2020-12-19 19:15 ` [PATCH v2 2/2] io_uring: flush timeouts that should already have expired Marcelo Diop-Gonzalez
2021-01-02 19:54   ` Pavel Begunkov
2021-01-02 20:26     ` Pavel Begunkov
2021-01-08 15:57       ` Marcelo Diop-Gonzalez
2021-01-11  4:57         ` Pavel Begunkov
2021-01-11 15:28           ` Marcelo Diop-Gonzalez
2021-01-12 20:47         ` Pavel Begunkov
2021-01-13 14:41           ` Marcelo Diop-Gonzalez
2021-01-13 15:20             ` Pavel Begunkov
2021-01-14  0:46           ` Marcelo Diop-Gonzalez
2021-01-14 21:04             ` Pavel Begunkov
2021-01-04 17:56     ` Marcelo Diop-Gonzalez
