From: Olivier Langlois <[email protected]>
To: Hao Xu <[email protected]>, Jens Axboe <[email protected]>
Cc: Pavel Begunkov <[email protected]>,
io-uring <[email protected]>,
linux-kernel <[email protected]>
Subject: Re: [PATCH v1] io_uring: Add support for napi_busy_poll
Date: Fri, 25 Feb 2022 10:32:57 -0500
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On Fri, 2022-02-25 at 00:32 -0500, Olivier Langlois wrote:
>
> > >
> > > +#ifdef CONFIG_NET_RX_BUSY_POLL
> > > +static void io_adjust_busy_loop_timeout(struct timespec64 *ts,
> > > + struct io_wait_queue *iowq)
> > > +{
> > > + unsigned busy_poll_to = READ_ONCE(sysctl_net_busy_poll);
> > > + struct timespec64 pollto = ns_to_timespec64(1000 * (s64)busy_poll_to);
> > > +
> > > + if (timespec64_compare(ts, &pollto) > 0) {
> > > + *ts = timespec64_sub(*ts, pollto);
> > > + iowq->busy_poll_to = busy_poll_to;
> > > + } else {
> > > + iowq->busy_poll_to = timespec64_to_ns(ts) / 1000;
> >
> > How about timespec64_to_ns(ts) >> 10, since we don't need an accurate
> > number?
>
> Fantastic suggestion! The kernel test robot also detected an issue
> with that statement. I discovered do_div() in the meantime, but what
> you suggest is better, IMHO...
After having seen Jens' patch (io_uring: don't convert to jiffies for
waiting on timeouts), I think I'll stick with do_div().
I have a hard time justifying a loss of timing accuracy at the same
time an effort is being made to make that very function more accurate...
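Concretely, the else branch of io_adjust_busy_loop_timeout() would
become something like this (an illustrative sketch only, not the exact
v2 code; do_div() comes from linux/math64.h):

	} else {
		u64 to = timespec64_to_ns(ts);

		/* convert ns -> us while keeping full accuracy */
		do_div(to, 1000);
		iowq->busy_poll_to = to;
	}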
>
>
> > > + !io_busy_loop_end(iowq, start_time));
> > > +}
> > > +#endif /* CONFIG_NET_RX_BUSY_POLL */
> > > +
> > > /*
> > > * Wait until events become available, if we don't already have some. The
> > > * application must reap them itself, as they reside on the shared cq ring.
> > > @@ -7729,12 +7906,20 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
> > > if (!io_run_task_work())
> > > break;
> > > } while (1);
> > > -
> > > +#ifdef CONFIG_NET_RX_BUSY_POLL
> > > + iowq.busy_poll_to = 0;
> > > +#endif
> > > if (uts) {
> > > struct timespec64 ts;
> > >
> > > if (get_timespec64(&ts, uts))
> > > return -EFAULT;
> > > +#ifdef CONFIG_NET_RX_BUSY_POLL
> > > + if (!(ctx->flags & IORING_SETUP_SQPOLL) &&
> > > + !list_empty(&ctx->napi_list)) {
> > > + io_adjust_busy_loop_timeout(&ts, &iowq);
> > > + }
> > > +#endif
> > > timeout = timespec64_to_jiffies(&ts);
> > > }
> > >
> > > @@ -7759,6 +7944,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
> > > iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
> > >
> > > trace_io_uring_cqring_wait(ctx, min_events);
> > > +#ifdef CONFIG_NET_RX_BUSY_POLL
> > > + if (iowq.busy_poll_to)
> > > + io_blocking_napi_busy_loop(ctx, &iowq);
> >
> > We may not need locks for the napi_list; the reason is that we don't
> > need to poll an accurate list, since busy polling/NAPI is itself a
> > kind of speculation. So the deletion is not an emergency.
> > At the very least, we can probably delay the deletion to some safe
> > place like the original task's task work, though this may cause
> > other problems...
>
> There are 2 concerns here.
>
> 1. Iterating a list while another thread modifies it is not
> thread-safe unless you use a lock.
>
> If we offer napi_busy_poll() without sqpoll, through the modification
> in io_cqring_wait(), this is a real possibility. A thread could call
> io_uring_enter(IORING_ENTER_GETEVENTS) while another thread calls
> io_uring_enter() to submit new sqes that could trigger a call to
> io_add_napi().
>
> If napi_busy_poll() is only offered through the sqpoll thread, this
> becomes a non-issue, since the only thread accessing/modifying the
> napi_list field is the sqpoll thread.
>
> Providing the patch benchmark results with v2 could help in deciding
> what to do with this choice.
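For illustration, one way to make the non-sqpoll walk safe without
holding a lock across napi_busy_loop() (which may schedule) would be to
snapshot the list first. A rough sketch, with a hypothetical
ctx->napi_lock spinlock guarding the list:

	LIST_HEAD(local_list);
	struct napi_entry *ne;

	/* take a private snapshot so the walk itself runs unlocked */
	spin_lock(&ctx->napi_lock);
	list_splice_init(&ctx->napi_list, &local_list);
	spin_unlock(&ctx->napi_lock);

	list_for_each_entry(ne, &local_list, list)
		napi_busy_loop(ne->napi_id, io_busy_loop_end, iowq,
			       true, BUSY_POLL_BUDGET);

	/* put the entries back for the next waiter */
	spin_lock(&ctx->napi_lock);
	list_splice(&local_list, &ctx->napi_list);
	spin_unlock(&ctx->napi_lock);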
>
> 2. You are correct when you say that deletion is not an emergency.
>
> However, the design guideline that I followed when writing the patch
> is that napi_busy_poll support should not impact users not using this
> feature. Doing the deletion where the patch currently does it fulfills
> this goal.
>
> Comparing a timeout value with the jiffies variable is very cheap and
> will only be performed when napi_busy_poll is used.
>
> The other option would be to add a refcount to each napi_entry and
> decrement it, if needed, every time a request is discarded. Doing
> that check for every request that io_uring discards on completion
> would, I am very confident, negatively impact the various performance
> benchmarks that Jens routinely performs...
>
Another fact to consider is that I expect the content of napi_list to
be extremely stable; regular entry deletion should not be a thing.
Postponing the deletion using task work is not an option either: how
would io_busy_loop_end() discern between a pending list entry deletion
and any other task work making the busy looping stop?
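To be explicit about what stays in the polling path, the deletion I
have in mind is essentially a jiffies timestamp comparison (schematic;
ne->timeout is the per-entry jiffies stamp that io_add_napi() refreshes
on each use):

	struct napi_entry *ne, *n;

	/* prune stale entries; only runs when napi_busy_poll is in use */
	list_for_each_entry_safe(ne, n, &ctx->napi_list, list) {
		if (time_after(jiffies, ne->timeout)) {
			list_del(&ne->list);
			kfree(ne);
		}
	}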