public inbox for [email protected]
From: Stefan Roesch <[email protected]>
To: Jakub Kicinski <[email protected]>
Cc: [email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected]
Subject: Re: [PATCH v13 1/7] net: split off __napi_busy_poll from napi_busy_poll
Date: Thu, 01 Jun 2023 21:12:10 -0700	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>


Jakub Kicinski <[email protected]> writes:

> On Wed, 31 May 2023 12:16:50 -0700 Stefan Roesch wrote:
>> > This will conflict with:
>> >
>> >     https://git.kernel.org/netdev/net-next/c/c857946a4e26
>> >
>> > :( Not sure what to do about it..
>> >
>> > Maybe we can merge a simpler version to unblock io-uring (just add
>> > need_resched() to your loop_end callback and you'll get the same
>> > behavior). Refactor in net-next in parallel. Then once trees converge
>> > do a simple cleanup and call the _rcu version?
>>
>> Jakub, I can certainly call need_resched() in the loop_end callback, but
>> isn't there a potential race? need_resched() in the loop_end callback
>> might not return true, but the need_resched() call in napi_busy_poll
>> does?
>
> need_resched() is best effort. It gets added to potentially long
> execution paths and loops. An extra single round through the loop
> won't make a difference.
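
For concreteness, folding need_resched() into the loop_end callback would
amount to something like the sketch below. This is only a sketch; the
callback name and the checks on 'data' are illustrative, not the exact
code from this series:

/* Sketch only: make loop_end also stop on need_resched(), so that
 * napi_busy_loop() normally exits via the loop_end path instead of its
 * internal need_resched() branch.
 */
static bool io_napi_loop_should_end(void *data, unsigned long start_time)
{
	if (need_resched() || signal_pending(current))
		return true;

	/* ... the existing wakeup/timeout checks on 'data' go here ... */
	return false;
}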

I might be missing something, but at a high level what can happen is:

io_napi_blocking_busy_loop()
  rcu_read_lock()
  __io_napi_do_busy_loop()
  rcu_read_unlock()

in __io_napi_do_busy_loop() we do

__io_napi_do_busy_loop()
    list_for_each_entry_rcu()
    napi_busy_loop()
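
In C this is roughly the following (heavily condensed; field and helper
names are only illustrative, not necessarily what the series uses, and
the loop_end callback is the sketch from above):

/* Condensed sketch of the io_uring side; stale-entry handling and the
 * single-entry special case are elided, names are illustrative.
 */
static void __io_napi_do_busy_loop(struct io_ring_ctx *ctx, void *loop_end_arg)
{
	struct io_napi_entry *e;

	/* walk the tracked napi ids under the caller's rcu_read_lock() */
	list_for_each_entry_rcu(e, &ctx->napi_list, list)
		napi_busy_loop(e->napi_id, io_napi_loop_should_end,
			       loop_end_arg, ctx->napi_prefer_busy_poll,
			       16 /* poll budget, illustrative */);
}

static void io_napi_blocking_busy_loop(struct io_ring_ctx *ctx,
				       struct io_wait_queue *iowq)
{
	rcu_read_lock();			/* outer read-side section */
	__io_napi_do_busy_loop(ctx, iowq);
	rcu_read_unlock();
}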


and in napi_busy_loop()

napi_busy_loop()
  rcu_read_lock()
  __napi_busy_poll()
  loop_end()
  if (need_resched()) {
    rcu_read_unlock()
    schedule()
  }


The problem with checking need_resched() in loop_end is that need_resched()
can be false when loop_end runs, while the later need_resched() check in
napi_busy_loop() succeeds. In that case napi_busy_loop() drops its RCU read
lock and calls schedule(), while the code in io_napi_blocking_busy_loop()
still believes we hold the read lock.
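
To make the ordering concrete, the relevant part of napi_busy_loop() looks
roughly like this (condensed from net/core/dev.c, not verbatim; the
preempt/bh handling, busy_poll_stop() and statistics are elided):

void napi_busy_loop(unsigned int napi_id,
		    bool (*loop_end)(void *, unsigned long),
		    void *loop_end_arg, bool prefer_busy_poll, u16 budget)
{
	unsigned long start_time = loop_end ? busy_loop_current_time() : 0;
	struct napi_struct *napi;

restart:
	rcu_read_lock();

	napi = napi_by_id(napi_id);
	if (!napi)
		goto out;

	for (;;) {
		napi->poll(napi, budget);	/* simplified poll step */

		if (!loop_end || loop_end(loop_end_arg, start_time))
			break;		/* leaves with rcu_read_lock() held */

		if (unlikely(need_resched())) {	/* second, independent check */
			rcu_read_unlock();
			cond_resched();	/* may sleep, even though the io_uring
					 * caller still holds its own
					 * rcu_read_lock() */
			if (loop_end(loop_end_arg, start_time))
				return;
			goto restart;	/* re-takes rcu_read_lock() */
		}
		cpu_relax();
	}
out:
	rcu_read_unlock();
}

Exiting through loop_end() keeps the locking balanced for the caller; the
problematic path is only the need_resched() branch, and it can still be
taken even if loop_end() has just returned false.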


Thread overview: 26+ messages
2023-05-18 21:17 [PATCH v13 0/7] io_uring: add napi busy polling support Stefan Roesch
2023-05-18 21:17 ` [PATCH v13 1/7] net: split off __napi_busy_poll from napi_busy_poll Stefan Roesch
2023-05-31 17:26   ` Jakub Kicinski
2023-06-05 17:47     ` Stefan Roesch
2023-06-05 18:00       ` Jakub Kicinski
     [not found]   ` <[email protected]>
     [not found]     ` <[email protected]>
2023-06-01  4:15       ` Jakub Kicinski
2023-06-02  4:12         ` Stefan Roesch [this message]
2023-06-02  4:26           ` Jakub Kicinski
2023-05-18 21:17 ` [PATCH v13 2/7] net: introduce napi_busy_loop_rcu() Stefan Roesch
     [not found]   ` <[email protected]>
2023-05-31 17:38     ` Jakub Kicinski
2023-06-05 17:45     ` Stefan Roesch
2023-05-18 21:17 ` [PATCH v13 3/7] io-uring: move io_wait_queue definition to header file Stefan Roesch
2023-05-18 21:17 ` [PATCH v13 4/7] io-uring: add napi busy poll support Stefan Roesch
2023-05-19  1:26   ` Jens Axboe
2023-05-19 23:11     ` Stefan Roesch
2023-05-19  9:53   ` Simon Horman
2023-05-19 23:17     ` Stefan Roesch
2023-05-18 21:17 ` [PATCH v13 5/7] io-uring: add sqpoll support for napi busy poll Stefan Roesch
2023-05-19  0:11   ` kernel test robot
2023-05-19  1:13     ` Jens Axboe
2023-05-19 23:29       ` Stefan Roesch
2023-05-19  4:35   ` kernel test robot
2023-05-18 21:17 ` [PATCH v13 6/7] io_uring: add register/unregister napi function Stefan Roesch
2023-05-19  1:30   ` Jens Axboe
2023-05-18 21:17 ` [PATCH v13 7/7] io_uring: add prefer busy poll to register and unregister napi api Stefan Roesch
2023-05-19  1:31 ` [PATCH v13 0/7] io_uring: add napi busy polling support Jens Axboe
