From: Olivier Langlois <[email protected]>
To: Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected]
Subject: Re: [PATCH 0/2] abstract napi tracking strategy
Date: Thu, 15 Aug 2024 18:17:29 -0400
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On Tue, 2024-08-13 at 15:44 -0600, Jens Axboe wrote:
> On 8/13/24 3:25 PM, Olivier Langlois wrote:
> > On Tue, 2024-08-13 at 12:33 -0600, Jens Axboe wrote:
> > > On 8/13/24 10:44 AM, Olivier Langlois wrote:
> > > > the current napi tracking strategy induces a non-negligible
> > > > overhead. Every time a multishot poll is triggered or any poll
> > > > is armed, if napi is enabled on the ring, a lookup is performed
> > > > to either add a new napi id into the napi_list or update its
> > > > timeout value.
> > > >
> > > > For many scenarios, this is overkill, as the napi id list will
> > > > be pretty much static most of the time. To address this common
> > > > scenario, a new abstraction has been created, following the
> > > > common Linux kernel idiom of creating an abstract interface
> > > > with a struct filled with function pointers.
> > > >
> > > > Creating an alternate napi tracking strategy is therefore done
> > > > in 2 phases:
> > > >
> > > > 1. Introduce the io_napi_tracking_ops interface
> > > > 2. Implement static napi tracking by defining a new
> > > >    io_napi_tracking_ops
> > >
> > > I don't think we should create ops for this, unless there's a
> > > strict need to do so. Indirect function calls aren't cheap, and
> > > the CPU side mitigations for security issues made them worse.
> > >
> > > You're not wrong that ops is not an uncommon idiom in the kernel,
> > > but it's a lot less prevalent as a solution than it used to be,
> > > exactly because of the above reasons.
> > >
> > ok. Do you have a reference explaining this?
> > and what type of construct would you use instead?
>
> See all the spectre nonsense, and the mitigations that followed from
> that.
>
> > AFAIK, a big performance killer is the branch mispredictions coming
> > from big switch/case or if/else if/else blocks, and it was precisely
> > the reason why you removed the big switch/case io_uring was having,
> > in favor of function pointers in io_issue_def...
>
> For sure, which is why io_uring itself ended up using indirect
> function calls, because the table just became unwieldy. But that's a
> different case from adding it for just a single case, or two. For
> those, branch prediction should be fine, as it would always have the
> same outcome.
>
> > I consume an enormous amount of programming learning material daily
> > and this is the first time that I am hearing this.
>
> The kernel and backend programming are a bit different in that
> regard, for better or for worse.
>
> > If there was a performance concern about this type of construct,
> > and considering that my main programming language is C++, I am a
> > bit surprised that I have not seen anything about some problems
> > with C++ vtbls...
>
> It's definitely slower than a direct function call, regardless of
> whether this is in the kernel or not. Can be mitigated by having the
> common case be predicted with a branch. See INDIRECT_CALL_*() in the
> kernel.
>
> > but oh well, I am learning new stuff every day, so please share the
> > references you have about the topic so that I can perfect my
> > knowledge.
>
> I think lwn had a recent thing on indirect function calls as it
> pertains to the security modules, I'd check that first. But the
> spectre thing above is likely all you need!
>
Jens,

thanks a lot for the clarifications. I will for sure investigate these
leads to better understand your concerns about function callbacks...

I have little interest in Spectre, the other mitigations and security
in general, so I know very little about those topics. Someone whose
knowledge I value a lot is Chandler Carruth from Google, who gave a
talk about Spectre in 2018:
https://www.youtube.com/watch?v=_f7O3IfIR2k

I will rewatch his talk and check LWN about the indirect function
calls and the INDIRECT_CALL_*() macros that you mention...
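If I understand the idea behind those macros correctly, the trick is
simply to compare the pointer against the most likely target so that
the common case becomes a predictable branch plus a direct call. Here
is a small userspace C sketch of my understanding (the macro is
simplified from include/linux/indirect_call_wrapper.h, and the
function names are placeholders borrowed from my patch, not actual
kernel code):

/*
 * Userspace illustration of the INDIRECT_CALL_*() idiom: guess the
 * most likely target, compare pointers, and take a direct call on the
 * hot path; only an unexpected target pays for the indirect call.
 */
#include <stdbool.h>
#include <stdio.h>

static bool static_tracking_do_busy_loop(void *ctx)
{
    (void)ctx;
    return true;
}

static bool dynamic_tracking_do_busy_loop(void *ctx)
{
    (void)ctx;
    return false;
}

/* simplified: the kernel version also wraps the comparison in likely() */
#define INDIRECT_CALL_1(f, f1, ...) \
    ((f) == (f1) ? (f1)(__VA_ARGS__) : (f)(__VA_ARGS__))

int main(int argc, char **argv)
{
    (void)argv;
    bool (*do_busy_loop)(void *) = argc > 1 ?
        dynamic_tracking_do_busy_loop : static_tracking_do_busy_loop;

    /* common case: compare + direct call to the predicted target */
    printf("%d\n", INDIRECT_CALL_1(do_busy_loop,
                                   static_tracking_do_busy_loop, NULL));
    return 0;
}

If that is the whole trick, I guess it could even be combined with an
ops struct so that the preferred strategy keeps a direct call on the
hot path.
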
AFAIK, the various kernel mitigations are mostly applied when
transitioning from kernel mode back to userspace, because otherwise
the compiled code of a userspace program would be pretty much
identical to the kernel compiled code.
To my eyes, what really matters is that the absolute best technical
solution is chosen, and the only way this discussion can be settled is
with numbers. So I have created a small benchmark program to compare a
function pointer indirect call vs selecting a function in a 3-branch
if/else if/else block. Here are the results:
----------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------
BM_test_virtual 0.628 ns 0.627 ns 930255515
BM_test_ifElse 1.59 ns 1.58 ns 446805050
Add to this result my concession in
https://lore.kernel.org/io-uring/[email protected]/
that you are right for 2 of the 3 function pointers of
io_napi_tracking_ops...

Hopefully this discussion will lead us toward the best solution. Keep
in mind that this point is a minuscule issue.
If you prefer the 2.5x slower construct, for any reason that you do
not have to justify, I'll accept your decision and rework my proposal
to go your way.
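If we go that way, I picture the reworked dispatch looking roughly
like this standalone sketch, with a mode enum instead of the ops
struct (every name here is made up for the example, not actual
io_uring code):

/*
 * Standalone sketch of a branch-based alternative: a small mode enum
 * plus a predictable if/else chain instead of a struct of function
 * pointers. All names are illustrative only.
 */
#include <stdbool.h>

enum napi_tracking_mode {
    NAPI_TRACKING_INACTIVE,
    NAPI_TRACKING_DYNAMIC,
    NAPI_TRACKING_STATIC,
};

static bool no_tracking_do_busy_loop(void)      { return false; }
static bool dynamic_tracking_do_busy_loop(void) { return true; }
static bool static_tracking_do_busy_loop(void)  { return true; }

static bool napi_do_busy_loop(enum napi_tracking_mode mode)
{
    if (mode == NAPI_TRACKING_STATIC)
        return static_tracking_do_busy_loop();
    else if (mode == NAPI_TRACKING_DYNAMIC)
        return dynamic_tracking_do_busy_loop();
    return no_tracking_do_busy_loop();
}

int main(void)
{
    /* the mode would be fixed at ring setup time, so the branch is
     * perfectly predictable in the busy poll loop */
    return !napi_do_busy_loop(NAPI_TRACKING_STATIC);
}
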
I believe that offering some form of static NAPI tracking is still a
very interesting feature, no matter the outcome of this very minor
technical issue.
code:
/*
 * virtual overhead vs if/else google benchmark
 *
 * Olivier Langlois - August 15, 2024
 *
 * compile cmd:
 * g++ -std=c++26 -I.. -pthread -Wall -g -O3 -pipe \
 *     -fno-omit-frame-pointer bench_virtual.cpp -lbenchmark \
 *     -o bench_virtual
 */
#include "benchmark/benchmark.h"

/*
 * CppCon 2015: Chandler Carruth "Tuning C++: Benchmarks, and CPUs, and
 * Compilers! Oh My!"
 * https://www.youtube.com/watch?v=nXaxk27zwlk
 */
static void escape(void *p)
{
    /* opaque asm statement: forces the compiler to consider *p used */
    asm volatile("" : : "g"(p) : "memory");
}

bool no_tracking_do_busy_loop()
{
    int res{0};
    escape(&res);
    return res;
}

bool dynamic_tracking_do_busy_loop()
{
    int res{1};
    escape(&res);
    return res;
}

bool static_tracking_do_busy_loop()
{
    int res{2};
    escape(&res);
    return res;
}

class io_napi_tracking_ops
{
public:
    virtual bool do_busy_loop() noexcept = 0;
};

class static_tracking_ops : public io_napi_tracking_ops
{
public:
    bool do_busy_loop() noexcept override;
};

bool static_tracking_ops::do_busy_loop() noexcept
{
    return static_tracking_do_busy_loop();
}

bool testVirtual(io_napi_tracking_ops *ptr)
{
    return ptr->do_busy_loop();
}

bool testIfElseDispatch(int i)
{
    if (i == 0)
        return no_tracking_do_busy_loop();
    else if (i == 1)
        return dynamic_tracking_do_busy_loop();
    else
        return static_tracking_do_busy_loop();
}

void BM_test_virtual(benchmark::State &state)
{
    static_tracking_ops vObj;
    /* volatile pointer keeps the compiler from devirtualizing the call */
    volatile io_napi_tracking_ops *ptr = &vObj;
    for (auto _ : state) {
        benchmark::DoNotOptimize(testVirtual(
            const_cast<io_napi_tracking_ops *>(ptr)));
    }
}

void BM_test_ifElse(benchmark::State &state)
{
    /* volatile index keeps the compiler from folding the branches */
    volatile int i = 2;
    for (auto _ : state) {
        benchmark::DoNotOptimize(testIfElseDispatch(i));
    }
}

BENCHMARK(BM_test_virtual);
BENCHMARK(BM_test_ifElse);

BENCHMARK_MAIN();