From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Subject: Re: [RFC] io_uring: optimise requests referencing ctx
Date: Sat, 2 Oct 2021 07:09:31 -0600
Message-ID: <[email protected]>
In-Reply-To: <69934153393c9af5f44c5312b89d4beb9ce0b591.1633176671.git.asml.silence@gmail.com>

On 10/2/21 6:15 AM, Pavel Begunkov wrote:
> Currently, we allocate one ctx reference per request at submission time
> and put it when the request is freed. It's batched and not that
> expensive, but it still bloats the kernel, adds two function calls for
> RCU, and adds some overhead for request counting in io_free_batch_list().
> 
> Always keep one reference with a request, even when it's freed and
> sitting in the io_uring request caches. There is extra work at the ring
> exit / quiesce paths, which now need to put the references held by all
> cached requests. io_ring_exit_work() is already looping, so that's not a
> problem. Add hybrid busy-waiting to io_ctx_quiesce() as well for now.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
> 
> I want to get rid of the extra request ctx referencing, but across
> different kernel versions I have been getting "interesting" results,
> losing performance in the nops test. Thus, this is only an RFC to see
> whether I'm the only one seeing weird effects.

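To make the idea a bit more concrete, here is a rough sketch of the
refcounting pattern being described, using simplified, made-up names
(sketch_ctx, sketch_req, sketch_req_cachep and the helpers below); this is
not the actual io_uring code, just the shape of it: a request returned to
the cache keeps holding its ctx reference, so the alloc/free fast paths
never touch ctx->refs, and the deferred references are only dropped when
the caches are flushed at ring exit or quiesce.

#include <linux/list.h>
#include <linux/percpu-refcount.h>
#include <linux/slab.h>

/* Illustrative sketch only -- simplified names, not the actual patch. */
struct sketch_req {
	struct list_head	cache_node;
	/* ... per-request state ... */
};

struct sketch_ctx {
	struct percpu_ref	refs;
	struct list_head	req_cache;
	unsigned int		nr_cached;
};

extern struct kmem_cache *sketch_req_cachep;

/* Submission fast path: a cached request already owns a ctx reference. */
static struct sketch_req *sketch_req_alloc(struct sketch_ctx *ctx)
{
	struct sketch_req *req;

	if (!list_empty(&ctx->req_cache)) {
		req = list_first_entry(&ctx->req_cache, struct sketch_req,
				       cache_node);
		list_del(&req->cache_node);
		ctx->nr_cached--;
		return req;			/* no percpu_ref_get() here */
	}

	req = kmem_cache_alloc(sketch_req_cachep, GFP_KERNEL);
	if (req)
		percpu_ref_get(&ctx->refs);	/* taken once, kept for life */
	return req;
}

/* Completion fast path: keep the ctx reference with the cached request. */
static void sketch_req_free(struct sketch_ctx *ctx, struct sketch_req *req)
{
	list_add(&req->cache_node, &ctx->req_cache);
	ctx->nr_cached++;
}

/* Ring exit / quiesce: the deferred references are finally dropped here. */
static void sketch_req_cache_flush(struct sketch_ctx *ctx)
{
	while (!list_empty(&ctx->req_cache)) {
		struct sketch_req *req;

		req = list_first_entry(&ctx->req_cache, struct sketch_req,
				       cache_node);
		list_del(&req->cache_node);
		kmem_cache_free(sketch_req_cachep, req);
	}
	if (ctx->nr_cached) {
		percpu_ref_put_many(&ctx->refs, ctx->nr_cached);
		ctx->nr_cached = 0;
	}
}

The trade-off is exactly the one mentioned above: the fast path gets
cheaper, while exit/quiesce has to walk the caches and put the accumulated
references, which io_ring_exit_work() can absorb since it already loops.
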
I ran this through the usual peak per-core testing:

Setup 1: 3970X, this one ends up being core limited
Setup 2: 5950X, this one ends up being device limited

Peak-1-thread is:

taskset -c 16 t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -B1 -n1 /dev/nvme1n1

Peak-2-threads is:

taskset -c 0,16 t/io_uring -b512 -d128 -s32 -c32 -p1 -F1 -B1 -n2 /dev/nvme2n1 /dev/nvme1n1

where 0/16 are thread siblings.

NOPS is:

taskset -c 16 t/io_uring -b512 -d128 -s32 -c32 -N1

Results are in IOPS, and peak-2-threads is only run on the faster box.

Setup/Test   |  Peak-1-thread   Peak-2-threads   NOPS   Diff
------------------------------------------------------------------
Setup 1 pre  |      3.81M            N/A         47.0M
Setup 1 post |      3.84M            N/A         47.6M  +0.8-1.2%
Setup 2 pre  |      5.11M            5.70M       70.3M
Setup 2 post |      5.17M            5.75M       73.1M  +1.2-4.0%

Looks like a nice win to me, on both setups.

-- 
Jens Axboe

