public inbox for [email protected]
From: Hao Xu <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Cc: Pavel Begunkov <[email protected]>,
	Joseph Qi <[email protected]>
Subject: Re: [PATCH 0/3] decouple work_list protection from the big wqe->lock
Date: Tue, 15 Feb 2022 01:18:21 +0800	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>


On 2/12/22 00:55, Jens Axboe wrote:
> On 2/6/22 2:52 AM, Hao Xu wrote:
>> wqe->lock is overloaded: it currently protects acct->work_list, the hash
>> state, nr_workers, wqe->free_list and so on. Let's first get the work_list
>> out of the wqe->lock mess by introducing a specific lock for the work list.
>> This is the first step towards solving the huge contention between work
>> insertion and work consumption.
>> Benefits:
>>    - split locking for the bound and unbound work lists
>>    - reduced contention between work_list access and the (workers') free_list.
>>
>> For the hash state: since a work item for the same file will never sit in
>> both the bound and unbound work lists, the two lists never touch the same
>> hash entry, so the new lock works well for protecting the hash state too.
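
To make it easier to follow, the core of patch 1 is roughly the sketch
below (simplified; field names and layout may differ slightly from the
actual diff): each accounting class (bound/unbound) gets its own lock
for its work_list, so insertion and consumption of the two lists no
longer serialize on the single wqe->lock.

	struct io_wqe_acct {
		unsigned nr_workers;
		unsigned max_workers;
		raw_spinlock_t lock;		/* protects work_list only */
		struct io_wq_work_list work_list;
	};

	static void io_wqe_insert_work(struct io_wqe_acct *acct,
				       struct io_wq_work *work)
	{
		/* per-list lock instead of the big wqe->lock */
		raw_spin_lock(&acct->lock);
		wq_list_add_tail(&work->list, &acct->work_list);
		raw_spin_unlock(&acct->lock);
	}
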
>>
>> Results:
>> set max_unbound_worker = 4, test with echo-server:
>> nice -n -15 ./io_uring_echo_server -p 8081 -f -n 1000 -l 16
>> (-n connections, -l workload)
>> before this patch:
>> Samples: 2M of event 'cycles:ppp', Event count (approx.): 1239982111074
>> Overhead  Command          Shared Object         Symbol
>>    28.59%  iou-wrk-10021    [kernel.vmlinux]      [k] native_queued_spin_lock_slowpath
>>     8.89%  io_uring_echo_s  [kernel.vmlinux]      [k] native_queued_spin_lock_slowpath
>>     6.20%  iou-wrk-10021    [kernel.vmlinux]      [k] _raw_spin_lock
>>     2.45%  io_uring_echo_s  [kernel.vmlinux]      [k] io_prep_async_work
>>     2.36%  iou-wrk-10021    [kernel.vmlinux]      [k] _raw_spin_lock_irqsave
>>     2.29%  iou-wrk-10021    [kernel.vmlinux]      [k] io_worker_handle_work
>>     1.29%  io_uring_echo_s  [kernel.vmlinux]      [k] io_wqe_enqueue
>>     1.06%  iou-wrk-10021    [kernel.vmlinux]      [k] io_wqe_worker
>>     1.06%  io_uring_echo_s  [kernel.vmlinux]      [k] _raw_spin_lock
>>     1.03%  iou-wrk-10021    [kernel.vmlinux]      [k] __schedule
>>     0.99%  iou-wrk-10021    [kernel.vmlinux]      [k] tcp_sendmsg_locked
>>
>> with this patch:
>> Samples: 1M of event 'cycles:ppp', Event count (approx.): 708446691943
>> Overhead  Command          Shared Object         Symbol
>>    16.86%  iou-wrk-10893    [kernel.vmlinux]      [k] native_queued_spin_lock_slowpath
>>     9.10%  iou-wrk-10893    [kernel.vmlinux]      [k] _raw_spin_lock
>>     4.53%  io_uring_echo_s  [kernel.vmlinux]      [k] native_queued_spin_lock_slowpath
>>     2.87%  iou-wrk-10893    [kernel.vmlinux]      [k] io_worker_handle_work
>>     2.57%  iou-wrk-10893    [kernel.vmlinux]      [k] _raw_spin_lock_irqsave
>>     2.56%  io_uring_echo_s  [kernel.vmlinux]      [k] io_prep_async_work
>>     1.82%  io_uring_echo_s  [kernel.vmlinux]      [k] _raw_spin_lock
>>     1.33%  iou-wrk-10893    [kernel.vmlinux]      [k] io_wqe_worker
>>     1.26%  io_uring_echo_s  [kernel.vmlinux]      [k] try_to_wake_up
>>
>> spin_lock slowpath overhead drops from 28.59% + 8.89% = 37.48% to 16.86% + 4.53% = 21.39%.
>> TPS is similar, while CPU usage drops from almost 400% to 350%.
> I think this looks like a good start to improving the io-wq locking. I
> didn't spot anything immediately wrong with the series; my only worry
> was worker->flags protection, but I _think_ that looks OK too in terms of
> the worker itself doing the manipulations.

Yes, while writing the code I went over it to make sure worker->flags is
only manipulated by the worker itself.
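
Roughly, the invariant I double-checked is the one below (illustrative
sketch only, not the actual io-wq code): worker->flags is only ever
touched from the worker's own task context, so it still needs no extra
locking after wqe->lock is split.

	static void io_worker_set_free(struct io_worker *worker)
	{
		/* owner-only access: only the worker task writes its flags */
		WARN_ON_ONCE(current != worker->task);
		worker->flags |= IO_WORKER_F_FREE;
	}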


Regards,

Hao

>
> Let's queue this up for 5.18 testing, thanks!
>


Thread overview: 7+ messages
2022-02-06  9:52 [PATCH 0/3] decouple work_list protection from the big wqe->lock Hao Xu
2022-02-06  9:52 ` [PATCH 1/3] io-wq: " Hao Xu
2022-02-06  9:52 ` [PATCH 2/3] io-wq: reduce acct->lock crossing functions lock/unlock Hao Xu
2022-02-06  9:52 ` [PATCH 3/3] io-wq: use IO_WQ_ACCT_NR rather than hardcoded number Hao Xu
2022-02-11 16:55 ` [PATCH 0/3] decouple work_list protection from the big wqe->lock Jens Axboe
2022-02-14 17:18   ` Hao Xu [this message]
2022-02-11 16:56 ` Jens Axboe
