From: Avi Kivity <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: Re: IORING_OP_POLL_ADD slower than linux-aio IOCB_CMD_POLL
Date: Tue, 19 Apr 2022 14:57:35 +0300
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>


On 19/04/2022 14.38, Jens Axboe wrote:
> On 4/19/22 5:07 AM, Avi Kivity wrote:
>> A simple webserver shows about 5% loss compared to linux-aio.
>>
>>
>> I expect the loss is due to an optimization that io_uring lacks -
>> inline completion vs workqueue completion:
> I don't think that's it, io_uring never punts to a workqueue for
> completions.


I measured this:



  Performance counter stats for 'system wide':

          1,273,756 io_uring:io_uring_task_add

       12.288597765 seconds time elapsed

Which exactly matches the number of requests sent. If that's the wrong
counter to measure, I'm happy to try again with the correct one.
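
For reference, a system-wide count of that tracepoint can be collected
with a perf stat invocation along these lines (illustrative only, not
the exact command from the run above):

  sudo perf stat -a -e io_uring:io_uring_task_add -- sleep 12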


> The aio inline completion is more of a hack because it needs to do
> that, as always using a workqueue would lead to bad performance and
> higher overhead.
>
> So if there's a difference in performance, it's something else and we
> need to look at that. But your report is pretty lacking! What kernel are
> you running?


5.17.2-300.fc36.x86_64


> Do you have a test case of sorts?


Seastar's httpd, running on a single core, against wrk -c 1000 -t 4 
http://localhost:10000/.


Instructions:

   git clone --recursive -b io_uring https://github.com/avikivity/seastar

   cd seastar

   sudo ./install-dependencies.sh  # after carefully verifying it, of course

   ./configure.py --mode release

   ninja -C build/release apps/httpd/httpd

   ./build/release/apps/httpd/httpd --smp 1 [--reactor-backend io_uring|linux-aio|epoll]


and run wrk against it.


> For a performance-oriented network setup, I'd normally not consider
> data readiness poll replacements to be that interesting; my
> recommendation would be to use async send/recv for that instead.
> That's how io_uring is supposed to be used, in a completion-based
> model.
>

That's true. Still, an existing system that evolved around poll will
take some time and effort to migrate, and a slower IORING_OP_POLL_ADD
means it cannot benefit from io_uring's many other advantages if it
fears a regression from that difference.
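
For context, the two readiness-poll submissions being compared boil
down to roughly the sketch below (illustrative only: liburing for the
io_uring side, a raw io_submit(2) for linux-aio; this is not Seastar's
backend code and the helper names are made up):

  #include <liburing.h>
  #include <linux/aio_abi.h>
  #include <poll.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* io_uring side: one-shot readiness poll via IORING_OP_POLL_ADD. */
  static void submit_poll_uring(struct io_uring *ring, int fd)
  {
      struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

      io_uring_prep_poll_add(sqe, fd, POLLIN);
      sqe->user_data = (unsigned long)fd;  /* identify the fd on completion */
      io_uring_submit(ring);
  }

  /* linux-aio side: IOCB_CMD_POLL; the poll mask is passed in aio_buf. */
  static void submit_poll_aio(aio_context_t ctx, int fd)
  {
      struct iocb cb;
      struct iocb *cbs[1] = { &cb };

      memset(&cb, 0, sizeof(cb));
      cb.aio_lio_opcode = IOCB_CMD_POLL;
      cb.aio_fildes = fd;
      cb.aio_buf = POLLIN;
      cb.aio_data = (unsigned long)fd;     /* identify the fd on completion */
      syscall(SYS_io_submit, ctx, 1, cbs);
  }

In both cases the request completes once the socket becomes readable
and the application then issues a regular recvmsg(), which is the
pattern the benchmark above exercises.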


Note that it's not just a matter of converting poll+recvmsg to
IORING_OP_RECVMSG. If you support many connections, you must also
migrate to internal buffer selection; otherwise the memory footprint
with a large number of idle connections is high. The end result is
wonderful, but the road there is long.
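
To give a rough idea of what that migration involves, a minimal
liburing sketch of the provided-buffers pattern could look like this
(GROUP_ID, BUF_SIZE and NR_BUFS are assumed names, error handling is
omitted, and plain IORING_OP_RECV is shown for brevity; the same idea
applies to IORING_OP_RECVMSG):

  #include <liburing.h>
  #include <stdlib.h>

  enum { GROUP_ID = 1, BUF_SIZE = 4096, NR_BUFS = 1024 };

  /* Hand a pool of buffers to the kernel under one buffer group. */
  static void provide_buffers(struct io_uring *ring)
  {
      char *pool = malloc((size_t)NR_BUFS * BUF_SIZE);
      struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

      io_uring_prep_provide_buffers(sqe, pool, BUF_SIZE, NR_BUFS, GROUP_ID, 0);
      io_uring_submit(ring);
  }

  /* Per-connection receive: no buffer is committed up front; the kernel
   * picks one from GROUP_ID only when data actually arrives. */
  static void submit_recv(struct io_uring *ring, int fd)
  {
      struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

      io_uring_prep_recv(sqe, fd, NULL, BUF_SIZE, 0);
      sqe->flags |= IOSQE_BUFFER_SELECT;
      sqe->buf_group = GROUP_ID;
      sqe->user_data = (unsigned long)fd;
      io_uring_submit(ring);
  }

  /* On completion, IORING_CQE_F_BUFFER is set in cqe->flags and the
   * chosen buffer id is cqe->flags >> IORING_CQE_BUFFER_SHIFT. */

The point is that per-connection memory drops from one pre-posted
buffer per idle connection to a shared pool sized for the connections
that are actually active at any moment.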




Thread overview: 23+ messages
2022-04-19 11:07 IORING_OP_POLL_ADD slower than linux-aio IOCB_CMD_POLL Avi Kivity
2022-04-19 11:38 ` Jens Axboe
2022-04-19 11:57   ` Avi Kivity [this message]
2022-04-19 12:04     ` Jens Axboe
2022-04-19 12:21       ` Avi Kivity
2022-04-19 12:31         ` Jens Axboe
2022-04-19 15:21           ` Jens Axboe
2022-04-19 15:51             ` Avi Kivity
2022-04-19 17:14             ` Jens Axboe
2022-04-19 19:41               ` Avi Kivity
2022-04-19 19:58                 ` Jens Axboe
2022-04-20 11:55                   ` Avi Kivity
2022-04-20 12:09                     ` Jens Axboe
2022-04-21  9:05                       ` Avi Kivity
2022-06-15 10:12               ` Avi Kivity
2022-06-15 10:48                 ` Pavel Begunkov
2022-06-15 11:04                   ` Avi Kivity
2022-06-15 11:07                     ` Avi Kivity
2022-06-15 11:38                       ` Pavel Begunkov
2022-06-15 12:21                         ` Jens Axboe
2022-06-15 13:43                           ` Avi Kivity
2022-06-15 11:30                     ` Pavel Begunkov
2022-06-15 11:36                       ` Avi Kivity
