From: Pavel Begunkov <[email protected]>
To: Olivier Langlois <[email protected]>,
	Jens Axboe <[email protected]>,
	[email protected], [email protected]
Subject: Re: [PATCH] io_uring: reduce latency by reissueing the operation
Date: Sun, 20 Jun 2021 21:55:46 +0100
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 6/18/21 11:45 PM, Olivier Langlois wrote:
> On Thu, 2021-06-17 at 19:10 +0100, Pavel Begunkov wrote:
>>>
>>> For the patch performance testing I did use the simplest config:
>>> Single thread, 1 TCP connection, no sqpoll.
>>
>> Queue depth (QD) 1, right?
> 
> Since my io_uring usage is wrapped into a library, and some parameters
> are fixed and tuned to handle the largest use-case scenario, QD was
> set to 256 for that test.
> 
> There are also a few accessory fds, such as 2 eventfds, that are
> polled to interrupt the framework event loop, but in practice they
> were silent during the whole testing period.
> 
> But no big batch submissions for sure. At most maybe 2 SQEs per
> submission:
> 
> 1 to provide the buffer back and the other to reinsert the read
> operation.

I see.
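
So each iteration is roughly this, I'd guess (a minimal liburing
sketch; ring/sockfd/buf are assumed set up elsewhere, and BGID, the
bid and the 20KB size are made-up placeholders):

    #include <liburing.h>

    /* SQE 1: hand the just-consumed buffer back to group BGID */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_provide_buffers(sqe, buf, 20480, 1, BGID, bid);

    /* SQE 2: re-arm the read; with IOSQE_BUFFER_SELECT the kernel
     * picks a buffer from group BGID, so no address is passed */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, sockfd, NULL, 20480, 0);
    sqe->flags |= IOSQE_BUFFER_SELECT;
    sqe->buf_group = BGID;

    io_uring_submit(&ring);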

[...]
>> 2) Do they return an equivalent number of bytes? And what is the
>> read/recv size (e.g. buffer size)?
> 
> Nothing escapes your eagle-eyed attention, Pavel...

It may all sound like over-scrutinising, but I'm just used to
analysing performance magic and seeing how the edges may be polished.
Not a requirement for the patch.

> I set my read size to an arbitrarily big value (20KB) just to make
> sure that, most of the time, I never end up with partial reads and
> perform more syscalls than I could get away with given a big enough
> buffer size.
> 
> TBH, I didn't pay that much attention to this detail. Off the top of
> my head, I would say that the average size is all over the place. It
> can go from 150 bytes up to 15KB, but I would think that the average
> must be between 1-2 MTUs (around 2500 bytes).
> 
> That being said, the average read size must be spread equally between
> the packets going through the regular path and those taking the new
> shortcut, so I believe that the conclusion should still hold despite
> not having considered this aspect in the test.
>>
>> Because in theory it can be that during the somewhat small delay for
>> punting to io-wq, more data has arrived, and so the async completion
>> pulls more data, which takes more time. In that case the time
>> difference should also account for the difference in the amount of
>> data that it reads.
> 
> Good point. It did not even occur to me to consider this aspect, but
> how many more packets would the network stack have had time to
> receive in an extra 16us period? (I am not on one of those crazy
> fiber-optic 200Gbps Mellanox cards...) 1, 2, 3, 4? We aren't talking
> multiple extra MBs to copy here...

Say 1-2. I need to check, but I think that while processing them and
copying to userspace, another one might arrive, so you end up with the
full 20KB instead of the 4KB that would have been copied inline. Plus
io-wq overhead, so 16us wouldn't be unreasonable.

But that's "what if" thinking.
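
As a back-of-the-envelope check (assuming an ordinary 1Gbps link,
which may well not match your setup): 1Gbps is ~125 bytes/us, so an
extra 16us buys the stack about 16 * 125 = 2000 bytes, i.e. 1-2
full-size Ethernet frames. On a 10Gbps link the same window is ~20KB,
a whole buffer's worth.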

>>
>> 3) Curious: why read and not recv, since you're working with sockets?
> 
> I learned network programming from the classic Stevens book. As far
> as I remember from what I learned in the book, the only benefit of
> recv() over read() is if you need to specify one of the funky flags
> that recv() allows you to provide; read() doesn't give access to that
> functionality.
> 
> If there is a performance benefit to using recv() over read() for TCP
> fds, that is something I am not aware of, and if you confirm that
> this is the case, it would be very easy for me to change my read
> calls to recv() ones...
> 
> Now that you ask the question, maybe read() is implemented with recv()

It all sinks into the common code rather early.

> but AFAIK the native network functions are sendmsg() and recvmsg(),
> so neither read() nor recv() would have an edge over the other in
> that department...

For the io_uring part, recv is slimmer than recvmsg, e.g. it doesn't
need to copy the extra msghdr from userspace.

Read can be more expensive on the io_uring side because it may
copy/alloc extra stuff, plus there's additional logic in io_read()
for the sake of generality.

But I wouldn't expect much of a difference; never tested it, though.
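
On the submission side the switch is a one-liner, for what it's worth
(liburing sketch; sqe/sockfd/buf/len assumed set up):

    /* read: routed through io_read() and the generic rw path */
    io_uring_prep_read(sqe, sockfd, buf, len, 0);

    /* recv: the slimmer socket-specific opcode; equivalent for a
     * connected TCP socket when flags == 0 */
    io_uring_prep_recv(sqe, sockfd, buf, len, 0);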

> While we are talking about read() vs recv(), I am curious about
> something too. While working on my other patch (store back the
> buffer in case of failure), I noticed that the buffer address and
> bid aren't stored in the same fields.
> 
> io_put_recv_kbuf() vs io_put_rw_kbuf()
> 
> I couldn't figure out why those values aren't stored in the same
> io_kiocb fields for recv operations...
> 
> Why is that?

Just because of how it was done. It may use some cleaning up, e.g. I
don't like rewriting req->rw.addr with a selected buffer.

In general, the first 64B (cacheline) of io_kiocb (i.e. request)
is taken by per-opcode data, and we try to fit everything
related to a particular opcode there and not spill into
generic parts of the struct.

Another general concern is keeping everything tight enough and
shuffled right, so that the hot path doesn't read extra cachelines.
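
Schematically, it looks like this (heavily trimmed; only the shape
matters, not the exact members):

    struct io_kiocb {
        union {
            struct file         *file;
            struct io_rw        rw;      /* read/write */
            struct io_poll_iocb poll;
            struct io_accept    accept;
            struct io_sr_msg    sr_msg;  /* send/recv[msg] */
            /* ... one entry per opcode family ... */
        };
        /* generic, opcode-independent state follows and should stay
         * off the per-opcode cacheline */
    };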

>>
>> 4) Did you do any userspace measurements? And a question to
>> everyone in general: do we have any good net benchmarking tool
>> that works with io_uring? Like netperf? Hopefully one spitting
>> out a latency distribution.
> 
> No, I haven't.

Given what was said, I'd expect ~the same mean and the elevated ~99th
percentile reduced by the patch, which is also great. Latency is
always the hardest part.
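
For a quick look, even something crude would do (a sketch; lat_ns is
assumed to be filled with per-completion deltas from clock_gettime()):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_u64(const void *a, const void *b)
    {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    /* sort the samples, then print mean / p50 / p99 / max */
    static void report(uint64_t *lat_ns, size_t n)
    {
        double mean = 0;

        qsort(lat_ns, n, sizeof(*lat_ns), cmp_u64);
        for (size_t i = 0; i < n; i++)
            mean += (double)lat_ns[i];
        printf("mean %.0f  p50 %" PRIu64 "  p99 %" PRIu64
               "  max %" PRIu64 " (ns)\n", mean / n,
               lat_ns[n / 2], lat_ns[n * 99 / 100], lat_ns[n - 1]);
    }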

>> Also, not particularly about the reissue stuff, but a note to myself:
>> 59us is a lot, so I wonder where the overhead comes from.
>> Definitely not the io-wq queueing (i.e. putting into a list).
>> - waking a worker?
>> - creating a new worker? Do we manage workers sanely? e.g.
>>   don't keep them constantly being recreated and dying back.
>> - scheduling a worker?
> 
> Creating a new worker is for sure not free, but I would remove that
> cause from the suspect list, as in my scenario it was a one-shot
> event.

Not sure what you mean, but speculating: io-wq may have a suboptimal
policy for recycling worker threads, leading to recreating/removing
them more often than needed. Depends on bugs, use cases and so on.

> The first measurement was not even significantly higher than all the
> other measurements.

You get a huge max for the io-wq case. Obviously nothing can be
concluded from the max alone. We'd need a latency distribution and
probably longer runs, but I'm still curious where it's coming from.
Just keeping an eye on it in general.

>>
>> Olivier, for how long did you run the test? >1 min?
> 
> Much more than 1 minute. I would say somewhere between 20 and 25
> minutes.
> 
> I wanted a big enough sample size for those 2.5% special-path events
> so that the conclusion could be statistically significant.

Great. Then if io-wq worker creation doesn't work right, it's because
of policies and so on.

-- 
Pavel Begunkov
