public inbox for [email protected]
From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Subject: Re: [PATCHSET 0/3] Improve MSG_RING SINGLE_ISSUER performance
Date: Tue, 28 May 2024 20:42:08 -0600	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 5/28/24 8:08 PM, Pavel Begunkov wrote:
> On 5/29/24 02:35, Jens Axboe wrote:
>> On 5/28/24 5:04 PM, Jens Axboe wrote:
>>> On 5/28/24 12:31 PM, Jens Axboe wrote:
>>>> I suspect a bug in the previous patches, because this is what the
>>>> forward port looks like. First, for reference, the current results:
>>>
>>> Got it sorted, and pinned sender and receiver on CPUs to avoid the
>>> variation. It looks like this with the task_work approach that I sent
>>> out as v1:
>>>
>>> Latencies for: Sender
>>>      percentiles (nsec):
>>>       |  1.0000th=[ 2160],  5.0000th=[ 2672], 10.0000th=[ 2768],
>>>       | 20.0000th=[ 3568], 30.0000th=[ 3568], 40.0000th=[ 3600],
>>>       | 50.0000th=[ 3600], 60.0000th=[ 3600], 70.0000th=[ 3632],
>>>       | 80.0000th=[ 3632], 90.0000th=[ 3664], 95.0000th=[ 3696],
>>>       | 99.0000th=[ 4832], 99.5000th=[15168], 99.9000th=[16192],
>>>       | 99.9500th=[16320], 99.9900th=[18304]
>>> Latencies for: Receiver
>>>      percentiles (nsec):
>>>       |  1.0000th=[ 1528],  5.0000th=[ 1576], 10.0000th=[ 1656],
>>>       | 20.0000th=[ 2040], 30.0000th=[ 2064], 40.0000th=[ 2064],
>>>       | 50.0000th=[ 2064], 60.0000th=[ 2064], 70.0000th=[ 2096],
>>>       | 80.0000th=[ 2096], 90.0000th=[ 2128], 95.0000th=[ 2160],
>>>       | 99.0000th=[ 3472], 99.5000th=[14784], 99.9000th=[15168],
>>>       | 99.9500th=[15424], 99.9900th=[17280]
>>>
>>> and here's the exact same test run on the current patches:
>>>
>>> Latencies for: Sender
>>>      percentiles (nsec):
>>>       |  1.0000th=[  362],  5.0000th=[  362], 10.0000th=[  370],
>>>       | 20.0000th=[  370], 30.0000th=[  370], 40.0000th=[  370],
>>>       | 50.0000th=[  374], 60.0000th=[  382], 70.0000th=[  382],
>>>       | 80.0000th=[  382], 90.0000th=[  382], 95.0000th=[  390],
>>>       | 99.0000th=[  402], 99.5000th=[  430], 99.9000th=[  900],
>>>       | 99.9500th=[  972], 99.9900th=[ 1432]
>>> Latencies for: Receiver
>>>      percentiles (nsec):
>>>       |  1.0000th=[ 1528],  5.0000th=[ 1544], 10.0000th=[ 1560],
>>>       | 20.0000th=[ 1576], 30.0000th=[ 1592], 40.0000th=[ 1592],
>>>       | 50.0000th=[ 1592], 60.0000th=[ 1608], 70.0000th=[ 1608],
>>>       | 80.0000th=[ 1640], 90.0000th=[ 1672], 95.0000th=[ 1688],
>>>       | 99.0000th=[ 1848], 99.5000th=[ 2128], 99.9000th=[14272],
>>>       | 99.9500th=[14784], 99.9900th=[73216]
>>>
>>> I'll try and augment the test app to do proper rated submissions, so I
>>> can ramp up the rates a bit and see what happens.
>>
>> And the final one, with the rated sends sorted out. One key observation
>> is that v1 trails the current edition, it just can't keep up as the rate
>> is increased. If we cap the rate at what should be 33K messages per
>> second, v1 gets ~28K messages and has the following latency profile (for
>> a 3 second run)
> 
> Do you see where the receiver latency comes from? The wakeups are
> quite similar in nature, assuming it's all wait(nr=1) and CPUs
> are not 100% consumed. The hop back spoils scheduling timing?

I haven't dug into that side yet, but I'm guessing it's indeed
scheduling artifacts. It's all single waits: the sender is doing
io_uring_submit_and_wait(ring, 1) and the receiver is doing
io_uring_wait_cqe(ring, &cqe).

-- 
Jens Axboe



Thread overview: 23+ messages
2024-05-24 22:58 [PATCHSET 0/3] Improve MSG_RING SINGLE_ISSUER performance Jens Axboe
2024-05-24 22:58 ` [PATCH 1/3] io_uring/msg_ring: split fd installing into a helper Jens Axboe
2024-05-24 22:58 ` [PATCH 2/3] io_uring/msg_ring: avoid double indirection task_work for data messages Jens Axboe
2024-05-28 13:18   ` Pavel Begunkov
2024-05-28 14:23     ` Jens Axboe
2024-05-28 13:32   ` Pavel Begunkov
2024-05-28 14:23     ` Jens Axboe
2024-05-28 16:23       ` Pavel Begunkov
2024-05-28 17:59         ` Jens Axboe
2024-05-29  2:04           ` Pavel Begunkov
2024-05-29  2:43             ` Jens Axboe
2024-05-24 22:58 ` [PATCH 3/3] io_uring/msg_ring: avoid double indirection task_work for fd passing Jens Axboe
2024-05-28 13:31 ` [PATCHSET 0/3] Improve MSG_RING SINGLE_ISSUER performance Pavel Begunkov
2024-05-28 14:34   ` Jens Axboe
2024-05-28 14:39     ` Jens Axboe
2024-05-28 15:27     ` Jens Axboe
2024-05-28 16:50     ` Pavel Begunkov
2024-05-28 18:07       ` Jens Axboe
2024-05-28 18:31         ` Jens Axboe
2024-05-28 23:04           ` Jens Axboe
2024-05-29  1:35             ` Jens Axboe
2024-05-29  2:08               ` Pavel Begunkov
2024-05-29  2:42                 ` Jens Axboe [this message]
