public inbox for [email protected]
From: Pavel Begunkov <[email protected]>
To: Dmitry Sychov <[email protected]>,
	Sergiy Yevtushenko <[email protected]>
Cc: Mark Papadakis <[email protected]>,
	"H. de Vries" <[email protected]>,
	io-uring <[email protected]>
Subject: Re: Any performance gains from using per thread(thread local) urings?
Date: Wed, 13 May 2020 19:02:15 +0300	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <CADPKF+dR=uQx9Dnu83ADghgei4KxwqnfBwONvp-ou--aePq0xg@mail.gmail.com>

On 13/05/2020 17:22, Dmitry Sychov wrote:
> Anyone could shed some light on the inner implementation of uring please? :)

It really depends on the workload, hardware, etc.

io_uring instances are intended to be independent, and each have one CQ and SQ.
The user's main concern should be synchronisation (in userspace) of the CQ+SQ. E.g.
100+ cores hammering on a spinlock/mutex protecting an SQ wouldn't do any good.
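
A rough sketch of what that shared-SQ locking tends to look like with liburing
(untested; shared_ring, sq_lock and submit_read() are made-up names, just for
illustration):

#include <errno.h>
#include <pthread.h>
#include <liburing.h>

static struct io_uring shared_ring;
static pthread_mutex_t sq_lock = PTHREAD_MUTEX_INITIALIZER;

/* every submitting thread funnels through this, so they all serialise here */
static int submit_read(int fd, void *buf, unsigned len, __u64 off)
{
	struct io_uring_sqe *sqe;
	int ret;

	pthread_mutex_lock(&sq_lock);
	sqe = io_uring_get_sqe(&shared_ring);
	if (!sqe) {
		pthread_mutex_unlock(&sq_lock);
		return -EBUSY;	/* SQ full, caller backs off and retries */
	}
	io_uring_prep_read(sqe, fd, buf, len, off);
	ret = io_uring_submit(&shared_ring);	/* syscall also under the lock */
	pthread_mutex_unlock(&sq_lock);
	return ret;
}

With a handful of threads that's tolerable; with 100+ cores the lock itself
becomes the hot spot.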

Everything that can't be completed/submitted inline during io_uring_enter() will
be offloaded to an internal thread pool (aka io-wq), which is per-io_uring by
default, but can be shared if specified. There are pros and cons, but I'd
recommend sharing a single io-wq first, and then experimenting and tuning.
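
For reference, a minimal (untested) sketch of attaching a second ring to the
first ring's io-wq with liburing, so both share one backend thread pool; the
queue depths are arbitrary:

#include <string.h>
#include <liburing.h>

static struct io_uring first, second;

static int setup_rings(void)
{
	struct io_uring_params p;
	int ret;

	ret = io_uring_queue_init(256, &first, 0);	/* owns the io-wq */
	if (ret < 0)
		return ret;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_ATTACH_WQ;	/* reuse an existing io-wq... */
	p.wq_fd = first.ring_fd;		/* ...identified by the first ring's fd */
	return io_uring_queue_init_params(256, &second, &p);
}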

Also, in-kernel submission is not instantaneous and is done by only one thread at
any moment. A single io_uring may bottleneck you there or add high latency in some cases.

And there are a lot of details, probably worth a separate write-up.

> 
> Specifically how well kernel scales with the increased number of user
> created urings?

Should scale well, especially for rw. Just don't overwhelm the kernel with
threads from dozens of io-wqs.
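
If it helps, the per-thread layout discussed in this thread is roughly the
following (untested sketch; worker_loop() and handle_cqe() are placeholders,
and each ring could additionally attach to a shared io-wq as above):

#include <liburing.h>

static __thread struct io_uring thread_ring;	/* one ring per worker thread */

static void *worker_loop(void *arg)
{
	struct io_uring_cqe *cqe;

	io_uring_queue_init(256, &thread_ring, 0);
	for (;;) {
		/* prepare this thread's SQEs, then io_uring_submit(&thread_ring) */
		if (io_uring_wait_cqe(&thread_ring, &cqe) != 0)
			break;			/* interrupted or torn down */
		/* handle_cqe(cqe); -- application specific */
		io_uring_cqe_seen(&thread_ring, cqe);
	}
	io_uring_queue_exit(&thread_ring);
	return arg;
}

No userspace locking around the SQ/CQ is needed, since each thread only touches
its own ring.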

> 
>> If kernel implementation will change from single to multiple queues,
>> user space is already prepared for this change.
> 
> That's +1 for per-thread urings, and an expectation that the kernel will
> become better and better at scaling multiple urings in the future.
> 
> On Wed, May 13, 2020 at 4:52 PM Sergiy Yevtushenko
> <[email protected]> wrote:
>>
>> Completely agree. Sharing state should be avoided as much as possible.
>> Returning to the original question: I believe that the uring-per-thread scheme is better regardless of how the queue is managed inside the kernel.
>> - If there is only one queue inside the kernel, then it's more efficient to perform multiplexing/demultiplexing of requests in kernel space
>> - If there are several queues inside the kernel, then user space code better matches kernel-space code.
>> - If kernel implementation will change from single to multiple queues, user space is already prepared for this change.
>>
>>
>> On Wed, May 13, 2020 at 3:30 PM Mark Papadakis <[email protected]> wrote:
>>>
>>>
>>>
>>>> On 13 May 2020, at 4:15 PM, Dmitry Sychov <[email protected]> wrote:
>>>>
>>>> Hey Mark,
>>>>
>>>> Or we could share one SQ and one CQ between multiple threads (bound by
>>>> the max number of CPU cores) for direct read/write access, using a very
>>>> light mutex to sync.
>>>>
>>>> This also solves the thread starvation issue - thread A submits the job
>>>> into the shared SQ while thread B both collects and _processes_ the result
>>>> from the shared CQ instead of waiting on its own unique CQ for the next
>>>> completion event.
>>>>
>>>
>>>
>>> Well, if the SQE submitted by A has its matching CQE consumed by B, and A needs access to that CQE because it is tightly coupled to state A owns exclusively (for example), or for other reasons, then you’d still need to move that CQE from B back to A, or share it somehow, which seems expensive-ish.
>>>
>>> It depends on what kind of roles your threads have though; I am personally very much against sharing state between threads unless there's a really good reason for it.
>>>
>>>
>>>
>>>
>>>
>>>
>>>> On Wed, May 13, 2020 at 2:56 PM Mark Papadakis
>>>> <[email protected]> wrote:
>>>>>
>>>>> For what it’s worth, I am (also) using multiple “reactor” (i.e. event-driven) cores, each associated with one OS thread, and each reactor core manages its own io_uring context/queues.
>>>>>
>>>>> Even if scheduling all SQEs through a single io_uring SQ — by e.g. collecting all such SQEs in every OS thread and then somehow “moving” them to the one OS thread that manages the SQ so that it can enqueue them all -- is very cheap, you'd still need to drain the CQ from that thread and presumably process those CQEs in a single OS thread, which will definitely be more work than having each reactor/OS thread dequeue CQEs for SQEs that it itself submitted.
>>>>> You could have a single OS thread just for I/O, and all other threads could do something else, but you’d presumably need to serialize access/share state between them and the one OS thread for I/O, which may be a scalability bottleneck.
>>>>>
>>>>> ( if you are curious, you can read about it here https://medium.com/@markpapadakis/building-high-performance-services-in-2020-e2dea272f6f6 )
>>>>>
>>>>> If you experiment with the various possible designs though, I’d love it if you were to share your findings.
>>>>>
>>>>> —
>>>>> @markpapadakis
>>>>>
>>>>>
>>>>>> On 13 May 2020, at 2:01 PM, Dmitry Sychov <[email protected]> wrote:
>>>>>>
>>>>>> Hi Hielke,
>>>>>>
>>>>>>> If you want max performance, what you generally will see in non-blocking servers is one event loop per core/thread.
>>>>>>> This means one ring per core/thread. Of course there is no simple answer to this.
>>>>>>> See how thread-based servers work vs non-blocking servers. E.g. Apache vs Nginx or Tomcat vs Netty.
>>>>>>
>>>>>> I think a lot depends on the internal uring implementation. To what
>>>>>> degree the kernel is able to handle multiple urings independently,
>>>>>> without many congestion points (like updates of the same memory
>>>>>> locations from multiple threads), thus taking advantage of one ring
>>>>>> per CPU core.
>>>>>>
>>>>>> For example, if the tasks from multiple rings are later combined into a
>>>>>> single input kernel queue (effectively forming a congestion point), I see
>>>>>> no reason to use an exclusive ring per core in user space.
>>>>>>
>>>>>> [BTW, in Windows, IOCP always has one input+output queue for all (active) threads].
>>>>>>
>>>>>> Also, we could pop multiple completion events from a single CQ at
>>>>>> once to spread the handling across core-bound threads.
>>>>>>
>>>>>> I thought about one uring per core at first, but now I'm not sure -
>>>>>> maybe the kernel devs have something to add to the discussion?
>>>>>>
>>>>>> P.S. uring is the main reason I'm switching from Windows to Linux dev
>>>>>> for a client-server app, so I want to extract the max performance possible
>>>>>> out of this new exciting uring stuff. :)
>>>>>>
>>>>>> Thanks, Dmitry
>>>>>
>>>

-- 
Pavel Begunkov

  parent reply	other threads:[~2020-05-13 16:03 UTC|newest]

Thread overview: 14+ messages
2020-05-12 20:20 Any performance gains from using per thread(thread local) urings? Dmitry Sychov
2020-05-13  6:07 ` H. de Vries
2020-05-13 11:01   ` Dmitry Sychov
2020-05-13 11:56     ` Mark Papadakis
2020-05-13 13:15       ` Dmitry Sychov
2020-05-13 13:27         ` Mark Papadakis
2020-05-13 13:48           ` Dmitry Sychov
2020-05-13 14:12           ` Sergiy Yevtushenko
     [not found]           ` <CAO5MNut+nD-OqsKgae=eibWYuPim1f8-NuwqVpD87eZQnrwscA@mail.gmail.com>
2020-05-13 14:22             ` Dmitry Sychov
2020-05-13 14:31               ` Dmitry Sychov
2020-05-13 16:02               ` Pavel Begunkov [this message]
2020-05-13 19:23                 ` Dmitry Sychov
2020-05-14 10:06                   ` Pavel Begunkov
2020-05-14 11:35                     ` Dmitry Sychov
