From: Pavel Begunkov <[email protected]>
To: Victor Stewart <[email protected]>
Cc: io-uring <[email protected]>
Subject: Re: io_uring-only sendmsg + recvmsg zerocopy
Date: Wed, 11 Nov 2020 18:50:50 +0000
Message-ID: <[email protected]>
In-Reply-To: <CAM1kxwhdCoH7ZAmnaaDTohg3TUSWL264juamO1or_3m-JFnRyg@mail.gmail.com>
On 11/11/2020 16:49, Victor Stewart wrote:
> On Wed, Nov 11, 2020 at 1:00 AM Pavel Begunkov <[email protected]> wrote:
>> On 11/11/2020 00:07, Victor Stewart wrote:
>>> On Tue, Nov 10, 2020 at 11:26 PM Pavel Begunkov <[email protected]> wrote:
>>>>> NIC ACKs, instead of finding the socket's error queue and putting the
>>>>> completion there like MSG_ZEROCOPY, the kernel would find the io_uring
>>>>> instance the socket is registered to and call into an io_uring
>>>>> sendmsg_zerocopy_completion function. Then the cqe would get pushed
>>>>> onto the completion queue.
>>>>>
>>>>> the "recvmsg zerocopy" is straight forward enough. mimicking
>>>>> TCP_ZEROCOPY_RECEIVE, i'll go into specifics next time.
>>>>
>>>> Receive side is inherently messed up. IIRC, TCP_ZEROCOPY_RECEIVE just
>>>> maps skbuffs into userspace, and in general unless there is a better
>>>> suited protocol (e.g. infiniband with richer src/dst tagging) or a very
>>>> very smart NIC, "true zerocopy" is not possible without breaking
>>>> multiplexing.
>>>>
>>>> For registered buffers you still need to copy skbuff, at least because
>>>> of security implications.
>>>
>>> we can actually just force those buffers to be mmap-ed, and then when
>>> packets arrive use vm_insert_pin or remap_pfn_range to change the
>>> physical pages backing the virtual memory pages submitted for reading
>>> via msg_iov. so it's transparent to userspace but still zerocopy.
>>> (might require the user to notify io_uring when reading is
>>> completed... but no matter).
>>
>> Yes, with io_uring zerocopy-recv may be done better than
>> TCP_ZEROCOPY_RECEIVE but
>> 1) it's still a remap. Yes, zerocopy, but not ideal
>> 2) won't work with registered buffers, which is basically a set
>> of pinned pages that have a userspace mapping. After such remap
>> that mapping wouldn't be in sync and that gets messy.
>
> well unless we can eliminate all copies, there isn’t any point,
> because then it isn’t zerocopy.
>
> so in my server, i have a ceiling on the number of clients,
> preallocate them, and mmap anonymous noreserve read + write buffers
> for each.
>
> so say, 150,000 clients x (2MB * 2), which is ~585GB. way more than the
> physical memory of my machine. (and I have 10 instances of it per
> machine, so ~6TB lol). but at any one time probably 0.01% of that
> memory is in use. and i just MADV_COLD the pages after consumption.
>
> this provides a persistent “vmem contiguous” stream buffer per client,
> which has a litany of benefits. but if we persistently pin pages, this
> ceases to work, because pinned pages require persistent physical
> memory backing them.
>
> But on the send side, if you don’t pin persistently, you’d have to pin
> on demand, which costs more than it’s worth for sends less than ~10KB.
having it non-contiguous and doing round-robin IMHO would be a better shot
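Roughly what I have in mind (a userspace sketch with liburing; the pool
sizes are made up, error handling is skipped, and io_uring_prep_send()
is only a stand-in for the proposed zerocopy send opcode, which would
reference the registered buffer by index):

/* sketch: fixed pool of registered buffers cycled round-robin */
#include <liburing.h>
#include <stdlib.h>

#define NR_BUFS  64             /* made-up pool size */
#define BUF_SIZE (64 * 1024)    /* made-up buffer size */

struct buf_pool {
	struct iovec iov[NR_BUFS];
	unsigned next;          /* round-robin cursor */
};

static int pool_init(struct io_uring *ring, struct buf_pool *p)
{
	for (int i = 0; i < NR_BUFS; i++) {
		p->iov[i].iov_base = malloc(BUF_SIZE);
		p->iov[i].iov_len = BUF_SIZE;
	}
	p->next = 0;
	/* pins all the pages once, up front */
	return io_uring_register_buffers(ring, p->iov, NR_BUFS);
}

/* the app writes its payload directly into the returned buffer;
 * assumes no more than NR_BUFS sends are ever in flight at once */
static void *pool_next(struct buf_pool *p, unsigned *idx)
{
	*idx = p->next++ % NR_BUFS;
	return p->iov[*idx].iov_base;
}

static int pool_send(struct io_uring *ring, struct buf_pool *p,
		     int sockfd, unsigned idx, size_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;
	/* stand-in: a zerocopy send opcode would take the registered
	 * buffer index instead of copying from the pointer */
	io_uring_prep_send(sqe, sockfd, p->iov[idx].iov_base, len, 0);
	sqe->user_data = idx;   /* so the CQE tells which buffer to recycle */
	return io_uring_submit(ring);
}

The buffers don't need to be physically or virtually contiguous, and
they stay pinned for the lifetime of the ring, so there is no per-send
pinning cost.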
> And I guess there’s no way to avoid pinning and maintain kernel
> integrity. Maybe we could erase those userspace -> physical page
> mappings, then recreate them once the operation completes, but 1) that
> would require page-aligned sends so that you could keep writing and
> sending while you waited for completions and 2) beyond being
> nonstandard and possibly unsafe, who says that would even cost less
> than pinning; it definitely costs something. It might cost more
> because you’d have to take locks on the page table?
>
> So essentially on the send side the only way to zerocopy for free is
> to persistently pin (and give up my per client stream buffers).
>
> On the receive side actually the only way to realistically do zerocopy
> is to somehow pin a NIC RX queue to a process, and then persistently
> map the queue into the process’s memory as read only. That’s a
> security absurdity in the general case, but it could be root-only
> usage. Then you’d recvmsg with a NULL msg_iov[0].iov_base, and have
> the packet buffer location and length written in. Might require driver
> buy-in, so might be impractical, but unsure.
https://blogs.oracle.com/linux/zero-copy-networking-in-uek6
scroll to AF_XDP
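The gist of AF_XDP, roughly (a bare-bones sketch: error handling, the
ring mmap()s and the XDP program that steers packets into the socket
are all omitted, and the sizes are made up):

/* sketch: one pinned UMEM region, bound to one HW RX queue */
#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <stdint.h>
#include <string.h>

#define FRAME_SIZE 4096
#define NR_FRAMES  4096                      /* made up */
#define UMEM_SIZE  (FRAME_SIZE * NR_FRAMES)

static int xsk_setup(const char *ifname, int queue_id)
{
	int fd = socket(AF_XDP, SOCK_RAW, 0);
	void *umem = mmap(NULL, UMEM_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	struct xdp_umem_reg reg;
	struct sockaddr_xdp sxdp;
	int ring_sz = 2048;

	/* the kernel pins this region; RX descriptors are just offsets
	 * into it, so packets land in memory the process already owns */
	memset(&reg, 0, sizeof(reg));
	reg.addr = (uintptr_t)umem;
	reg.len = UMEM_SIZE;
	reg.chunk_size = FRAME_SIZE;
	reg.headroom = 0;
	setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg));

	setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &ring_sz, sizeof(ring_sz));
	setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &ring_sz, sizeof(ring_sz));
	setsockopt(fd, SOL_XDP, XDP_RX_RING, &ring_sz, sizeof(ring_sz));

	/* tie the socket to a single hardware RX queue of the NIC */
	memset(&sxdp, 0, sizeof(sxdp));
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex(ifname);
	sxdp.sxdp_queue_id = queue_id;
	bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
	return fd;
}

But it's per-queue and bypasses the kernel protocol stack entirely, so
the application has to do its own demultiplexing, which is exactly the
multiplexing problem mentioned earlier.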
>
> Otherwise the only option is the even worse nightmare of how
> TCP_ZEROCOPY_RECEIVE works, which is ridiculously impractical for
> general purpose use…
Well, that's not so bad, an API with io_uring might be much better, but
it would still require an unmap. However, depending on the use case,
the overhead for small packets and/or an mm shared between many threads
can potentially be a deal breaker.
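For reference, the current sockets flow is roughly the following:
mmap() the socket fd, let getsockopt() map the received pages into that
area, and tear the mapping down after every chunk, with a copy fallback
for whatever isn't page-aligned (rough sketch, error handling omitted):

/* sketch of the existing TCP_ZEROCOPY_RECEIVE flow */
#include <linux/tcp.h>
#include <netinet/in.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <stdint.h>
#include <string.h>

#define CHUNK (512 * 1024)      /* made-up chunk size, page aligned */

/* addr comes from mmap(NULL, CHUNK, PROT_READ, MAP_SHARED, fd, 0) */
static ssize_t zc_recv(int fd, void *addr)
{
	struct tcp_zerocopy_receive zc;
	socklen_t zc_len = sizeof(zc);
	char copybuf[4096];

	memset(&zc, 0, sizeof(zc));
	zc.address = (uintptr_t)addr;
	zc.length = CHUNK;
	/* maps the received payload pages into [addr, addr + zc.length) */
	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
		return -1;

	/* ... consume zc.length bytes of payload at addr ... */

	/* whatever couldn't be mapped (headers, partial pages) still
	 * has to be copied out the usual way */
	if (zc.recv_skip_hint)
		recv(fd, copybuf, sizeof(copybuf), MSG_DONTWAIT);

	/* and the mapped pages have to be dropped again per chunk */
	madvise(addr, zc.length, MADV_DONTNEED);
	return zc.length;
}

That's a syscall plus page-table churn per chunk, which is where the
small-packet overhead comes from.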
> “Mapping of memory into a process's address space is done on a
> per-page granularity; there is no way to map a fraction of a page. So
> inbound network data must be both page-aligned and page-sized when it
> ends up in the receive buffer, or it will not be possible to map it
> into user space. Alignment can be a bit tricky because the packets
> coming out of the interface start with the protocol headers, not the
> data the receiving process is interested in. It is the data that must
> be aligned, not the headers. Achieving this alignment is possible, but
> it requires cooperation from the network interface
in other words, the NIC should support scatter-gather (header splitting)
>
> It is also necessary to ensure that the data arrives in chunks that
> are a multiple of the system's page size, or partial pages of data
> will result. That can be done by setting the maximum transfer unit
> (MTU) size properly on the interface. That, in turn, can require
> knowledge of exactly what the incoming packets will look like; in a
> test program posted with the patch set, Dumazet sets the MTU to
> 61,512. That turns out to be space for fifteen 4096-byte pages of
> data, plus 40 bytes for the IPv6 header and 32 bytes for the TCP
> header.”
>
> https://lwn.net/Articles/752188/
>
> Either receive case also makes my persistent per-client stream buffer
> zerocopy impossible lol.
it depends
>
> in short, zerocopy sendmsg with persistently pinned buffers is
> definitely possible and we should do that. (I'll just make it work on
> my end).
>
> for recvmsg I'll have to do more research into the practicality of
> what I proposed above.
Basically there are two options for zerocopy receive:

1. The NIC is smart enough and can locate the end (userspace) buffer
and DMA there directly. That requires parsing TCP/UDP headers, etc.,
or having a more versatile API like infiniband, plus extra NIC
features.
2. Map skbuffs into userspace as TCP_ZEROCOPY_RECEIVE does.
--
Pavel Begunkov