* Quick question about your io_uring zerocopy work
From: Paul Cercueil @ 2022-07-21  9:38 UTC
  To: Pavel Begunkov; +Cc: io-uring

Hi Pavel,

Good job on the io_uring zerocopy stuff; it looks really interesting!

I'm working on adding a new userspace/kernelspace buffer interface 
for the IIO subsystem. My first idea (from a few years ago already) 
was to add support for splice(), so that data could be sent from IIO 
hardware directly to a file or to the network.
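
Concretely, the plan was something along these lines (a rough sketch 
only; "iio_fd" and "sock_fd" are placeholder descriptors, since the 
IIO splice support never landed):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Forward one chunk: device fd -> pipe -> socket, moving pages
 * instead of copying them where the kernel allows it. */
static ssize_t forward_chunk(int iio_fd, int sock_fd, size_t len)
{
        int pipefd[2];
        ssize_t n;

        if (pipe(pipefd) < 0)
                return -1;

        /* Move pages from the device into the pipe... */
        n = splice(iio_fd, NULL, pipefd[1], NULL, len, SPLICE_F_MOVE);
        if (n > 0)
                /* ...then from the pipe into the socket. */
                n = splice(pipefd[0], NULL, sock_fd, NULL, n,
                           SPLICE_F_MOVE);

        close(pipefd[0]);
        close(pipefd[1]);
        return n;
}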

It turned out not to work very well, because of how splice() works: 
the kernel would clear the pages to be exchanged with the pipe's data 
pages, so the speed gain from not copying data pages was 
underwhelming, and the CPU usage was almost as high as with regular 
copies (CPU usage being our limiting factor here).

We then settled on a dmabuf-based interface [1], which works great as 
a userspace/kernelspace interface but doesn't allow zero-copy to disk 
or network (until someone adds support for it, I guess). The patchset 
was rejected on the grounds that (against all documentation) dmabuf 
really is a gpu/drm thing and shouldn't be used elsewhere.
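
For context, userspace consumes such a buffer through the stock 
dma-buf CPU access sequence below (a generic sketch of the mmap + 
DMA_BUF_IOCTL_SYNC pattern, not the IIO-specific ioctls from [1]): 
map the dma-buf fd and bracket CPU reads with sync ioctls so caches 
stay coherent with the device.

#include <linux/dma-buf.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static int read_dmabuf(int dmabuf_fd, size_t len)
{
        struct dma_buf_sync sync = { 0 };
        void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, dmabuf_fd, 0);

        if (p == MAP_FAILED)
                return -1;

        /* Tell the exporter the CPU is about to read the buffer. */
        sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        /* ... consume the data at 'p' here ... */

        /* End of CPU access. */
        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        munmap(p, len);
        return 0;
}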

My question for you is: would your new io_uring zerocopy work allow, 
for instance, transferring data from storage to the network, without 
triggering this "page clearing" mechanism that splice() has?

Cheers!
-Paul

[1] 
https://lore.kernel.org/linux-doc/[email protected]/T/




* Re: Quick question about your io_uring zerocopy work
From: Pavel Begunkov @ 2022-07-21 13:22 UTC
  To: Paul Cercueil; +Cc: io-uring

On 7/21/22 10:38, Paul Cercueil wrote:
> Hi Pavel,
> 
> Good job on the io_uring zerocopy stuff; it looks really interesting!
> 
> I'm working on adding a new userspace/kernelspace buffer interface for the IIO subsystem. My first idea (from a few years ago already) was to add support for splice(), so that data could be sent from IIO hardware directly to a file or to the network.
> 
> It turned out not to work very well, because of how splice() works: the kernel would clear the pages to be exchanged with the pipe's data pages, so the speed gain from not copying data pages was underwhelming, and the CPU usage was almost as high as with regular copies (CPU usage being our limiting factor here).
> 
> We then settled on a dmabuf-based interface [1], which works great as a userspace/kernelspace interface but doesn't allow zero-copy to disk or network (until someone adds support for it, I guess). The patchset was rejected on the grounds that (against all documentation) dmabuf really is a gpu/drm thing and shouldn't be used elsewhere.

The idea I've got is that passing buffers as dmabufs is the only 
viable approach, especially since GPU <-> NIC transfers are of much 
interest, and there have been attempts to expose NVMe's CMB as 
dma-bufs (I'm not sure where that ended up).

> My question for you is: would your new io_uring zerocopy work allow, for instance, transferring data from storage to the network, without triggering this "page clearing" mechanism that splice() has?

That's the plan. We prototyped it before, but some more work needs 
to be done.
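
Roughly, the send side would look like the sketch below, using the 
liburing helper for IORING_OP_SEND_ZC that we've been adding 
(io_uring_prep_send_zc(); the details may still change, so treat 
this as illustrative only):

#include <liburing.h>

static int send_zc(struct io_uring *ring, int sock_fd,
                   const void *buf, size_t len)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        struct io_uring_cqe *cqe;

        if (!sqe)
                return -1;

        io_uring_prep_send_zc(sqe, sock_fd, buf, len, 0, 0);
        io_uring_submit(ring);

        /*
         * A zerocopy send posts two CQEs: the send result (flagged
         * IORING_CQE_F_MORE) and, once the kernel no longer
         * references the buffer, a notification (IORING_CQE_F_NOTIF).
         * The buffer must stay untouched until the notification.
         */
        for (int i = 0; i < 2; i++) {
                if (io_uring_wait_cqe(ring, &cqe) < 0)
                        return -1;
                io_uring_cqe_seen(ring, cqe);
        }
        return 0;
}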

> [1] https://lore.kernel.org/linux-doc/[email protected]/T/

-- 
Pavel Begunkov

