From: Paul Cercueil <[email protected]>
To: Christoph Hellwig <[email protected]>
Cc: "Jonathan Cameron" <[email protected]>,
	"Sumit Semwal" <[email protected]>,
	"Christian König" <[email protected]>,
	[email protected], [email protected],
	[email protected], [email protected],
	"Michael Hennerich" <[email protected]>,
	"Alexandru Ardelean" <[email protected]>,
	[email protected], [email protected]
Subject: Re: IIO, dmabuf, io_uring
Date: Mon, 16 Aug 2021 11:27:40 +0200
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

Hi Christoph,

On Sat, Aug 14 2021 at 09:30:19 +0200, Christoph Hellwig 
<[email protected]> wrote:
> On Fri, Aug 13, 2021 at 01:41:26PM +0200, Paul Cercueil wrote:
>> Hi,
>> 
>> A few months ago we (ADI) tried to upstream the interface we use
>> with our high-speed ADCs and DACs. It is a system with custom ioctls
>> on the iio device node to dequeue and enqueue buffers (allocated
>> with dma_alloc_coherent), that can then be mmap'd by userspace
>> applications. Anyway, it was ultimately denied entry [1]; this API
>> was okay in ~2014 when it was designed, but it feels like
>> re-inventing the wheel in 2021.
>> 
>> Back to the drawing board, and we'd like to design something that
>> we can actually upstream. This high-speed interface looks awfully
>> similar to DMABUF, so we may try to implement a DMABUF interface
>> for IIO, unless someone has a better idea.
> 
> To me this does sound a lot like a dma buf use case.  The interesting
> question to me is how to signal arrival of new data, or readiness to
> consume more data.  I suspect that people that are actually using
> dmabuf heavily at the moment (dri/media folks) might be able to chime
> in a little more on that.

Thanks for the feedback.

I haven't looked too much into how dmabuf works, but IIO device nodes
currently expose a regular character-device interface, so I believe
poll() flags can be used to signal the arrival of new data.
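
For what it's worth, here is a minimal userspace sketch of what I
mean. Untested, and it assumes the samples are exposed through the
usual /dev/iio:deviceX character device:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Assumption: samples come through the usual IIO chardev. */
	int fd = open("/dev/iio:device0", O_RDONLY);
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	char buf[4096];
	ssize_t ret;

	if (fd < 0)
		return 1;

	/* Block until the driver signals that new samples are ready. */
	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
		ret = read(fd, buf, sizeof(buf));
		if (ret > 0)
			printf("got %zd bytes of samples\n", ret);
	}

	close(fd);
	return 0;
}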

>> Our first use case is that we want userspace applications to be
>> able to dequeue buffers of samples (from ADCs), and/or enqueue
>> buffers of samples (for DACs), and to be able to manipulate them
>> (mmapped buffers). With a DMABUF interface, I guess the userspace
>> application would dequeue a dma buffer from the driver, mmap it,
>> read/write the data, unmap it, then enqueue it to the IIO driver
>> again so that it can be disposed of. Does that sound sane?
>> 
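
To make that concrete, here is roughly the flow I imagine. The ioctl
names and the structure below are entirely hypothetical, just for
illustration; nothing like this exists in mainline today:

/*
 * Hypothetical IIO <-> DMABUF interface: none of these ioctls exist,
 * they only illustrate the dequeue/mmap/enqueue flow described above.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

struct iio_dmabuf_block {
	int      fd;    /* dmabuf fd handed out by the driver */
	uint32_t size;  /* buffer size in bytes */
};

#define IIO_DMABUF_DEQUEUE _IOR('i', 0x91, struct iio_dmabuf_block)
#define IIO_DMABUF_ENQUEUE _IOW('i', 0x92, struct iio_dmabuf_block)

static int process_one_block(int iio_fd)
{
	struct iio_dmabuf_block block;
	void *data;

	/* Dequeue a filled buffer; the driver hands us a dmabuf fd. */
	if (ioctl(iio_fd, IIO_DMABUF_DEQUEUE, &block) < 0)
		return -1;

	data = mmap(NULL, block.size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, block.fd, 0);
	if (data == MAP_FAILED)
		return -1;

	/* ... read or modify the samples in place ... */

	munmap(data, block.size);

	/* Hand the buffer back so the driver can fill/drain it again. */
	return ioctl(iio_fd, IIO_DMABUF_ENQUEUE, &block);
}

The appeal is that the same dmabuf fd could also be passed to another
subsystem instead of being mmap'd, which is exactly what the second
use case below would need.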
>> Our second use case is - and that's where things get tricky - to be
>> able to stream the samples to another computer for processing, over
>> Ethernet or USB. Our typical setup is a high-speed ADC/DAC on a dev
>> board with an FPGA and a weak soft-core or low-power CPU; processing
>> the data in situ is not an option. Copying the data from one buffer
>> to another is not an option either (way too slow), so we absolutely
>> want zero-copy.
>> 
>> Usual userspace zero-copy techniques (vmsplice+splice, MSG_ZEROCOPY,
>> etc.) don't really work with mmapped kernel buffers allocated for
>> DMA [2] and/or have a huge overhead, so the way I see it, we would
>> also need DMABUF support in both the Ethernet stack and the USB
>> (functionfs) stack. However, as far as I understood, DMABUF is
>> mostly a DRM/V4L2 thing, so I am really not sure we have the right
>> idea here.
>> 
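
To illustrate the overhead I mean: MSG_ZEROCOPY has to be enabled per
socket, and every zero-copy send must eventually be matched by reaping
a completion notification from the socket's error queue before the
buffer can be reused. A rough sketch (v4.14+ kernels, error handling
mostly omitted):

#include <errno.h>
#include <sys/socket.h>

/* Send one sample buffer with MSG_ZEROCOPY (sketch only). */
static int send_zerocopy(int sock, const void *buf, size_t len)
{
	int one = 1;
	char control[128];
	struct msghdr msg = {
		.msg_control = control,
		.msg_controllen = sizeof(control),
	};

	/* Opt-in is required once per socket. */
	if (setsockopt(sock, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		return -1;

	if (send(sock, buf, len, MSG_ZEROCOPY) < 0)
		return -1;

	/*
	 * The kernel pins the pages until a completion shows up on the
	 * socket's error queue; we must not reuse the buffer before
	 * that, and we pay an extra recvmsg() per notification. Error
	 * queue reads never block, hence this (example-only) busy loop.
	 */
	while (recvmsg(sock, &msg, MSG_ERRQUEUE) < 0 && errno == EAGAIN)
		;

	return 0;
}

And even then, the data has to come from regular user pages, which is
exactly what our DMA buffers are not.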
>> And finally, there is the new kid in town, io_uring. I am not very
>> literate on the topic, but it does not seem to be able to handle
>> DMA buffers (yet?). The idea that we could dequeue a buffer of
>> samples from the IIO device and send it over the network in one
>> single syscall is appealing, though.
> 
> Think of io_uring really just as an async syscall layer.  It doesn't
> replace DMA buffers, but can be used as a different and for some
> workloads more efficient way to dispatch syscalls.

That was my thought, yes. Thanks.
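
Out of curiosity, I looked at what the "one single syscall" idea would
look like with liburing. A minimal sketch, assuming the samples can
simply be read() from the IIO device fd (error handling and the
short-read case omitted):

#include <liburing.h>

/*
 * Read a buffer of samples from the IIO device and send it out on a
 * socket with a single io_uring submission (sketch only).
 */
static int forward_samples(int iio_fd, int sock, void *buf, size_t len)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int i;

	io_uring_queue_init(8, &ring, 0);

	/* First SQE: read one buffer of samples from the IIO device. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, iio_fd, buf, len, 0);
	sqe->flags |= IOSQE_IO_LINK;	/* only send if the read succeeds */

	/* Second SQE: send the same buffer out on the socket. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_send(sqe, sock, buf, len, 0);

	/* One syscall dispatches both operations. */
	io_uring_submit(&ring);

	/* Reap the two completions. */
	for (i = 0; i < 2; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	return 0;
}

Not zero-copy, of course, but it shows the async dispatch you
describe: both operations go to the kernel with a single
io_uring_submit() call.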

Cheers,
-Paul


