From: Avi Kivity <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: Re: memory access op ideas
Date: Sun, 24 Apr 2022 16:04:01 +0300 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 23/04/2022 20.30, Jens Axboe wrote:
> On 4/23/22 10:23 AM, Avi Kivity wrote:
>> Perhaps the interface should be kept separate from io_uring. e.g. use
>> a pidfd to represent the address space, and then issue
>> IORING_OP_PREADV/IORING_OP_PWRITEV to initiate dma. Then one can copy
>> across process boundaries.
> Then you just made it a ton less efficient, particularly if you used the
> vectored read/write. For this to make sense, I think it has to be a
> separate op. At least that's the only implementation I'd be willing to
> entertain for the immediate copy.
Sorry, I caused a lot of confusion by bundling immediate copy and a DMA
engine interface. For sure the immediate copy should be a direct
implementation like you posted!
User-to-user copies are another matter. I feel like that should be a
stand-alone driver, and that io_uring should be an io_uring-y way to
access it. Just like io_uring isn't an NVMe driver.
>> A different angle is to expose the dma device as a separate fd.
>> This can be useful as dma engine can often do other operations, like
>> xor or crc or encryption or compression. In any case I'd argue for the
>> interface to be useful outside io_uring, although that considerably
>> increases the scope. I also don't have a direct use case for it,
>> though I'm sure others will.
> I'd say that whoever does it gets to at least dictate the initial
> implementation.
Of course, but bikeshedding from the sidelines never hurt anyone.
> For outside of io_uring, you're looking at a sync
> interface, which I think already exists for this (ioctls?).
Yes, it would be an asynchronous interface. I don't know if one exists,
but I can't claim to have kept track.
>
>> The kernel itself should find the DMA engine useful for things like
>> memory compaction.
> That's a very different use case though and just deals with wiring it up
> internally.
>
> Let's try and keep the scope here reasonable, imho nothing good comes
> out of attempting to do all the things at once.
>
For sure, I'm just noting that the DMA engine has many different uses
and so deserves an interface that is untied to io_uring.
Thread overview: 22+ messages
2022-04-13 10:33 memory access op ideas Avi Kivity
2022-04-22 12:52 ` Hao Xu
2022-04-22 13:24 ` Hao Xu
2022-04-22 13:38 ` Jens Axboe
2022-04-23 7:19 ` Hao Xu
2022-04-23 16:14 ` Avi Kivity
2022-04-22 14:50 ` Jens Axboe
2022-04-22 15:03 ` Jens Axboe
2022-04-23 16:30 ` Avi Kivity
2022-04-23 17:32 ` Jens Axboe
2022-04-23 18:02 ` Jens Axboe
2022-04-23 18:11 ` Jens Axboe
2022-04-22 20:03 ` Walker, Benjamin
2022-04-23 10:19 ` Pavel Begunkov
2022-04-23 13:20 ` Jens Axboe
2022-04-23 16:23 ` Avi Kivity
2022-04-23 17:30 ` Jens Axboe
2022-04-24 13:04 ` Avi Kivity [this message]
2022-04-24 13:30 ` Jens Axboe
2022-04-24 14:56 ` Avi Kivity
2022-04-25 0:45 ` Jens Axboe
2022-04-25 18:05 ` Walker, Benjamin