From: Avi Kivity <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: Re: memory access op ideas
Date: Sun, 24 Apr 2022 17:56:00 +0300
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 24/04/2022 16.30, Jens Axboe wrote:
> On 4/24/22 7:04 AM, Avi Kivity wrote:
>> On 23/04/2022 20.30, Jens Axboe wrote:
>>> On 4/23/22 10:23 AM, Avi Kivity wrote:
>>>> Perhaps the interface should be kept separate from io_uring. e.g. use
>>>> a pidfd to represent the address space, and then issue
>>>> IORING_OP_PREADV/IORING_OP_PWRITEV to initiate dma. Then one can copy
>>>> across process boundaries.
>>> Then you just made it a ton less efficient, particularly if you used the
>>> vectored read/write. For this to make sense, I think it has to be a
>>> separate op. At least that's the only implementation I'd be willing to
>>> entertain for the immediate copy.
>>
>> Sorry, I caused a lot of confusion by bundling immediate copy and a
>> DMA engine interface. For sure the immediate copy should be a direct
>> implementation like you posted!
>>
>> User-to-user copies are another matter. I feel like that should be a
>> stand-alone driver, and that io_uring should be an io_uring-y way to
>> access it. Just like io_uring isn't an NVMe driver.
> Not sure I understand your logic here or the io_uring vs nvme driver
> reference, to be honest. io_uring _is_ a standalone way to access it,
> you can use it sync or async through that.
>
> If you're talking about a standalone op vs being useful from a command
> itself, I do think both have merit and I can see good use cases for
> both.
I'm saying that if DMA is exposed to userspace, it should have a regular
synchronous interface (maybe open("/dev/dma"), maybe something else).
io_uring adds asynchrony to everything, but it's not everything's driver.

Anyway, maybe we've drifted off somewhere; this should probably be decided
by pragmatic concerns (like whatever the author of the driver prefers).
>
>>> For outside of io_uring, you're looking at a sync
>>> interface, which I think already exists for this (ioctls?).
>>
>> Yes, it would be a asynchronous interface. I don't know if one exists,
>> but I can't claim to have kept track.
> Again not following. So you're saying there should be a 2nd async
> interface for it?
No. And I misspelled "synchronous" as "asynchronous" (I was agreeing
with you that it would be a sync interface).
>
>>>> The kernel itself should find the DMA engine useful for things like
>>>> memory compaction.
>>> That's a very different use case though and just deals with wiring it up
>>> internally.
>>>
>>> Let's try and keep the scope here reasonable, imho nothing good comes
>>> out of attempting to do all the things at once.
>>>
>> For sure, I'm just noting that the DMA engine has many different uses
>> and so deserves an interface that is untied to io_uring.
> And again, not following, what's the point of having 2 interfaces for
> the same thing? I can sort of agree if one is just the basic ioctl kind
> of interface, a basic sync one. But outside of that I'm a bit puzzled as
> to why that would be useful at all.
>
Yes, I meant the basic sync one. Sorry I caused quite a lot of confusion
here!