public inbox for io-uring@vger.kernel.org
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: Kanchan Joshi <joshi.k@samsung.com>,
	Pavel Begunkov <asml.silence@gmail.com>,
	linux-block@vger.kernel.org, io-uring <io-uring@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Christoph Hellwig <hch@lst.de>, Anuj Gupta <anuj20.g@samsung.com>,
	Nitesh Shetty <nj.shetty@samsung.com>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>
Subject: Re: [LSF/MM/BPF TOPIC] dmabuf backed read/write
Date: Mon, 9 Feb 2026 08:54:49 -0400	[thread overview]
Message-ID: <20260209125449.GE1874040@nvidia.com> (raw)
In-Reply-To: <b69f230e-717c-4ad4-b086-ea480cf39b88@amd.com>

On Mon, Feb 09, 2026 at 11:13:42AM +0100, Christian König wrote:
> On 2/9/26 10:54, Kanchan Joshi wrote:
> > On 2/6/2026 8:50 PM, Jason Gunthorpe wrote:
> >>> I'm actually curious, is there a way to somehow create a
> >>> MEMORY_DEVICE_PCI_P2PDMA mapping out of a random dma-buf?
> >> No. The driver owning the P2P MMIO has to do this during its probe and
> >> then it has to provide a VMA with normal pages so GUP works. This is
> >> usually not hard on the exporting driver side.
> >>
> >> It costs some memory but then everything works naturally in the IO
> >> stack.
> >>
> >> Your project is interesting and would be a nice improvement, but I
> >> also don't entirely understand why you are bothering when the P2PDMA
> >> solution is already fully there ready to go... Is something preventing
> >> you from creating the P2PDMA pages for your exporting driver?
> > 
> > The exporter driver may have opted out of the P2PDMA struct page path
> > (MEMORY_DEVICE_PCI_P2PDMA route). This may be a design choice to avoid
> > the system RAM overhead.

Currently you have to pay this tax to use the block stack.

The tax is certainly heavy on x86, but ARM with a 64k page size, for
example, pays only 83MB for the same configuration.

> That is a good argument, but the killer reason for DMA-buf not to
> use pages (or folios) is that the exported resource is sometimes
> not even memory.

I don't think anyone is saying that all DMA-buf must use pages, just
that if you want to use the MMIO with the *block stack* then a
page-based approach already exists and is already in use, usually
through VMAs.

I'm aware of all the downsides, but this proposal doesn't explain
which ones motivate the work. Is the main motivation the lack of
pre-registration, or the struct page tax?

Jason


Thread overview: 25+ messages
     [not found] <CGME20260204153051epcas5p1c2efd01ef32883680fed2541f9fca6c2@epcas5p1.samsung.com>
2026-02-03 14:29 ` [LSF/MM/BPF TOPIC] dmabuf backed read/write Pavel Begunkov
2026-02-03 18:07   ` Keith Busch
2026-02-04  6:07     ` Anuj Gupta/Anuj Gupta
2026-02-04 11:38     ` Pavel Begunkov
2026-02-04 15:26   ` Nitesh Shetty
2026-02-09 11:15     ` Pavel Begunkov
2026-02-05  3:12   ` Ming Lei
2026-02-05 18:13     ` Pavel Begunkov
2026-02-05 17:41   ` Jason Gunthorpe
2026-02-05 19:06     ` Pavel Begunkov
2026-02-05 23:56       ` Jason Gunthorpe
2026-02-06 15:08         ` Pavel Begunkov
2026-02-06 15:20           ` Jason Gunthorpe
2026-02-06 17:57             ` Pavel Begunkov
2026-02-06 18:37               ` Jason Gunthorpe
2026-02-09 10:59                 ` Pavel Begunkov
2026-02-09 13:06                   ` Jason Gunthorpe
2026-02-09 13:09                     ` Christian König
2026-02-09 13:24                       ` Jason Gunthorpe
2026-02-09 13:55                         ` Christian König
2026-02-09 14:01                           ` Jason Gunthorpe
2026-02-09  9:54             ` Kanchan Joshi
2026-02-09 10:13               ` Christian König
2026-02-09 12:54                 ` Jason Gunthorpe [this message]
2026-02-09 10:04   ` Kanchan Joshi
