From: Dragos Tatulea <dtatulea@nvidia.com>
To: <almasrymina@google.com>, <asml.silence@gmail.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Simon Horman <horms@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	Andrew Lunn <andrew+netdev@lunn.ch>
Cc: Dragos Tatulea <dtatulea@nvidia.com>, <cratiu@nvidia.com>,
	<parav@nvidia.com>, Christoph Hellwig <hch@infradead.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<io-uring@vger.kernel.org>, <linux-rdma@vger.kernel.org>
Subject: [RFC net-next v3 0/7] devmem/io_uring: allow more flexibility for ZC DMA devices
Date: Fri, 15 Aug 2025 14:03:41 +0300
Message-ID: <20250815110401.2254214-2-dtatulea@nvidia.com>

For TCP zerocopy rx (io_uring, devmem), there is an assumption that the
parent device of the netdev can do DMA. However, that is not always the
case:
- Scalable Function netdevs [1] have the DMA device in the grandparent.
- For Multi-PF netdevs [2], queues can be associated with different DMA
  devices.

The series adds an API for getting the DMA device of a netdev queue.
Drivers with special requirements can implement the newly added queue
management op. Otherwise, the parent device is used, as before.

The series then switches io_uring zcrx and devmem to this API and adds
an ndo_queue_dma_dev op for mlx5.
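
A minimal sketch of the intended fallback behaviour follows. The op
member name mirrors the cover letter's ndo_queue_dma_dev; the helper
name and exact signatures are assumptions, not the code from patch 1
(the real definitions live in include/net/netdev_queues.h):

/* Illustrative helper only, names/signatures may differ. */
static struct device *netdev_queue_get_dma_dev(struct net_device *dev, int idx)
{
	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;

	/* A driver with special needs (SF, multi-PF) reports the real
	 * DMA device for this queue via the new op (name assumed here).
	 */
	if (ops && ops->ndo_queue_dma_dev)
		return ops->ndo_queue_dma_dev(dev, idx);

	/* Otherwise fall back to the parent device, as before. */
	return dev->dev.parent;
}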

The last part of the series changes the devmem rx bind to get the DMA
device per queue and rejects binds where the requested queues use
different DMA devices. The tx bind is left as is.
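
As a rough illustration of the new restriction (the function name below
is assumed, not taken from the patches, and it reuses the illustrative
per-queue helper sketched above), the rx bind path could resolve a
single DMA device for all requested queues and bail out on a mismatch:

static struct device *
net_devmem_rx_bind_dma_dev(struct net_device *dev,
			   const u32 *rxq_idx, unsigned int nr_queues)
{
	struct device *dma_dev = NULL;
	unsigned int i;

	for (i = 0; i < nr_queues; i++) {
		/* Same illustrative per-queue helper as above. */
		struct device *qdev = netdev_queue_get_dma_dev(dev, rxq_idx[i]);

		/* Binding across queues with different DMA devices is
		 * rejected rather than silently picking one of them.
		 */
		if (dma_dev && qdev != dma_dev)
			return ERR_PTR(-EOPNOTSUPP);

		dma_dev = qdev;
	}

	return dma_dev;
}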

[1] Documentation/networking/device_drivers/ethernet/mellanox/mlx5/switchdev.rst
[2] Documentation/networking/multi-pf-netdev.rst

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>

---
Changes since v2 [3]:
- Downgraded to RFC status until consensus is reached.
- Implemented a more generic approach, as discussed during the
  v2 review.
- Refactored devmem to get the DMA device for multiple rx queues, for
  multi-PF netdev support.
- Renamed the series with a more generic name.

Changes since v1 [2]:
- Dropped the Fixes tag.
- Added more documentation as requested.
- Renamed the patch title to better reflect its purpose.

Changes since RFC [1]:
- Upgraded from RFC status.
- Dropped the driver-specific bits in favor of a generic solution.
- Implemented a single patch as a fix, as requested in the RFC review.
- Support for multi-PF netdevs will be addressed in a subsequent patch
  series.

[1] RFC: https://lore.kernel.org/all/20250702172433.1738947-2-dtatulea@nvidia.com/
[2]  v1: https://lore.kernel.org/all/20250709124059.516095-2-dtatulea@nvidia.com/
[3]  v2: https://lore.kernel.org/all/20250711092634.2733340-2-dtatulea@nvidia.com/
---
Dragos Tatulea (7):
  queue_api: add support for fetching per queue DMA dev
  io_uring/zcrx: add support for custom DMA devices
  net: devmem: get netdev DMA device via new API
  net/mlx5e: add op for getting netdev DMA device
  net: devmem: pull out dma_dev out of net_devmem_bind_dmabuf
  net: devmem: pre-read requested rx queues during bind
  net: devmem: allow binding on rx queues with same DMA devices

 .../net/ethernet/mellanox/mlx5/core/en_main.c |  24 ++++
 include/net/netdev_queues.h                   |  20 ++++
 io_uring/zcrx.c                               |   3 +-
 net/core/devmem.c                             |   8 +-
 net/core/devmem.h                             |   2 +
 net/core/netdev-genl.c                        | 113 +++++++++++++-----
 6 files changed, 137 insertions(+), 33 deletions(-)

-- 
2.50.1

