public inbox for [email protected]
* [PATCH v6 00/15] io_uring zero copy rx
@ 2024-10-16 18:52 David Wei
  2024-10-16 18:52 ` [PATCH v6 01/15] net: prefix devmem specific helpers David Wei
                   ` (15 more replies)
  0 siblings, 16 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

This patchset adds support for zero copy rx into userspace pages using
io_uring, eliminating a kernel to user copy.

We configure a page pool that a driver uses to fill a hw rx queue to
hand out user pages instead of kernel pages. Any data that ends up
hitting this hw rx queue will thus be dma'd into userspace memory
directly, without needing to be bounced through kernel memory. 'Reading'
data out of a socket instead becomes a _notification_ mechanism, where
the kernel tells userspace where the data is. The overall approach is
similar to the devmem TCP proposal.

This relies on hw header/data split, flow steering and RSS to ensure
packet headers remain in kernel memory and only desired flows hit a hw
rx queue configured for zero copy. Configuring this is outside of the
scope of this patchset.

We share netdev core infra with devmem TCP. The main difference is that
io_uring is used for the uAPI and the lifetime of all objects is bound
to an io_uring instance. Data is 'read' using a new io_uring request
type. When done, data is returned via a new shared refill queue. A zero
copy page pool refills a hw rx queue from this refill queue directly. Of
course, the lifetime of these data buffers is managed by io_uring
rather than the networking stack, with different refcounting rules.

This patchset is the first step adding basic zero copy support. We will
extend this iteratively with new features e.g. dynamically allocated
zero copy areas, THP support, dmabuf support, improved copy fallback,
general optimisations and more.

In terms of netdev support, we're first targeting Broadcom bnxt. Patches
aren't included since Taehee Yoo has already sent a more comprehensive
patchset adding support in [1]. Google gve should already support this,
and Mellanox mlx5 support is WIP pending driver changes.

===========
Performance
===========

Note: Comparison with epoll + TCP_ZEROCOPY_RECEIVE isn't done yet.

Test setup:
* AMD EPYC 9454
* Broadcom BCM957508 200G
* Kernel v6.11 base [2]
* liburing fork [3]
* kperf fork [4]
* 4K MTU
* Single TCP flow

With application thread + net rx softirq pinned to _different_ cores:

+-----------+-------------------+
| epoll     | io_uring          |
|-----------|-------------------|
| 82.2 Gbps | 116.2 Gbps (+41%) |
+-----------+-------------------+

Pinned to _same_ core:

+-----------+-------------------+
| epoll     | io_uring          |
|-----------|-------------------|
| 62.6 Gbps | 80.9 Gbps (+29%)  |
+-----------+-------------------+

==============
Patch overview
==============

Networking folks would be mostly interested in patches 1-8, 11, 12 and
14. Patches 1-8 make the necessary prerequisite changes in netdev core.

Patch 11 implements struct memory_provider_ops, patch 12 adds a
recv_actor_t func used with tcp_read_sock(), and patch 14 passes it all
to netdev via the queue API.

io_uring folks would be mostly interested in patches 9-15 (a rough
userspace sketch follows the list):

* Initial registration that sets up a hw rx queue.
* Shared ringbuf for userspace to return buffers.
* New request type for doing zero copy rx reads.
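
For illustration only, here is a rough userspace sketch of the registration
and refill ring mapping, written against the uAPI added in patch 9 (struct
io_uring_zcrx_ifq_reg, IORING_REGISTER_ZCRX_IFQ, IORING_OFF_RQ_RING). The
helper name is made up, the area registration that area_ptr points at is
added later in the series and not shown, and error handling is abbreviated:

#include <linux/io_uring.h>	/* uAPI with this series applied */
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical helper, not part of the series. The ring must have been
 * created with IORING_SETUP_DEFER_TASKRUN and IORING_SETUP_CQE32.
 */
static void *zcrx_register_ifq(int ring_fd, __u32 ifindex, __u32 rxq,
			       __u64 area_ptr,
			       struct io_uring_zcrx_ifq_reg *reg)
{
	void *rq_ring;

	memset(reg, 0, sizeof(*reg));
	reg->if_idx = ifindex;		/* netdev to attach to */
	reg->if_rxq = rxq;		/* hw rx queue set up for zero copy */
	reg->rq_entries = 4096;		/* refill ring size */
	reg->area_ptr = area_ptr;	/* struct io_uring_zcrx_area_reg */

	if (syscall(__NR_io_uring_register, ring_fd,
		    IORING_REGISTER_ZCRX_IFQ, reg, 1))
		return NULL;

	/* the kernel fills reg->offsets; the refill ring has its own offset */
	rq_ring = mmap(NULL, reg->offsets.mmap_sz, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_RQ_RING);
	return rq_ring == MAP_FAILED ? NULL : rq_ring;
}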

=====
Links
=====

Broadcom bnxt support:
[1]: https://lore.kernel.org/netdev/[email protected]/

Linux kernel branch:
[2]: https://github.com/isilence/linux.git zcrx/v6

liburing for testing:
[3]: https://github.com/isilence/liburing.git zcrx/next

kperf for testing:
[4]: https://git.kernel.dk/kperf.git

Changes in v6:
--------------
Please note: Comparison with TCP_ZEROCOPY_RECEIVE isn't done yet.

net:
* Drop a devmem.h clean up patch.
* Migrate to netdev_get_by_index from deprecated API.
* Fix !CONFIG_NET_DEVMEM build.
* Don't return into the page pool cache directly, use a new helper
  (page_pool_mp_return_in_cache()) instead.
* Refactor napi_execute.

io_uring:
* Require IORING_RECV_MULTISHOT flag set.
* Add unselectable CONFIG_IO_URING_ZCRX.
* Pulled latest io_uring changes.
* Unexport io_uring_pp_zc_ops.

Changes in v5:
--------------
* Rebase on top of merged net_iov + netmem infra.
* Decouple net_iov from devmem TCP.
* Use netdev queue API to allocate an rx queue.
* Minor uAPI enhancements for future extensibility.
* QoS improvements with request throttling.

Changes in RFC v4:
------------------
* Rebased on top of Mina Almasry's TCP devmem patchset and latest
  net-next, now sharing common infra e.g.:
    * netmem_t and net_iovs
    * Page pool memory provider
* The registered buffer (rbuf) completion queue where completions from
  io_recvzc requests are posted is removed. Now these post into the main
  completion queue, using big (32-byte) CQEs. The first 16 bytes are an
  ordinary CQE, while the latter 16 bytes contain the io_uring_rbuf_cqe
  as before. This vastly simplifies the uAPI and removes a level of
  indirection in userspace when looking for payloads.
  * The rbuf refill queue is still needed for userspace to return
    buffers to kernel.
* Simplified code and uAPI on the io_uring side, particularly
  io_recvzc() and io_zc_rx_recv(). Many unnecessary lines were removed
  e.g. extra msg flags, readlen, etc.

Changes in RFC v3:
------------------
* Rebased on top of Jakub Kicinski's memory provider API RFC. The ZC
  pool added is now a backend for memory provider.
* We're also reusing ppiov infrastructure. The refcounting rules stay
  the same but it's shifted into ppiov->refcount. That lets us
  flexibly manage buffer lifetimes without adding any extra code to the
  common networking paths. It'd also make it easier to support dmabufs
  and device memory in the future.
  * io_uring also knows about pages, and so ppiovs might unnecessarily
    break tools inspecting data; that can easily be solved later.

Many patches are not for upstream as they depend on work in progress,
namely from Mina:

* struct netmem_t
* Driver ndo commands for Rx queue configs
* struct page_pool_iov and shared pp infra

Changes in RFC v2:
------------------
* Added copy fallback support if userspace memory allocated for ZC Rx
  runs out, or if header splitting or flow steering fails.
* Added veth support for ZC Rx, for testing and demonstration. We will
  need to figure out what driver would be best for such testing
  functionality in the future. Perhaps netdevsim?
* Added socket registration API to io_uring to associate specific
  sockets with ifqs/Rx queues for ZC.
* Added multi-socket support, such that multiple connections can be
  steered into the same hardware Rx queue.
* Added Netbench server/client support.

David Wei (4):
  io_uring/zcrx: add interface queue and refill queue
  io_uring/zcrx: add io_zcrx_area
  io_uring/zcrx: add io_recvzc request
  io_uring/zcrx: set pp memory provider for an rx queue

Jakub Kicinski (1):
  net: page_pool: create hooks for custom page providers

Pavel Begunkov (10):
  net: prefix devmem specific helpers
  net: generalise net_iov chunk owners
  net: prepare for non devmem TCP memory providers
  net: page_pool: add ->scrub mem provider callback
  net: page pool: add helper creating area from pages
  net: page_pool: introduce page_pool_mp_return_in_cache
  net: add helper executing custom callback from napi
  io_uring/zcrx: implement zerocopy receive pp memory provider
  io_uring/zcrx: add copy fallback
  io_uring/zcrx: throttle receive requests

 Kconfig                                 |   2 +
 include/linux/io_uring_types.h          |   3 +
 include/net/busy_poll.h                 |   6 +
 include/net/netmem.h                    |  21 +-
 include/net/page_pool/memory_provider.h |  14 +
 include/net/page_pool/types.h           |  10 +
 include/uapi/linux/io_uring.h           |  54 ++
 io_uring/KConfig                        |  10 +
 io_uring/Makefile                       |   1 +
 io_uring/io_uring.c                     |   7 +
 io_uring/io_uring.h                     |  10 +
 io_uring/memmap.c                       |   8 +
 io_uring/net.c                          |  74 +++
 io_uring/opdef.c                        |  16 +
 io_uring/register.c                     |   7 +
 io_uring/rsrc.c                         |   2 +-
 io_uring/rsrc.h                         |   1 +
 io_uring/zcrx.c                         | 838 ++++++++++++++++++++++++
 io_uring/zcrx.h                         |  76 +++
 net/core/dev.c                          |  77 ++-
 net/core/devmem.c                       |  51 +-
 net/core/devmem.h                       |  45 +-
 net/core/page_pool.c                    | 102 ++-
 net/core/page_pool_user.c               |  15 +-
 net/ipv4/tcp.c                          |   8 +-
 25 files changed, 1384 insertions(+), 74 deletions(-)
 create mode 100644 include/net/page_pool/memory_provider.h
 create mode 100644 io_uring/KConfig
 create mode 100644 io_uring/zcrx.c
 create mode 100644 io_uring/zcrx.h

-- 
2.43.5


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [PATCH v6 01/15] net: prefix devmem specific helpers
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 02/15] net: generalise net_iov chunk owners David Wei
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Add prefixes to all helpers that are specific to devmem TCP, i.e.
net_iov_binding[_id].

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 net/core/devmem.c |  2 +-
 net/core/devmem.h | 14 +++++++-------
 net/ipv4/tcp.c    |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 11b91c12ee11..858982858f81 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -93,7 +93,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 
 void net_devmem_free_dmabuf(struct net_iov *niov)
 {
-	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+	struct net_devmem_dmabuf_binding *binding = net_devmem_iov_binding(niov);
 	unsigned long dma_addr = net_devmem_get_dma_addr(niov);
 
 	if (WARN_ON(!gen_pool_has_addr(binding->chunk_pool, dma_addr,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..99782ddeca40 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -86,11 +86,16 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
 }
 
 static inline struct net_devmem_dmabuf_binding *
-net_iov_binding(const struct net_iov *niov)
+net_devmem_iov_binding(const struct net_iov *niov)
 {
 	return net_iov_owner(niov)->binding;
 }
 
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
+{
+	return net_devmem_iov_binding(niov)->id;
+}
+
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
 	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
@@ -99,11 +104,6 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
-{
-	return net_iov_owner(niov)->binding->id;
-}
-
 static inline void
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
@@ -171,7 +171,7 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	return 0;
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 82cc4a5633ce..e928efc22f80 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2494,7 +2494,7 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 
 				/* Will perform the exchange later */
 				dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
-				dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+				dmabuf_cmsg.dmabuf_id = net_devmem_iov_binding_id(niov);
 
 				offset += copy;
 				remaining_len -= copy;
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
  2024-10-16 18:52 ` [PATCH v6 01/15] net: prefix devmem specific helpers David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-23  7:20   ` Christoph Hellwig
  2024-10-16 18:52 ` [PATCH v6 03/15] net: page_pool: create hooks for custom page providers David Wei
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
which serves as a useful abstraction to share data and provide a
context. However, it's too devmem specific; we want to reuse it for
other memory providers, and for that we need to decouple net_iov from
devmem. Make net_iov point to a new base structure called
net_iov_area, which dmabuf_genpool_chunk_owner extends.
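
As a rough illustration (not part of this patch), another memory provider
would embed the new base struct and recover its own type with
container_of(), just as dmabuf_genpool_chunk_owner does below:

/* hypothetical provider-side area, mirroring dmabuf_genpool_chunk_owner */
struct my_zc_area {
	struct net_iov_area area;	/* base: niovs, num_niovs, base_virtual */
	/* provider-private state goes here */
};

static inline struct my_zc_area *to_my_area(const struct net_iov *niov)
{
	/* net_iov_owner() now returns the generic struct net_iov_area */
	return container_of(net_iov_owner(niov), struct my_zc_area, area);
}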

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/netmem.h | 21 ++++++++++++++++++++-
 net/core/devmem.c    | 25 +++++++++++++------------
 net/core/devmem.h    | 25 +++++++++----------------
 3 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 8a6e20be4b9d..3795ded30d2c 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -24,11 +24,20 @@ struct net_iov {
 	unsigned long __unused_padding;
 	unsigned long pp_magic;
 	struct page_pool *pp;
-	struct dmabuf_genpool_chunk_owner *owner;
+	struct net_iov_area *owner;
 	unsigned long dma_addr;
 	atomic_long_t pp_ref_count;
 };
 
+struct net_iov_area {
+	/* Array of net_iovs for this area. */
+	struct net_iov *niovs;
+	size_t num_niovs;
+
+	/* Offset into the dma-buf where this chunk starts.  */
+	unsigned long base_virtual;
+};
+
 /* These fields in struct page are used by the page_pool and net stack:
  *
  *        struct {
@@ -54,6 +63,16 @@ NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
 NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
 #undef NET_IOV_ASSERT_OFFSET
 
+static inline struct net_iov_area *net_iov_owner(const struct net_iov *niov)
+{
+	return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+	return niov - net_iov_owner(niov)->niovs;
+}
+
 /* netmem */
 
 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 858982858f81..5c10cf0e2a18 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -32,14 +32,15 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 {
 	struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
 
-	kvfree(owner->niovs);
+	kvfree(owner->area.niovs);
 	kfree(owner);
 }
 
 static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct dmabuf_genpool_chunk_owner *owner;
 
+	owner = net_devmem_iov_to_chunk_owner(niov);
 	return owner->base_dma_addr +
 	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
@@ -82,7 +83,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 
 	offset = dma_addr - owner->base_dma_addr;
 	index = offset / PAGE_SIZE;
-	niov = &owner->niovs[index];
+	niov = &owner->area.niovs[index];
 
 	niov->pp_magic = 0;
 	niov->pp = NULL;
@@ -250,9 +251,9 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->base_virtual = virtual;
+		owner->area.base_virtual = virtual;
 		owner->base_dma_addr = dma_addr;
-		owner->num_niovs = len / PAGE_SIZE;
+		owner->area.num_niovs = len / PAGE_SIZE;
 		owner->binding = binding;
 
 		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
@@ -264,17 +265,17 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->niovs = kvmalloc_array(owner->num_niovs,
-					      sizeof(*owner->niovs),
-					      GFP_KERNEL);
-		if (!owner->niovs) {
+		owner->area.niovs = kvmalloc_array(owner->area.num_niovs,
+						   sizeof(*owner->area.niovs),
+						   GFP_KERNEL);
+		if (!owner->area.niovs) {
 			err = -ENOMEM;
 			goto err_free_chunks;
 		}
 
-		for (i = 0; i < owner->num_niovs; i++) {
-			niov = &owner->niovs[i];
-			niov->owner = owner;
+		for (i = 0; i < owner->area.num_niovs; i++) {
+			niov = &owner->area.niovs[i];
+			niov->owner = &owner->area;
 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 						      net_devmem_get_dma_addr(niov));
 		}
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 99782ddeca40..a2b9913e9a17 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -10,6 +10,8 @@
 #ifndef _NET_DEVMEM_H
 #define _NET_DEVMEM_H
 
+#include <net/netmem.h>
+
 struct netlink_ext_ack;
 
 struct net_devmem_dmabuf_binding {
@@ -51,17 +53,11 @@ struct net_devmem_dmabuf_binding {
  * allocations from this chunk.
  */
 struct dmabuf_genpool_chunk_owner {
-	/* Offset into the dma-buf where this chunk starts.  */
-	unsigned long base_virtual;
+	struct net_iov_area area;
+	struct net_devmem_dmabuf_binding *binding;
 
 	/* dma_addr of the start of the chunk.  */
 	dma_addr_t base_dma_addr;
-
-	/* Array of net_iovs for this chunk. */
-	struct net_iov *niovs;
-	size_t num_niovs;
-
-	struct net_devmem_dmabuf_binding *binding;
 };
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
@@ -75,20 +71,17 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
-net_iov_owner(const struct net_iov *niov)
+net_devmem_iov_to_chunk_owner(const struct net_iov *niov)
 {
-	return niov->owner;
-}
+	struct net_iov_area *owner = net_iov_owner(niov);
 
-static inline unsigned int net_iov_idx(const struct net_iov *niov)
-{
-	return niov - net_iov_owner(niov)->niovs;
+	return container_of(owner, struct dmabuf_genpool_chunk_owner, area);
 }
 
 static inline struct net_devmem_dmabuf_binding *
 net_devmem_iov_binding(const struct net_iov *niov)
 {
-	return net_iov_owner(niov)->binding;
+	return net_devmem_iov_to_chunk_owner(niov)->binding;
 }
 
 static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
@@ -98,7 +91,7 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct net_iov_area *owner = net_iov_owner(niov);
 
 	return owner->base_virtual +
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 03/15] net: page_pool: create hooks for custom page providers
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
  2024-10-16 18:52 ` [PATCH v6 01/15] net: prefix devmem specific helpers David Wei
  2024-10-16 18:52 ` [PATCH v6 02/15] net: generalise net_iov chunk owners David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 04/15] net: prepare for non devmem TCP memory providers David Wei
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Jakub Kicinski <[email protected]>

Page providers which try to reuse the same pages will
need to hold onto the ref, even if the page gets released from
the pool: releasing the page from the pp just transfers
the "ownership" reference from the pp to the provider, and the provider
will wait for other references to be gone before feeding this
page back into the pool.
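
For illustration, a hypothetical provider would plug into the new hooks
roughly as below (names are made up and callback bodies are elided; the
converted dmabuf_devmem_ops in this patch is the real in-tree user):

static netmem_ref my_mp_alloc_netmems(struct page_pool *pool, gfp_t gfp)
{
	/* hand out a provider-owned netmem here */
	return 0;
}

static bool my_mp_release_netmem(struct page_pool *pool, netmem_ref netmem)
{
	/* keep the "ownership" ref; returning false stops the pp from
	 * put_page()ing the buffer
	 */
	return false;
}

static int my_mp_init(struct page_pool *pool)
{
	return 0;
}

static void my_mp_destroy(struct page_pool *pool)
{
}

static const struct memory_provider_ops my_mp_ops = {
	.init		= my_mp_init,
	.destroy	= my_mp_destroy,
	.alloc_netmems	= my_mp_alloc_netmems,
	.release_netmem	= my_mp_release_netmem,
};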

Signed-off-by: Jakub Kicinski <[email protected]>
[Pavel] Rebased, renamed callback, +converted devmem
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/page_pool/types.h |  9 +++++++++
 net/core/devmem.c             | 14 +++++++++++++-
 net/core/page_pool.c          | 17 +++++++++--------
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index c022c410abe3..8a35fe474adb 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -152,8 +152,16 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN	(4 * sizeof(long))
 
+struct memory_provider_ops {
+	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+};
+
 struct pp_memory_provider_params {
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 };
 
 struct page_pool {
@@ -215,6 +223,7 @@ struct page_pool {
 	struct ptr_ring ring;
 
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 5c10cf0e2a18..01738029e35c 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -26,6 +26,8 @@
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
+static const struct memory_provider_ops dmabuf_devmem_ops;
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 					       struct gen_pool_chunk *chunk,
 					       void *not_used)
@@ -117,6 +119,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(rxq->mp_params.mp_priv != binding);
 
 		rxq->mp_params.mp_priv = NULL;
+		rxq->mp_params.mp_ops = NULL;
 
 		rxq_idx = get_netdev_rx_queue_index(rxq);
 
@@ -142,7 +145,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 	}
 
 	rxq = __netif_get_rx_queue(dev, rxq_idx);
-	if (rxq->mp_params.mp_priv) {
+	if (rxq->mp_params.mp_ops) {
 		NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
 		return -EEXIST;
 	}
@@ -160,6 +163,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 		return err;
 
 	rxq->mp_params.mp_priv = binding;
+	rxq->mp_params.mp_ops = &dmabuf_devmem_ops;
 
 	err = netdev_rx_queue_restart(dev, rxq_idx);
 	if (err)
@@ -169,6 +173,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 
 err_xa_erase:
 	rxq->mp_params.mp_priv = NULL;
+	rxq->mp_params.mp_ops = NULL;
 	xa_erase(&binding->bound_rxqs, xa_idx);
 
 	return err;
@@ -388,3 +393,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	/* We don't want the page pool put_page()ing our net_iovs. */
 	return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init			= mp_dmabuf_devmem_init,
+	.destroy		= mp_dmabuf_devmem_destroy,
+	.alloc_netmems		= mp_dmabuf_devmem_alloc_netmems,
+	.release_netmem		= mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a813d30d2135..c21c5b9edc68 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -284,10 +284,11 @@ static int page_pool_init(struct page_pool *pool,
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
 					   pool->slow.queue_idx);
 		pool->mp_priv = rxq->mp_params.mp_priv;
+		pool->mp_ops = rxq->mp_params.mp_ops;
 	}
 
-	if (pool->mp_priv) {
-		err = mp_dmabuf_devmem_init(pool);
+	if (pool->mp_ops) {
+		err = pool->mp_ops->init(pool);
 		if (err) {
 			pr_warn("%s() mem-provider init failed %d\n", __func__,
 				err);
@@ -584,8 +585,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 		return netmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		netmem = pool->mp_ops->alloc_netmems(pool, gfp);
 	else
 		netmem = __page_pool_alloc_pages_slow(pool, gfp);
 	return netmem;
@@ -676,8 +677,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	bool put;
 
 	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		put = mp_dmabuf_devmem_release_page(pool, netmem);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_netmem(pool, netmem);
 	else
 		__page_pool_release_page_dma(pool, netmem);
 
@@ -1010,8 +1011,8 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_priv) {
-		mp_dmabuf_devmem_destroy(pool);
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
 		static_branch_dec(&page_pool_mem_providers);
 	}
 
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 04/15] net: prepare for non devmem TCP memory providers
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (2 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 03/15] net: page_pool: create hooks for custom page providers David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 05/15] net: page_pool: add ->scrub mem provider callback David Wei
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

There are a good number of places in generic paths assuming that the only
page pool memory provider is devmem TCP. As we want to reuse the net_iov
and provider infrastructure, we need to patch them up and explicitly check
the provider type when we branch into devmem TCP code.

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 net/core/devmem.c         | 10 ++++++++--
 net/core/devmem.h         |  8 ++++++++
 net/core/page_pool_user.c | 15 +++++++++------
 net/ipv4/tcp.c            |  6 ++++++
 4 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 01738029e35c..78983a98e5dc 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -28,6 +28,12 @@ static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
 static const struct memory_provider_ops dmabuf_devmem_ops;
 
+bool net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops)
+{
+	return ops == &dmabuf_devmem_ops;
+}
+EXPORT_SYMBOL_GPL(net_is_devmem_page_pool_ops);
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 					       struct gen_pool_chunk *chunk,
 					       void *not_used)
@@ -316,10 +322,10 @@ void dev_dmabuf_uninstall(struct net_device *dev)
 	unsigned int i;
 
 	for (i = 0; i < dev->real_num_rx_queues; i++) {
-		binding = dev->_rx[i].mp_params.mp_priv;
-		if (!binding)
+		if (dev->_rx[i].mp_params.mp_ops != &dmabuf_devmem_ops)
 			continue;
 
+		binding = dev->_rx[i].mp_params.mp_priv;
 		xa_for_each(&binding->bound_rxqs, xa_idx, rxq)
 			if (rxq == &dev->_rx[i]) {
 				xa_erase(&binding->bound_rxqs, xa_idx);
diff --git a/net/core/devmem.h b/net/core/devmem.h
index a2b9913e9a17..a3fdd66bb05b 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -116,6 +116,8 @@ struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
 
+bool net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops);
+
 #else
 struct net_devmem_dmabuf_binding;
 
@@ -168,6 +170,12 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
+
+static inline bool
+net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops)
+{
+	return false;
+}
 #endif
 
 #endif /* _NET_DEVMEM_H */
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 48335766c1bf..604862a73535 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -214,7 +214,7 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 		  const struct genl_info *info)
 {
-	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+	struct net_devmem_dmabuf_binding *binding;
 	size_t inflight, refsz;
 	void *hdr;
 
@@ -244,8 +244,11 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 			 pool->user.detach_time))
 		goto err_cancel;
 
-	if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
-		goto err_cancel;
+	if (net_is_devmem_page_pool_ops(pool->mp_ops)) {
+		binding = pool->mp_priv;
+		if (nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+			goto err_cancel;
+	}
 
 	genlmsg_end(rsp, hdr);
 
@@ -353,16 +356,16 @@ void page_pool_unlist(struct page_pool *pool)
 int page_pool_check_memory_provider(struct net_device *dev,
 				    struct netdev_rx_queue *rxq)
 {
-	struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
+	void *mp_priv = rxq->mp_params.mp_priv;
 	struct page_pool *pool;
 	struct hlist_node *n;
 
-	if (!binding)
+	if (!mp_priv)
 		return 0;
 
 	mutex_lock(&page_pools_lock);
 	hlist_for_each_entry_safe(pool, n, &dev->page_pools, user.list) {
-		if (pool->mp_priv != binding)
+		if (pool->mp_priv != mp_priv)
 			continue;
 
 		if (pool->slow.queue_idx == get_netdev_rx_queue_index(rxq)) {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index e928efc22f80..31e01da61c12 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -277,6 +277,7 @@
 #include <net/ip.h>
 #include <net/sock.h>
 #include <net/rstreason.h>
+#include <net/page_pool/types.h>
 
 #include <linux/uaccess.h>
 #include <asm/ioctls.h>
@@ -2476,6 +2477,11 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 			}
 
 			niov = skb_frag_net_iov(frag);
+			if (net_is_devmem_page_pool_ops(niov->pp->mp_ops)) {
+				err = -ENODEV;
+				goto out;
+			}
+
 			end = start + skb_frag_size(frag);
 			copy = end - offset;
 
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 05/15] net: page_pool: add ->scrub mem provider callback
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (3 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 04/15] net: prepare for non devmem TCP memory providers David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 06/15] net: page pool: add helper creating area from pages David Wei
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Some page pool memory providers like io_uring need to catch the point
when the page pool is asked to be destroyed. ->destroy is not enough
because it relies on the page pool to wait for its buffers first, but
for that to happen a provider might need to react, e.g. to collect all
buffers that are currently given to the user space.

Add a new provider ->scrub callback serving this purpose, called from
the pp's generic (cold) scrubbing path, i.e. page_pool_scrub().

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/page_pool/types.h | 1 +
 net/core/page_pool.c          | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 8a35fe474adb..fd0376ad0d26 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -157,6 +157,7 @@ struct memory_provider_ops {
 	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
 	int (*init)(struct page_pool *pool);
 	void (*destroy)(struct page_pool *pool);
+	void (*scrub)(struct page_pool *pool);
 };
 
 struct pp_memory_provider_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c21c5b9edc68..9a675e16e6a4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1038,6 +1038,9 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 
 static void page_pool_scrub(struct page_pool *pool)
 {
+	if (pool->mp_ops && pool->mp_ops->scrub)
+		pool->mp_ops->scrub(pool);
+
 	page_pool_empty_alloc_cache_once(pool);
 	pool->destroy_cnt++;
 
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 06/15] net: page pool: add helper creating area from pages
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (4 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 05/15] net: page_pool: add ->scrub mem provider callback David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 07/15] net: page_pool: introduce page_pool_mp_return_in_cache David Wei
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Add a helper that takes an array of pages and initialises the passed-in
memory provider's area with them, where each net_iov takes one page.
It's also responsible for setting up dma mappings.

We keep it in page_pool.c so as not to leak netmem details to outside
providers like io_uring, which don't have access to netmem_priv.h
and other private helpers.
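
A minimal usage sketch from a hypothetical provider (not part of this
patch), assuming the provider has already allocated its net_iov_area and
pinned one page per net_iov:

/* e.g. called from the provider's ->init() once pages[] is populated */
static int my_mp_map_area(struct page_pool *pool, struct net_iov_area *area,
			  struct page **pages)
{
	/* sets pp info and dma-maps each page, one net_iov per page */
	return page_pool_mp_init_paged_area(pool, area, pages);
}

/* and the inverse, e.g. from the provider's ->destroy() */
static void my_mp_unmap_area(struct page_pool *pool, struct net_iov_area *area)
{
	page_pool_mp_release_area(pool, area);
}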

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/page_pool/memory_provider.h | 10 ++++
 net/core/page_pool.c                    | 63 ++++++++++++++++++++++++-
 2 files changed, 71 insertions(+), 2 deletions(-)
 create mode 100644 include/net/page_pool/memory_provider.h

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
new file mode 100644
index 000000000000..83d7eec0058d
--- /dev/null
+++ b/include/net/page_pool/memory_provider.h
@@ -0,0 +1,10 @@
+#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
+#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
+
+int page_pool_mp_init_paged_area(struct page_pool *pool,
+				struct net_iov_area *area,
+				struct page **pages);
+void page_pool_mp_release_area(struct page_pool *pool,
+				struct net_iov_area *area);
+
+#endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9a675e16e6a4..8bd4a3c80726 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@
 
 #include <net/netdev_rx_queue.h>
 #include <net/page_pool/helpers.h>
+#include <net/page_pool/memory_provider.h>
 #include <net/xdp.h>
 
 #include <linux/dma-direction.h>
@@ -459,7 +460,8 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+static bool page_pool_dma_map_page(struct page_pool *pool, netmem_ref netmem,
+				   struct page *page)
 {
 	dma_addr_t dma;
 
@@ -468,7 +470,7 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
 	 * This mapping is kept for lifetime of page, until leaving pool.
 	 */
-	dma = dma_map_page_attrs(pool->p.dev, netmem_to_page(netmem), 0,
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
 				 (PAGE_SIZE << pool->p.order), pool->p.dma_dir,
 				 DMA_ATTR_SKIP_CPU_SYNC |
 					 DMA_ATTR_WEAK_ORDERING);
@@ -490,6 +492,11 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	return false;
 }
 
+static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+{
+	return page_pool_dma_map_page(pool, netmem, netmem_to_page(netmem));
+}
+
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 						 gfp_t gfp)
 {
@@ -1154,3 +1161,55 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+static void page_pool_release_page_dma(struct page_pool *pool,
+				       netmem_ref netmem)
+{
+	__page_pool_release_page_dma(pool, netmem);
+}
+
+int page_pool_mp_init_paged_area(struct page_pool *pool,
+				 struct net_iov_area *area,
+				 struct page **pages)
+{
+	struct net_iov *niov;
+	netmem_ref netmem;
+	int i, ret = 0;
+
+	if (!pool->dma_map)
+		return -EOPNOTSUPP;
+
+	for (i = 0; i < area->num_niovs; i++) {
+		niov = &area->niovs[i];
+		netmem = net_iov_to_netmem(niov);
+
+		page_pool_set_pp_info(pool, netmem);
+		if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
+			ret = -EINVAL;
+			goto err_unmap_dma;
+		}
+	}
+	return 0;
+
+err_unmap_dma:
+	while (i--) {
+		netmem = net_iov_to_netmem(&area->niovs[i]);
+		page_pool_release_page_dma(pool, netmem);
+	}
+	return ret;
+}
+
+void page_pool_mp_release_area(struct page_pool *pool,
+			       struct net_iov_area *area)
+{
+	int i;
+
+	if (!pool->dma_map)
+		return;
+
+	for (i = 0; i < area->num_niovs; i++) {
+		struct net_iov *niov = &area->niovs[i];
+
+		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
+	}
+}
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 07/15] net: page_pool: introduce page_pool_mp_return_in_cache
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (5 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 06/15] net: page pool: add helper creating area from pages David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-16 18:52 ` [PATCH v6 08/15] net: add helper executing custom callback from napi David Wei
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Add a helper that allows a page pool memory provider to efficiently
return a netmem into the pool's allocation cache from within its
allocation callback.
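
A rough sketch of how a provider's ->alloc_netmems() might use it; this is
an assumption-laden illustration (my_get_free_netmem() is a hypothetical
provider-private source, and it peeks at the pp alloc cache exposed in
page_pool/types.h), not how the io_uring provider later in the series is
required to do it:

static netmem_ref my_mp_alloc_netmems(struct page_pool *pool, gfp_t gfp)
{
	/* called from napi context, so the pp alloc cache is private to us */
	while (pool->alloc.count < PP_ALLOC_CACHE_REFILL) {
		netmem_ref netmem = my_get_free_netmem();	/* hypothetical */

		if (!netmem)
			break;
		page_pool_mp_return_in_cache(pool, netmem);
	}

	if (!pool->alloc.count)
		return 0;
	return pool->alloc.cache[--pool->alloc.count];
}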

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/page_pool/memory_provider.h |  4 ++++
 net/core/page_pool.c                    | 19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 83d7eec0058d..352b3a35d31c 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
 #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
 #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
 
@@ -7,4 +9,6 @@ int page_pool_mp_init_paged_area(struct page_pool *pool,
 void page_pool_mp_release_area(struct page_pool *pool,
 				struct net_iov_area *area);
 
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8bd4a3c80726..9078107c906d 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1213,3 +1213,22 @@ void page_pool_mp_release_area(struct page_pool *pool,
 		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
 	}
 }
+
+/*
+ * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
+ * @pool:	pool from which pages were allocated
+ * @netmem:	netmem to return
+ *
+ * Return already allocated and accounted netmem to the page pool's allocation
+ * cache. The function doesn't provide synchronisation and must only be called
+ * from the napi context.
+ */
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
+{
+	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
+		return;
+
+	page_pool_dma_sync_for_device(pool, netmem, -1);
+	page_pool_fragment_netmem(netmem, 1);
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (6 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 07/15] net: page_pool: introduce page_pool_mp_return_in_cache David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 14:25   ` Paolo Abeni
  2024-10-16 18:52 ` [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue David Wei
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

It's useful to have napi private bits and pieces like page pool's fast
allocating cache, so that the hot allocation path doesn't have to do any
additional synchronisation. In the case of the io_uring memory provider
introduced in the following patches, we keep the consumer end of the
io_uring refill queue private to napi as it's a hot path.

However, from time to time we need to synchronise with the napi, for
example to add more user memory or allocate fallback buffers. Add a
helper function napi_execute that allows running a custom callback from
under the napi context so that it can access and modify napi protected
parts of io_uring. It works similarly to busy polling and stops napi from
running in the meantime, so it's supposed to be a slow control path.
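
For illustration, a provider-side slow path might look roughly like the
sketch below; everything except napi_execute() itself is hypothetical:

struct my_refill_ctx {
	struct page_pool *pool;
	/* buffers to push, etc. */
};

static void my_refill_cb(void *data)
{
	struct my_refill_ctx *ctx = data;

	/* runs with the napi marked as busy polling, so it can safely
	 * touch napi-private state such as the pp alloc cache
	 */
	(void)ctx;
}

static void my_slow_refill(unsigned int napi_id, struct my_refill_ctx *ctx)
{
	napi_execute(napi_id, my_refill_cb, ctx);
}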

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/net/busy_poll.h |  6 ++++
 net/core/dev.c          | 77 ++++++++++++++++++++++++++++++++---------
 2 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index f03040baaefd..3fd9e65731e9 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -47,6 +47,7 @@ bool sk_busy_loop_end(void *p, unsigned long start_time);
 void napi_busy_loop(unsigned int napi_id,
 		    bool (*loop_end)(void *, unsigned long),
 		    void *loop_end_arg, bool prefer_busy_poll, u16 budget);
+void napi_execute(unsigned napi_id, void (*cb)(void *), void *cb_arg);
 
 void napi_busy_loop_rcu(unsigned int napi_id,
 			bool (*loop_end)(void *, unsigned long),
@@ -63,6 +64,11 @@ static inline bool sk_can_busy_loop(struct sock *sk)
 	return false;
 }
 
+static inline void napi_execute(unsigned napi_id,
+				void (*cb)(void *), void *cb_arg)
+{
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static inline unsigned long busy_loop_current_time(void)
diff --git a/net/core/dev.c b/net/core/dev.c
index c682173a7642..f3bd5fd56286 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6347,6 +6347,30 @@ enum {
 	NAPI_F_END_ON_RESCHED	= 2,
 };
 
+static inline bool napi_state_start_busy_polling(struct napi_struct *napi,
+						 unsigned flags)
+{
+	unsigned long val = READ_ONCE(napi->state);
+
+	/* If multiple threads are competing for this napi,
+	 * we avoid dirtying napi->state as much as we can.
+	 */
+	if (val & (NAPIF_STATE_DISABLE | NAPIF_STATE_SCHED |
+		   NAPIF_STATE_IN_BUSY_POLL))
+		goto fail;
+
+	if (cmpxchg(&napi->state, val,
+		    val | NAPIF_STATE_IN_BUSY_POLL |
+			  NAPIF_STATE_SCHED) != val)
+		goto fail;
+
+	return true;
+fail:
+	if (flags & NAPI_F_PREFER_BUSY_POLL)
+		set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
+	return false;
+}
+
 static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 			   unsigned flags, u16 budget)
 {
@@ -6422,24 +6446,8 @@ static void __napi_busy_loop(unsigned int napi_id,
 		local_bh_disable();
 		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 		if (!napi_poll) {
-			unsigned long val = READ_ONCE(napi->state);
-
-			/* If multiple threads are competing for this napi,
-			 * we avoid dirtying napi->state as much as we can.
-			 */
-			if (val & (NAPIF_STATE_DISABLE | NAPIF_STATE_SCHED |
-				   NAPIF_STATE_IN_BUSY_POLL)) {
-				if (flags & NAPI_F_PREFER_BUSY_POLL)
-					set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
+			if (!napi_state_start_busy_polling(napi, flags))
 				goto count;
-			}
-			if (cmpxchg(&napi->state, val,
-				    val | NAPIF_STATE_IN_BUSY_POLL |
-					  NAPIF_STATE_SCHED) != val) {
-				if (flags & NAPI_F_PREFER_BUSY_POLL)
-					set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
-				goto count;
-			}
 			have_poll_lock = netpoll_poll_lock(napi);
 			napi_poll = napi->poll;
 		}
@@ -6503,6 +6511,41 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_execute(unsigned napi_id,
+		  void (*cb)(void *), void *cb_arg)
+{
+	struct napi_struct *napi;
+	void *have_poll_lock = NULL;
+
+	guard(rcu)();
+	napi = napi_by_id(napi_id);
+	if (!napi)
+		return;
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+
+	for (;;) {
+		local_bh_disable();
+
+		if (napi_state_start_busy_polling(napi, 0)) {
+			have_poll_lock = netpoll_poll_lock(napi);
+			cb(cb_arg);
+			local_bh_enable();
+			busy_poll_stop(napi, have_poll_lock, 0, 1);
+			break;
+		}
+
+		local_bh_enable();
+		if (unlikely(need_resched()))
+			break;
+		cpu_relax();
+	}
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable();
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static void __napi_hash_add_with_id(struct napi_struct *napi,
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (7 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 08/15] net: add helper executing custom callback from napi David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 15:33   ` Jens Axboe
  2024-10-16 18:52 ` [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area David Wei
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: David Wei <[email protected]>

Add a new object called an interface queue (ifq) that represents a net
rx queue that has been configured for zero copy. Each ifq is registered
using a new registration opcode IORING_REGISTER_ZCRX_IFQ.

The refill queue is allocated by the kernel and mapped by userspace
using a new offset IORING_OFF_RQ_RING, in a similar fashion to the main
SQ/CQ. It is used by userspace to return buffers that it is done with,
which will then be reused by the netdev.

The main CQ ring is used to notify userspace of received data by using
the upper 16 bytes of a big CQE as a new struct io_uring_zcrx_cqe. Each
entry contains the offset + len to the data.

For now, each io_uring instance only has a single ifq.
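
To make the refill queue layout concrete, here is a rough userspace sketch
of returning one buffer. It is an assumption-laden illustration: it presumes
SQ-like semantics where userspace produces rqes and bumps the tail while the
kernel consumes from the head, and it skips the free-space check against the
head; 'rq_ring' is the IORING_OFF_RQ_RING mapping and 'reg' the registration
struct filled in by the kernel:

static void zcrx_return_buf(void *rq_ring, struct io_uring_zcrx_ifq_reg *reg,
			    __u64 off, __u32 len)
{
	struct io_uring_zcrx_rqe *rqes =
		(void *)((char *)rq_ring + reg->offsets.rqes);
	__u32 *ktail = (void *)((char *)rq_ring + reg->offsets.tail);
	__u32 mask = reg->rq_entries - 1;	/* entries are a power of two */
	__u32 tail = *ktail;			/* userspace is the only tail writer */

	rqes[tail & mask].off = off;		/* offset taken from the zcrx cqe */
	rqes[tail & mask].len = len;

	/* publish the new tail to the kernel */
	__atomic_store_n(ktail, tail + 1, __ATOMIC_RELEASE);
}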

Signed-off-by: David Wei <[email protected]>
---
 Kconfig                        |   2 +
 include/linux/io_uring_types.h |   3 +
 include/uapi/linux/io_uring.h  |  43 ++++++++++
 io_uring/KConfig               |  10 +++
 io_uring/Makefile              |   1 +
 io_uring/io_uring.c            |   7 ++
 io_uring/memmap.c              |   8 ++
 io_uring/register.c            |   7 ++
 io_uring/zcrx.c                | 143 +++++++++++++++++++++++++++++++++
 io_uring/zcrx.h                |  39 +++++++++
 10 files changed, 263 insertions(+)
 create mode 100644 io_uring/KConfig
 create mode 100644 io_uring/zcrx.c
 create mode 100644 io_uring/zcrx.h

diff --git a/Kconfig b/Kconfig
index 745bc773f567..529ea7694ba9 100644
--- a/Kconfig
+++ b/Kconfig
@@ -30,3 +30,5 @@ source "lib/Kconfig"
 source "lib/Kconfig.debug"
 
 source "Documentation/Kconfig"
+
+source "io_uring/KConfig"
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 9c7e1d3f06e5..f5cbc06cc0e6 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -39,6 +39,8 @@ enum io_uring_cmd_flags {
 	IO_URING_F_COMPAT		= (1 << 12),
 };
 
+struct io_zcrx_ifq;
+
 struct io_wq_work_node {
 	struct io_wq_work_node *next;
 };
@@ -373,6 +375,7 @@ struct io_ring_ctx {
 	struct io_alloc_cache		rsrc_node_cache;
 	struct wait_queue_head		rsrc_quiesce_wq;
 	unsigned			rsrc_quiesce;
+	struct io_zcrx_ifq		*ifq;
 
 	u32			pers_next;
 	struct xarray		personalities;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 86cb385fe0b5..d398e19f8eea 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -467,6 +467,8 @@ struct io_uring_cqe {
 #define IORING_OFF_PBUF_RING		0x80000000ULL
 #define IORING_OFF_PBUF_SHIFT		16
 #define IORING_OFF_MMAP_MASK		0xf8000000ULL
+#define IORING_OFF_RQ_RING		0x20000000ULL
+#define IORING_OFF_RQ_SHIFT		16
 
 /*
  * Filled with the offset for mmap(2)
@@ -615,6 +617,9 @@ enum io_uring_register_op {
 	/* send MSG_RING without having a ring */
 	IORING_REGISTER_SEND_MSG_RING		= 31,
 
+	/* register a netdev hw rx queue for zerocopy */
+	IORING_REGISTER_ZCRX_IFQ		= 32,
+
 	/* this goes last */
 	IORING_REGISTER_LAST,
 
@@ -845,6 +850,44 @@ enum io_uring_socket_op {
 	SOCKET_URING_OP_SETSOCKOPT,
 };
 
+/* Zero copy receive refill queue entry */
+struct io_uring_zcrx_rqe {
+	__u64	off;
+	__u32	len;
+	__u32	__pad;
+};
+
+struct io_uring_zcrx_cqe {
+	__u64	off;
+	__u64	__pad;
+};
+
+/* The bit from which area id is encoded into offsets */
+#define IORING_ZCRX_AREA_SHIFT	48
+#define IORING_ZCRX_AREA_MASK	(~(((__u64)1 << IORING_ZCRX_AREA_SHIFT) - 1))
+
+struct io_uring_zcrx_offsets {
+	__u32	head;
+	__u32	tail;
+	__u32	rqes;
+	__u32	mmap_sz;
+	__u64	__resv[2];
+};
+
+/*
+ * Argument for IORING_REGISTER_ZCRX_IFQ
+ */
+struct io_uring_zcrx_ifq_reg {
+	__u32	if_idx;
+	__u32	if_rxq;
+	__u32	rq_entries;
+	__u32	flags;
+
+	__u64	area_ptr; /* pointer to struct io_uring_zcrx_area_reg */
+	struct io_uring_zcrx_offsets offsets;
+	__u64	__resv[3];
+};
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/io_uring/KConfig b/io_uring/KConfig
new file mode 100644
index 000000000000..9e2a4beba1ef
--- /dev/null
+++ b/io_uring/KConfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# io_uring configuration
+#
+
+config IO_URING_ZCRX
+	def_bool y
+	depends on PAGE_POOL
+	depends on INET
+	depends on NET_RX_BUSY_POLL
diff --git a/io_uring/Makefile b/io_uring/Makefile
index 53167bef37d7..a95b0b8229c9 100644
--- a/io_uring/Makefile
+++ b/io_uring/Makefile
@@ -14,6 +14,7 @@ obj-$(CONFIG_IO_URING)		+= io_uring.o opdef.o kbuf.o rsrc.o notif.o \
 					epoll.o statx.o timeout.o fdinfo.o \
 					cancel.o waitid.o register.o \
 					truncate.o memmap.o
+obj-$(CONFIG_IO_URING_ZCRX)	+= zcrx.o
 obj-$(CONFIG_IO_WQ)		+= io-wq.o
 obj-$(CONFIG_FUTEX)		+= futex.o
 obj-$(CONFIG_NET_RX_BUSY_POLL) += napi.o
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d7ad4ea5f40b..fc43e6feb5c5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -97,6 +97,7 @@
 #include "uring_cmd.h"
 #include "msg_ring.h"
 #include "memmap.h"
+#include "zcrx.h"
 
 #include "timeout.h"
 #include "poll.h"
@@ -2739,6 +2740,7 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 		return;
 
 	mutex_lock(&ctx->uring_lock);
+	io_unregister_zcrx_ifqs(ctx);
 	if (ctx->buf_data)
 		__io_sqe_buffers_unregister(ctx);
 	if (ctx->file_data)
@@ -2910,6 +2912,11 @@ static __cold void io_ring_exit_work(struct work_struct *work)
 			io_cqring_overflow_kill(ctx);
 			mutex_unlock(&ctx->uring_lock);
 		}
+		if (ctx->ifq) {
+			mutex_lock(&ctx->uring_lock);
+			io_shutdown_zcrx_ifqs(ctx);
+			mutex_unlock(&ctx->uring_lock);
+		}
 
 		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
 			io_move_task_work_from_local(ctx);
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index a0f32a255fd1..4c384e8615f6 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -12,6 +12,7 @@
 
 #include "memmap.h"
 #include "kbuf.h"
+#include "zcrx.h"
 
 static void *io_mem_alloc_compound(struct page **pages, int nr_pages,
 				   size_t size, gfp_t gfp)
@@ -223,6 +224,10 @@ static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff,
 		io_put_bl(ctx, bl);
 		return ptr;
 		}
+	case IORING_OFF_RQ_RING:
+		if (!ctx->ifq)
+			return ERR_PTR(-EINVAL);
+		return ctx->ifq->rq_ring;
 	}
 
 	return ERR_PTR(-EINVAL);
@@ -261,6 +266,9 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 						ctx->n_sqe_pages);
 	case IORING_OFF_PBUF_RING:
 		return io_pbuf_mmap(file, vma);
+	case IORING_OFF_RQ_RING:
+		return io_uring_mmap_pages(ctx, vma, ctx->ifq->rqe_pages,
+						ctx->ifq->n_rqe_pages);
 	}
 
 	return -EINVAL;
diff --git a/io_uring/register.c b/io_uring/register.c
index 52b2f9b74af8..1fac52b14e3d 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -29,6 +29,7 @@
 #include "napi.h"
 #include "eventfd.h"
 #include "msg_ring.h"
+#include "zcrx.h"
 
 #define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
 				 IORING_REGISTER_LAST + IORING_OP_LAST)
@@ -549,6 +550,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_register_clone_buffers(ctx, arg);
 		break;
+	case IORING_REGISTER_ZCRX_IFQ:
+		ret = -EINVAL;
+		if (!arg || nr_args != 1)
+			break;
+		ret = io_register_zcrx_ifq(ctx, arg);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
new file mode 100644
index 000000000000..4c53fd4f7bb3
--- /dev/null
+++ b/io_uring/zcrx.c
@@ -0,0 +1,143 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/io_uring.h>
+
+#include <uapi/linux/io_uring.h>
+
+#include "io_uring.h"
+#include "kbuf.h"
+#include "memmap.h"
+#include "zcrx.h"
+
+#define IO_RQ_MAX_ENTRIES		32768
+
+static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq,
+				 struct io_uring_zcrx_ifq_reg *reg)
+{
+	size_t off, size;
+	void *ptr;
+
+	off = sizeof(struct io_uring);
+	size = off + sizeof(struct io_uring_zcrx_rqe) * reg->rq_entries;
+
+	ptr = io_pages_map(&ifq->rqe_pages, &ifq->n_rqe_pages, size);
+	if (IS_ERR(ptr))
+		return PTR_ERR(ptr);
+
+	ifq->rq_ring = (struct io_uring *)ptr;
+	ifq->rqes = (struct io_uring_zcrx_rqe *)(ptr + off);
+	return 0;
+}
+
+static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq)
+{
+	io_pages_unmap(ifq->rq_ring, &ifq->rqe_pages, &ifq->n_rqe_pages, true);
+	ifq->rq_ring = NULL;
+	ifq->rqes = NULL;
+}
+
+static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
+{
+	struct io_zcrx_ifq *ifq;
+
+	ifq = kzalloc(sizeof(*ifq), GFP_KERNEL);
+	if (!ifq)
+		return NULL;
+
+	ifq->if_rxq = -1;
+	ifq->ctx = ctx;
+	return ifq;
+}
+
+static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
+{
+	io_free_rbuf_ring(ifq);
+	kfree(ifq);
+}
+
+int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
+			  struct io_uring_zcrx_ifq_reg __user *arg)
+{
+	struct io_uring_zcrx_ifq_reg reg;
+	struct io_zcrx_ifq *ifq;
+	size_t ring_sz, rqes_sz;
+	int ret;
+
+	/*
+	 * 1. Interface queue allocation.
+	 * 2. It can observe data destined for sockets of other tasks.
+	 */
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	/* mandatory io_uring features for zc rx */
+	if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN &&
+	      ctx->flags & IORING_SETUP_CQE32))
+		return -EINVAL;
+	if (ctx->ifq)
+		return -EBUSY;
+	if (copy_from_user(&reg, arg, sizeof(reg)))
+		return -EFAULT;
+	if (reg.__resv[0] || reg.__resv[1] || reg.__resv[2])
+		return -EINVAL;
+	if (reg.if_rxq == -1 || !reg.rq_entries || reg.flags)
+		return -EINVAL;
+	if (reg.rq_entries > IO_RQ_MAX_ENTRIES) {
+		if (!(ctx->flags & IORING_SETUP_CLAMP))
+			return -EINVAL;
+		reg.rq_entries = IO_RQ_MAX_ENTRIES;
+	}
+	reg.rq_entries = roundup_pow_of_two(reg.rq_entries);
+
+	if (!reg.area_ptr)
+		return -EFAULT;
+
+	ifq = io_zcrx_ifq_alloc(ctx);
+	if (!ifq)
+		return -ENOMEM;
+
+	ret = io_allocate_rbuf_ring(ifq, &reg);
+	if (ret)
+		goto err;
+
+	ifq->rq_entries = reg.rq_entries;
+	ifq->if_rxq = reg.if_rxq;
+
+	ring_sz = sizeof(struct io_uring);
+	rqes_sz = sizeof(struct io_uring_zcrx_rqe) * ifq->rq_entries;
+	reg.offsets.mmap_sz = ring_sz + rqes_sz;
+	reg.offsets.rqes = ring_sz;
+	reg.offsets.head = offsetof(struct io_uring, head);
+	reg.offsets.tail = offsetof(struct io_uring, tail);
+
+	if (copy_to_user(arg, &reg, sizeof(reg))) {
+		ret = -EFAULT;
+		goto err;
+	}
+
+	ctx->ifq = ifq;
+	return 0;
+err:
+	io_zcrx_ifq_free(ifq);
+	return ret;
+}
+
+void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
+{
+	struct io_zcrx_ifq *ifq = ctx->ifq;
+
+	lockdep_assert_held(&ctx->uring_lock);
+
+	if (!ifq)
+		return;
+
+	ctx->ifq = NULL;
+	io_zcrx_ifq_free(ifq);
+}
+
+void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
+{
+	lockdep_assert_held(&ctx->uring_lock);
+}
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
new file mode 100644
index 000000000000..1f76eecac5fd
--- /dev/null
+++ b/io_uring/zcrx.h
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef IOU_ZC_RX_H
+#define IOU_ZC_RX_H
+
+#include <linux/io_uring_types.h>
+
+struct io_zcrx_ifq {
+	struct io_ring_ctx		*ctx;
+	struct net_device		*dev;
+	struct io_uring			*rq_ring;
+	struct io_uring_zcrx_rqe 	*rqes;
+	u32				rq_entries;
+
+	unsigned short			n_rqe_pages;
+	struct page			**rqe_pages;
+
+	u32				if_rxq;
+};
+
+#if defined(CONFIG_IO_URING_ZCRX)
+int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
+			 struct io_uring_zcrx_ifq_reg __user *arg);
+void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx);
+void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx);
+#else
+static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
+					struct io_uring_zcrx_ifq_reg __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+static inline void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
+{
+}
+static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
+{
+}
+#endif
+
+#endif
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (8 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 15:35   ` Jens Axboe
  2024-10-16 18:52 ` [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider David Wei
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: David Wei <[email protected]>

Add io_zcrx_area that represents a region of userspace memory that is
used for zero copy. During ifq registration, userspace passes in the
uaddr and len of userspace memory, which is then pinned by the kernel.
Each net_iov is mapped to one of the pinned pages.

The freelist is a spinlock-protected list that keeps track of all the
net_iovs/pages that aren't used.

For now, there is only one area per ifq and area registration happens
implicitly as part of ifq registration. There is no API for
adding/removing areas yet. The struct for area registration is there for
future extensibility once we support multiple areas and TCP devmem.
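
To illustrate the flow from the userspace side, an area could be
described roughly as in the sketch below. It assumes the uapi additions
from this series (struct io_uring_zcrx_area_reg and the area_ptr field
of the ifq registration struct); setup_zcrx_area() is only an
illustrative helper and skips most error handling:

#include <string.h>
#include <sys/mman.h>
#include <linux/io_uring.h>

static int setup_zcrx_area(struct io_uring_zcrx_ifq_reg *reg,
			   struct io_uring_zcrx_area_reg *area,
			   size_t area_len)
{
	/* addr and len must be page aligned; the kernel pins this memory */
	void *buf = mmap(NULL, area_len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return -1;

	memset(area, 0, sizeof(*area));
	area->addr = (__u64)(unsigned long)buf;
	area->len = area_len;
	/* rq_area_token is filled in by the kernel on success */

	/* picked up during ifq registration; must stay valid until then */
	reg->area_ptr = (__u64)(unsigned long)area;
	return 0;
}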

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 include/uapi/linux/io_uring.h |  9 ++++
 io_uring/rsrc.c               |  2 +-
 io_uring/rsrc.h               |  1 +
 io_uring/zcrx.c               | 93 ++++++++++++++++++++++++++++++++++-
 io_uring/zcrx.h               | 16 ++++++
 5 files changed, 118 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index d398e19f8eea..d43183264dcf 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -874,6 +874,15 @@ struct io_uring_zcrx_offsets {
 	__u64	__resv[2];
 };
 
+struct io_uring_zcrx_area_reg {
+	__u64	addr;
+	__u64	len;
+	__u64	rq_area_token;
+	__u32	flags;
+	__u32	__resv1;
+	__u64	__resv2[2];
+};
+
 /*
  * Argument for IORING_REGISTER_ZCRX_IFQ
  */
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 33a3d156a85b..4da644de8843 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -86,7 +86,7 @@ static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
 	return 0;
 }
 
-static int io_buffer_validate(struct iovec *iov)
+int io_buffer_validate(struct iovec *iov)
 {
 	unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
 
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 8ed588036210..0933dc99f41d 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -83,6 +83,7 @@ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
 			    unsigned size, unsigned type);
 int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
 			unsigned int size, unsigned int type);
+int io_buffer_validate(struct iovec *iov);
 
 static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
 {
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 4c53fd4f7bb3..a276572fe953 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -10,6 +10,7 @@
 #include "kbuf.h"
 #include "memmap.h"
 #include "zcrx.h"
+#include "rsrc.h"
 
 #define IO_RQ_MAX_ENTRIES		32768
 
@@ -38,6 +39,83 @@ static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq)
 	ifq->rqes = NULL;
 }
 
+static void io_zcrx_free_area(struct io_zcrx_area *area)
+{
+	if (area->freelist)
+		kvfree(area->freelist);
+	if (area->nia.niovs)
+		kvfree(area->nia.niovs);
+	if (area->pages) {
+		unpin_user_pages(area->pages, area->nia.num_niovs);
+		kvfree(area->pages);
+	}
+	kfree(area);
+}
+
+static int io_zcrx_create_area(struct io_ring_ctx *ctx,
+			       struct io_zcrx_ifq *ifq,
+			       struct io_zcrx_area **res,
+			       struct io_uring_zcrx_area_reg *area_reg)
+{
+	struct io_zcrx_area *area;
+	int i, ret, nr_pages;
+	struct iovec iov;
+
+	if (area_reg->flags || area_reg->rq_area_token)
+		return -EINVAL;
+	if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
+		return -EINVAL;
+	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
+		return -EINVAL;
+
+	iov.iov_base = u64_to_user_ptr(area_reg->addr);
+	iov.iov_len = area_reg->len;
+	ret = io_buffer_validate(&iov);
+	if (ret)
+		return ret;
+
+	ret = -ENOMEM;
+	area = kzalloc(sizeof(*area), GFP_KERNEL);
+	if (!area)
+		goto err;
+
+	area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
+				   &nr_pages);
+	if (IS_ERR(area->pages)) {
+		ret = PTR_ERR(area->pages);
+		area->pages = NULL;
+		goto err;
+	}
+	area->nia.num_niovs = nr_pages;
+
+	area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
+					 GFP_KERNEL | __GFP_ZERO);
+	if (!area->nia.niovs)
+		goto err;
+
+	area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
+					GFP_KERNEL | __GFP_ZERO);
+	if (!area->freelist)
+		goto err;
+
+	for (i = 0; i < nr_pages; i++) {
+		area->freelist[i] = i;
+	}
+
+	area->free_count = nr_pages;
+	area->ifq = ifq;
+	/* we're only supporting one area per ifq for now */
+	area->area_id = 0;
+	area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
+	spin_lock_init(&area->freelist_lock);
+	*res = area;
+	return 0;
+err:
+	if (area)
+		io_zcrx_free_area(area);
+	return ret;
+}
+
 static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 {
 	struct io_zcrx_ifq *ifq;
@@ -53,6 +131,9 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 {
+	if (ifq->area)
+		io_zcrx_free_area(ifq->area);
+
 	io_free_rbuf_ring(ifq);
 	kfree(ifq);
 }
@@ -60,6 +141,7 @@ static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 			  struct io_uring_zcrx_ifq_reg __user *arg)
 {
+	struct io_uring_zcrx_area_reg area;
 	struct io_uring_zcrx_ifq_reg reg;
 	struct io_zcrx_ifq *ifq;
 	size_t ring_sz, rqes_sz;
@@ -91,7 +173,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	}
 	reg.rq_entries = roundup_pow_of_two(reg.rq_entries);
 
-	if (!reg.area_ptr)
+	if (copy_from_user(&area, u64_to_user_ptr(reg.area_ptr), sizeof(area)))
 		return -EFAULT;
 
 	ifq = io_zcrx_ifq_alloc(ctx);
@@ -102,6 +184,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	if (ret)
 		goto err;
 
+	ret = io_zcrx_create_area(ctx, ifq, &ifq->area, &area);
+	if (ret)
+		goto err;
+
 	ifq->rq_entries = reg.rq_entries;
 	ifq->if_rxq = reg.if_rxq;
 
@@ -116,7 +202,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		ret = -EFAULT;
 		goto err;
 	}
-
+	if (copy_to_user(u64_to_user_ptr(reg.area_ptr), &area, sizeof(area))) {
+		ret = -EFAULT;
+		goto err;
+	}
 	ctx->ifq = ifq;
 	return 0;
 err:
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 1f76eecac5fd..a8db61498c67 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -3,10 +3,26 @@
 #define IOU_ZC_RX_H
 
 #include <linux/io_uring_types.h>
+#include <net/page_pool/types.h>
+
+struct io_zcrx_area {
+	struct net_iov_area	nia;
+	struct io_zcrx_ifq	*ifq;
+
+	u16			area_id;
+	struct page		**pages;
+
+	/* freelist */
+	spinlock_t		freelist_lock ____cacheline_aligned_in_smp;
+	u32			free_count;
+	u32			*freelist;
+};
 
 struct io_zcrx_ifq {
 	struct io_ring_ctx		*ctx;
 	struct net_device		*dev;
+	struct io_zcrx_area		*area;
+
 	struct io_uring			*rq_ring;
 	struct io_uring_zcrx_rqe 	*rqes;
 	u32				rq_entries;
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (9 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 15:46   ` Jens Axboe
  2024-10-16 18:52 ` [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request David Wei
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

Implement a page pool memory provider for io_uring to receive in a
zero copy fashion. For that, the provider allocates user pages wrapped
into struct net_iovs, which are stored in a previously registered
struct net_iov_area.

Unlike with traditional receives, where pages from a page pool can be
deallocated right after the user receives data, e.g. via recv(2), we
extend the lifetime by recycling buffers only after user space
acknowledges via the refill queue that it's done processing the data.
Before handing buffers to the user, we mark them by bumping the refcount
by a bias value IO_ZC_RX_UREF, which is checked when the buffer is
returned. When the corresponding io_uring instance and/or page pool are
destroyed, we'll force back all buffers that are currently in user space
in ->io_pp_zc_scrub by clearing the bias.

Refcounting and lifetime:

Initially, all buffers are considered unallocated and stored in
->freelist, at which point they are not yet directly exposed to the core
page pool code and not accounted to page pool's pages_state_hold_cnt.
The ->alloc_netmems callback will allocate them by placing them into the
page pool's cache, setting the refcount to 1 as usual and adjusting
pages_state_hold_cnt.

Then, either the buffer is dropped and returns to the page pool's
->freelist via io_pp_zc_release_netmem, in which case the page pool will
match hold_cnt for us with ->pages_state_release_cnt. Or, more likely,
the buffer will go through the network/protocol stacks and end up in the
corresponding socket's receive queue. From there the user can get it via
a new io_uring request implemented in the following patches. As
mentioned above, before giving a buffer to the user we bump the refcount
by IO_ZC_RX_UREF.

Once the user is done processing a buffer, it must return it via the
refill queue, from where our ->alloc_netmems implementation can grab it,
check references, put IO_ZC_RX_UREF, and recycle the buffer if there are
no more users left. As we place such buffers right back into the page
pool's fast cache and they didn't go through the normal pp release path,
they are still considered "allocated" and no pp hold_cnt adjustment is
required. For the same reason we dma sync buffers for the device in
io_zc_add_pp_cache().
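
As a sketch of the userspace side of this protocol, returning a buffer
could look like the code below. The zcrx_refill_ring struct is a
hypothetical bookkeeping structure built from the mmap'ed refill ring
and the offsets returned at registration; only the off/__pad rqe fields
from this series are assumed:

#include <linux/io_uring.h>

/* hypothetical userspace view of the mapped refill ring */
struct zcrx_refill_ring {
	unsigned int *khead;			/* written by the kernel */
	unsigned int *ktail;			/* written by userspace */
	struct io_uring_zcrx_rqe *rqes;
	unsigned int rq_entries;		/* power of two */
};

static void zcrx_return_buffer(struct zcrx_refill_ring *rq, __u64 cqe_off)
{
	unsigned int mask = rq->rq_entries - 1;
	unsigned int tail = *rq->ktail;
	struct io_uring_zcrx_rqe *rqe = &rq->rqes[tail & mask];

	/* a full implementation also checks *khead so that no more than
	 * rq_entries rqes are ever outstanding */
	rqe->off = cqe_off;	/* area id + offset, as posted in the cqe */
	rqe->__pad = 0;
	/* publish the rqe before the new tail; the kernel loads the tail
	 * with acquire semantics */
	__atomic_store_n(rq->ktail, tail + 1, __ATOMIC_RELEASE);
}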

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 io_uring/zcrx.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++
 io_uring/zcrx.h |   5 ++
 2 files changed, 220 insertions(+)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index a276572fe953..aad35676207e 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -2,7 +2,12 @@
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/mm.h>
+#include <linux/nospec.h>
+#include <linux/netdevice.h>
 #include <linux/io_uring.h>
+#include <net/page_pool/helpers.h>
+#include <net/page_pool/memory_provider.h>
+#include <trace/events/page_pool.h>
 
 #include <uapi/linux/io_uring.h>
 
@@ -14,6 +19,16 @@
 
 #define IO_RQ_MAX_ENTRIES		32768
 
+__maybe_unused
+static const struct memory_provider_ops io_uring_pp_zc_ops;
+
+static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
+{
+	struct net_iov_area *owner = net_iov_owner(niov);
+
+	return container_of(owner, struct io_zcrx_area, nia);
+}
+
 static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq,
 				 struct io_uring_zcrx_ifq_reg *reg)
 {
@@ -99,6 +114,9 @@ static int io_zcrx_create_area(struct io_ring_ctx *ctx,
 		goto err;
 
 	for (i = 0; i < nr_pages; i++) {
+		struct net_iov *niov = &area->nia.niovs[i];
+
+		niov->owner = &area->nia;
 		area->freelist[i] = i;
 	}
 
@@ -230,3 +248,200 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 	lockdep_assert_held(&ctx->uring_lock);
 }
+
+static bool io_zcrx_niov_put(struct net_iov *niov, int nr)
+{
+	return atomic_long_sub_and_test(nr, &niov->pp_ref_count);
+}
+
+static bool io_zcrx_put_niov_uref(struct net_iov *niov)
+{
+	if (atomic_long_read(&niov->pp_ref_count) < IO_ZC_RX_UREF)
+		return false;
+
+	return io_zcrx_niov_put(niov, IO_ZC_RX_UREF);
+}
+
+static inline void io_zc_add_pp_cache(struct page_pool *pp,
+				      struct net_iov *niov)
+{
+}
+
+static inline u32 io_zcrx_rqring_entries(struct io_zcrx_ifq *ifq)
+{
+	u32 entries;
+
+	entries = smp_load_acquire(&ifq->rq_ring->tail) - ifq->cached_rq_head;
+	return min(entries, ifq->rq_entries);
+}
+
+static struct io_uring_zcrx_rqe *io_zcrx_get_rqe(struct io_zcrx_ifq *ifq,
+						 unsigned mask)
+{
+	unsigned int idx = ifq->cached_rq_head++ & mask;
+
+	return &ifq->rqes[idx];
+}
+
+static void io_zcrx_ring_refill(struct page_pool *pp,
+				struct io_zcrx_ifq *ifq)
+{
+	unsigned int entries = io_zcrx_rqring_entries(ifq);
+	unsigned int mask = ifq->rq_entries - 1;
+
+	entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL - pp->alloc.count);
+	if (unlikely(!entries))
+		return;
+
+	do {
+		struct io_uring_zcrx_rqe *rqe = io_zcrx_get_rqe(ifq, mask);
+		struct io_zcrx_area *area;
+		struct net_iov *niov;
+		unsigned niov_idx, area_idx;
+
+		area_idx = rqe->off >> IORING_ZCRX_AREA_SHIFT;
+		niov_idx = (rqe->off & ~IORING_ZCRX_AREA_MASK) / PAGE_SIZE;
+
+		if (unlikely(rqe->__pad || area_idx))
+			continue;
+		area = ifq->area;
+
+		if (unlikely(niov_idx >= area->nia.num_niovs))
+			continue;
+		niov_idx = array_index_nospec(niov_idx, area->nia.num_niovs);
+
+		niov = &area->nia.niovs[niov_idx];
+		if (!io_zcrx_put_niov_uref(niov))
+			continue;
+		page_pool_mp_return_in_cache(pp, net_iov_to_netmem(niov));
+	} while (--entries);
+
+	smp_store_release(&ifq->rq_ring->head, ifq->cached_rq_head);
+}
+
+static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq)
+{
+	struct io_zcrx_area *area = ifq->area;
+
+	spin_lock_bh(&area->freelist_lock);
+	while (area->free_count && pp->alloc.count < PP_ALLOC_CACHE_REFILL) {
+		struct net_iov *niov;
+		u32 pgid;
+
+		pgid = area->freelist[--area->free_count];
+		niov = &area->nia.niovs[pgid];
+
+		page_pool_mp_return_in_cache(pp, net_iov_to_netmem(niov));
+
+		pp->pages_state_hold_cnt++;
+		trace_page_pool_state_hold(pp, net_iov_to_netmem(niov),
+					   pp->pages_state_hold_cnt);
+	}
+	spin_unlock_bh(&area->freelist_lock);
+}
+
+static void io_zcrx_recycle_niov(struct net_iov *niov)
+{
+	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
+
+	spin_lock_bh(&area->freelist_lock);
+	area->freelist[area->free_count++] = net_iov_idx(niov);
+	spin_unlock_bh(&area->freelist_lock);
+}
+
+static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp)
+{
+	struct io_zcrx_ifq *ifq = pp->mp_priv;
+
+	/* pp should already be ensuring that */
+	if (unlikely(pp->alloc.count))
+		goto out_return;
+
+	io_zcrx_ring_refill(pp, ifq);
+	if (likely(pp->alloc.count))
+		goto out_return;
+
+	io_zcrx_refill_slow(pp, ifq);
+	if (!pp->alloc.count)
+		return 0;
+out_return:
+	return pp->alloc.cache[--pp->alloc.count];
+}
+
+static bool io_pp_zc_release_netmem(struct page_pool *pp, netmem_ref netmem)
+{
+	struct net_iov *niov;
+
+	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
+		return false;
+
+	niov = netmem_to_net_iov(netmem);
+
+	if (io_zcrx_niov_put(niov, 1))
+		io_zcrx_recycle_niov(niov);
+	return false;
+}
+
+static void io_pp_zc_scrub(struct page_pool *pp)
+{
+	struct io_zcrx_ifq *ifq = pp->mp_priv;
+	struct io_zcrx_area *area = ifq->area;
+	int i;
+
+	/* Reclaim back all buffers given to the user space. */
+	for (i = 0; i < area->nia.num_niovs; i++) {
+		struct net_iov *niov = &area->nia.niovs[i];
+		int count;
+
+		if (!io_zcrx_put_niov_uref(niov))
+			continue;
+		io_zcrx_recycle_niov(niov);
+
+		count = atomic_inc_return_relaxed(&pp->pages_state_release_cnt);
+		trace_page_pool_state_release(pp, net_iov_to_netmem(niov), count);
+	}
+}
+
+static int io_pp_zc_init(struct page_pool *pp)
+{
+	struct io_zcrx_ifq *ifq = pp->mp_priv;
+	struct io_zcrx_area *area = ifq->area;
+	int ret;
+
+	if (!ifq)
+		return -EINVAL;
+	if (pp->p.order != 0)
+		return -EINVAL;
+	if (!pp->p.napi)
+		return -EINVAL;
+
+	ret = page_pool_mp_init_paged_area(pp, &area->nia, area->pages);
+	if (ret)
+		return ret;
+
+	percpu_ref_get(&ifq->ctx->refs);
+	ifq->pp = pp;
+	return 0;
+}
+
+static void io_pp_zc_destroy(struct page_pool *pp)
+{
+	struct io_zcrx_ifq *ifq = pp->mp_priv;
+	struct io_zcrx_area *area = ifq->area;
+
+	page_pool_mp_release_area(pp, &ifq->area->nia);
+
+	ifq->pp = NULL;
+
+	if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs))
+		return;
+	percpu_ref_put(&ifq->ctx->refs);
+}
+
+static const struct memory_provider_ops io_uring_pp_zc_ops = {
+	.alloc_netmems		= io_pp_zc_alloc_netmems,
+	.release_netmem		= io_pp_zc_release_netmem,
+	.init			= io_pp_zc_init,
+	.destroy		= io_pp_zc_destroy,
+	.scrub			= io_pp_zc_scrub,
+};
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index a8db61498c67..464b4bd89b64 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -5,6 +5,9 @@
 #include <linux/io_uring_types.h>
 #include <net/page_pool/types.h>
 
+#define IO_ZC_RX_UREF			0x10000
+#define IO_ZC_RX_KREF_MASK		(IO_ZC_RX_UREF - 1)
+
 struct io_zcrx_area {
 	struct net_iov_area	nia;
 	struct io_zcrx_ifq	*ifq;
@@ -22,10 +25,12 @@ struct io_zcrx_ifq {
 	struct io_ring_ctx		*ctx;
 	struct net_device		*dev;
 	struct io_zcrx_area		*area;
+	struct page_pool		*pp;
 
 	struct io_uring			*rq_ring;
 	struct io_uring_zcrx_rqe 	*rqes;
 	u32				rq_entries;
+	u32				cached_rq_head;
 
 	unsigned short			n_rqe_pages;
 	struct page			**rqe_pages;
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (10 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 16:05   ` Jens Axboe
  2024-10-16 18:52 ` [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue David Wei
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Add io_uring opcode OP_RECV_ZC for doing zero copy reads out of a
socket. Only the connection should land on the specific rx queue set up
for zero copy, and the socket must be handled by the io_uring instance
that the rx queue was registered for zero copy with. That's because
net_iovs / buffers from our queue cannot be read by outside
applications, and zero copy is not possible if traffic for the zero copy
connection goes to another queue. This coordination is outside of the
scope of this patch series. Also, any traffic directed to the zero copy
enabled queue is immediately visible to the application, which is why
CAP_NET_ADMIN is required at the registration step.

Of course, no data is actually read out of the socket; it has already
been copied by the netdev into userspace memory via DMA. OP_RECV_ZC
reads skbs out of the socket and checks that their frags are indeed
net_iovs that belong to io_uring. A cqe is queued for each one of these
frags.

Recall that each cqe is a big cqe, with the top half being an
io_uring_zcrx_cqe. The cqe res field contains the len or error. The
lower IORING_ZCRX_AREA_SHIFT bits of the struct io_uring_zcrx_cqe::off
field contain the offset relative to the start of the zero copy area.
The upper part of the off field is trivially zero, and will be used
to carry the area id.

For now, there is no limit as to how much work each OP_RECV_ZC request
does. It will attempt to drain a socket of all available data. This
request always operates in multishot mode.
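
To illustrate how such completions could be consumed, a rough sketch is
below. It assumes CQE32 rings, that the IORING_ZCRX_AREA_SHIFT/MASK
constants from this series are visible to userspace, and that
process_data() and area_base stand in for application logic:

#include <linux/io_uring.h>

/* hypothetical application consumer */
extern void process_data(const void *data, int len);

static void handle_zcrx_cqe(const struct io_uring_cqe *cqe,
			    const unsigned char *area_base)
{
	/* the upper half of the 32-byte cqe holds the zcrx part */
	const struct io_uring_zcrx_cqe *rcqe =
			(const struct io_uring_zcrx_cqe *)(cqe + 1);
	__u64 off = rcqe->off & ~IORING_ZCRX_AREA_MASK;
	__u64 area_id = rcqe->off >> IORING_ZCRX_AREA_SHIFT;

	if (cqe->res <= 0)
		return;		/* error or nothing received */

	(void)area_id;		/* single area, so always zero here */
	process_data(area_base + off, cqe->res);
	/* once done, the buffer goes back via the refill ring */
}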

Signed-off-by: David Wei <[email protected]>
---
 include/uapi/linux/io_uring.h |   2 +
 io_uring/io_uring.h           |  10 ++
 io_uring/net.c                |  71 +++++++++++++
 io_uring/opdef.c              |  16 +++
 io_uring/zcrx.c               | 181 +++++++++++++++++++++++++++++++++-
 io_uring/zcrx.h               |  11 +++
 6 files changed, 290 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index d43183264dcf..0dcb239ebc59 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -87,6 +87,7 @@ struct io_uring_sqe {
 	union {
 		__s32	splice_fd_in;
 		__u32	file_index;
+		__u32	zcrx_ifq_idx;
 		__u32	optlen;
 		struct {
 			__u16	addr_len;
@@ -259,6 +260,7 @@ enum io_uring_op {
 	IORING_OP_FTRUNCATE,
 	IORING_OP_BIND,
 	IORING_OP_LISTEN,
+	IORING_OP_RECV_ZC,
 
 	/* this goes last, obviously */
 	IORING_OP_LAST,
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 9d70b2cf7b1e..bb7f414e7835 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -176,6 +176,16 @@ static inline bool io_get_cqe(struct io_ring_ctx *ctx, struct io_uring_cqe **ret
 	return io_get_cqe_overflow(ctx, ret, false);
 }
 
+static inline bool io_defer_get_uncommited_cqe(struct io_ring_ctx *ctx,
+					       struct io_uring_cqe **cqe_ret)
+{
+	io_lockdep_assert_cq_locked(ctx);
+
+	ctx->cq_extra++;
+	ctx->submit_state.cq_flush = true;
+	return io_get_cqe(ctx, cqe_ret);
+}
+
 static __always_inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
 					    struct io_kiocb *req)
 {
diff --git a/io_uring/net.c b/io_uring/net.c
index 18507658a921..9716ecdcb570 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -16,6 +16,7 @@
 #include "net.h"
 #include "notif.h"
 #include "rsrc.h"
+#include "zcrx.h"
 
 #if defined(CONFIG_NET)
 struct io_shutdown {
@@ -89,6 +90,13 @@ struct io_sr_msg {
  */
 #define MULTISHOT_MAX_RETRY	32
 
+struct io_recvzc {
+	struct file			*file;
+	unsigned			msg_flags;
+	u16				flags;
+	struct io_zcrx_ifq		*ifq;
+};
+
 int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown);
@@ -1202,6 +1210,69 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 	return ret;
 }
 
+int io_recvzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_recvzc *zc = io_kiocb_to_cmd(req, struct io_recvzc);
+	unsigned ifq_idx;
+
+	if (unlikely(sqe->file_index || sqe->addr2 || sqe->addr ||
+		     sqe->len || sqe->addr3))
+		return -EINVAL;
+
+	ifq_idx = READ_ONCE(sqe->zcrx_ifq_idx);
+	if (ifq_idx != 0)
+		return -EINVAL;
+	zc->ifq = req->ctx->ifq;
+	if (!zc->ifq)
+		return -EINVAL;
+
+	zc->flags = READ_ONCE(sqe->ioprio);
+	zc->msg_flags = READ_ONCE(sqe->msg_flags);
+	if (zc->msg_flags)
+		return -EINVAL;
+	if (zc->flags & ~(IORING_RECVSEND_POLL_FIRST | IORING_RECV_MULTISHOT))
+		return -EINVAL;
+	/* multishot required */
+	if (!(zc->flags & IORING_RECV_MULTISHOT))
+		return -EINVAL;
+	/* All data completions are posted as aux CQEs. */
+	req->flags |= REQ_F_APOLL_MULTISHOT;
+
+	return 0;
+}
+
+int io_recvzc(struct io_kiocb *req, unsigned int issue_flags)
+{
+	struct io_recvzc *zc = io_kiocb_to_cmd(req, struct io_recvzc);
+	struct socket *sock;
+	int ret;
+
+	if (!(req->flags & REQ_F_POLLED) &&
+	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
+		return -EAGAIN;
+
+	sock = sock_from_file(req->file);
+	if (unlikely(!sock))
+		return -ENOTSOCK;
+
+	ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT);
+	if (unlikely(ret <= 0) && ret != -EAGAIN) {
+		if (ret == -ERESTARTSYS)
+			ret = -EINTR;
+
+		req_set_fail(req);
+		io_req_set_res(req, ret, 0);
+
+		if (issue_flags & IO_URING_F_MULTISHOT)
+			return IOU_STOP_MULTISHOT;
+		return IOU_OK;
+	}
+
+	if (issue_flags & IO_URING_F_MULTISHOT)
+		return IOU_ISSUE_SKIP_COMPLETE;
+	return -EAGAIN;
+}
+
 void io_send_zc_cleanup(struct io_kiocb *req)
 {
 	struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index a2be3bbca5ff..599eb3ea5ff4 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -36,6 +36,7 @@
 #include "waitid.h"
 #include "futex.h"
 #include "truncate.h"
+#include "zcrx.h"
 
 static int io_no_issue(struct io_kiocb *req, unsigned int issue_flags)
 {
@@ -513,6 +514,18 @@ const struct io_issue_def io_issue_defs[] = {
 		.async_size		= sizeof(struct io_async_msghdr),
 #else
 		.prep			= io_eopnotsupp_prep,
+#endif
+	},
+	[IORING_OP_RECV_ZC] = {
+		.needs_file		= 1,
+		.unbound_nonreg_file	= 1,
+		.pollin			= 1,
+		.ioprio			= 1,
+#if defined(CONFIG_NET)
+		.prep			= io_recvzc_prep,
+		.issue			= io_recvzc,
+#else
+		.prep			= io_eopnotsupp_prep,
 #endif
 	},
 };
@@ -742,6 +755,9 @@ const struct io_cold_def io_cold_defs[] = {
 	[IORING_OP_LISTEN] = {
 		.name			= "LISTEN",
 	},
+	[IORING_OP_RECV_ZC] = {
+		.name			= "RECV_ZC",
+	},
 };
 
 const char *io_uring_get_opcode(u8 opcode)
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index aad35676207e..477b0d1b7b91 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -8,6 +8,8 @@
 #include <net/page_pool/helpers.h>
 #include <net/page_pool/memory_provider.h>
 #include <trace/events/page_pool.h>
+#include <net/tcp.h>
+#include <net/rps.h>
 
 #include <uapi/linux/io_uring.h>
 
@@ -19,7 +21,12 @@
 
 #define IO_RQ_MAX_ENTRIES		32768
 
-__maybe_unused
+struct io_zcrx_args {
+	struct io_kiocb		*req;
+	struct io_zcrx_ifq	*ifq;
+	struct socket		*sock;
+};
+
 static const struct memory_provider_ops io_uring_pp_zc_ops;
 
 static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
@@ -249,6 +256,11 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 	lockdep_assert_held(&ctx->uring_lock);
 }
 
+static void io_zcrx_get_buf_uref(struct net_iov *niov)
+{
+	atomic_long_add(IO_ZC_RX_UREF, &niov->pp_ref_count);
+}
+
 static bool io_zcrx_niov_put(struct net_iov *niov, int nr)
 {
 	return atomic_long_sub_and_test(nr, &niov->pp_ref_count);
@@ -445,3 +457,170 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
 	.destroy		= io_pp_zc_destroy,
 	.scrub			= io_pp_zc_scrub,
 };
+
+static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov,
+			      struct io_zcrx_ifq *ifq, int off, int len)
+{
+	struct io_uring_zcrx_cqe *rcqe;
+	struct io_zcrx_area *area;
+	struct io_uring_cqe *cqe;
+	u64 offset;
+
+	if (!io_defer_get_uncommited_cqe(req->ctx, &cqe))
+		return false;
+
+	cqe->user_data = req->cqe.user_data;
+	cqe->res = len;
+	cqe->flags = IORING_CQE_F_MORE;
+
+	area = io_zcrx_iov_to_area(niov);
+	offset = off + (net_iov_idx(niov) << PAGE_SHIFT);
+	rcqe = (struct io_uring_zcrx_cqe *)(cqe + 1);
+	rcqe->off = offset + ((u64)area->area_id << IORING_ZCRX_AREA_SHIFT);
+	rcqe->__pad = 0;
+	return true;
+}
+
+static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+			     const skb_frag_t *frag, int off, int len)
+{
+	struct net_iov *niov;
+
+	off += skb_frag_off(frag);
+
+	if (unlikely(!skb_frag_is_net_iov(frag)))
+		return -EOPNOTSUPP;
+
+	niov = netmem_to_net_iov(frag->netmem);
+	if (niov->pp->mp_ops != &io_uring_pp_zc_ops ||
+	    niov->pp->mp_priv != ifq)
+		return -EFAULT;
+
+	if (!io_zcrx_queue_cqe(req, niov, ifq, off, len))
+		return -ENOSPC;
+	io_zcrx_get_buf_uref(niov);
+	return len;
+}
+
+static int
+io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
+		 unsigned int offset, size_t len)
+{
+	struct io_zcrx_args *args = desc->arg.data;
+	struct io_zcrx_ifq *ifq = args->ifq;
+	struct io_kiocb *req = args->req;
+	struct sk_buff *frag_iter;
+	unsigned start, start_off;
+	int i, copy, end, off;
+	int ret = 0;
+
+	start = skb_headlen(skb);
+	start_off = offset;
+
+	if (offset < start)
+		return -EOPNOTSUPP;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		const skb_frag_t *frag;
+
+		if (WARN_ON(start > offset + len))
+			return -EFAULT;
+
+		frag = &skb_shinfo(skb)->frags[i];
+		end = start + skb_frag_size(frag);
+
+		if (offset < end) {
+			copy = end - offset;
+			if (copy > len)
+				copy = len;
+
+			off = offset - start;
+			ret = io_zcrx_recv_frag(req, ifq, frag, off, copy);
+			if (ret < 0)
+				goto out;
+
+			offset += ret;
+			len -= ret;
+			if (len == 0 || ret != copy)
+				goto out;
+		}
+		start = end;
+	}
+
+	skb_walk_frags(skb, frag_iter) {
+		if (WARN_ON(start > offset + len))
+			return -EFAULT;
+
+		end = start + frag_iter->len;
+		if (offset < end) {
+			copy = end - offset;
+			if (copy > len)
+				copy = len;
+
+			off = offset - start;
+			ret = io_zcrx_recv_skb(desc, frag_iter, off, copy);
+			if (ret < 0)
+				goto out;
+
+			offset += ret;
+			len -= ret;
+			if (len == 0 || ret != copy)
+				goto out;
+		}
+		start = end;
+	}
+
+out:
+	if (offset == start_off)
+		return ret;
+	return offset - start_off;
+}
+
+static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+				struct sock *sk, int flags)
+{
+	struct io_zcrx_args args = {
+		.req = req,
+		.ifq = ifq,
+		.sock = sk->sk_socket,
+	};
+	read_descriptor_t rd_desc = {
+		.count = 1,
+		.arg.data = &args,
+	};
+	int ret;
+
+	lock_sock(sk);
+	ret = tcp_read_sock(sk, &rd_desc, io_zcrx_recv_skb);
+	if (ret <= 0) {
+		if (ret < 0 || sock_flag(sk, SOCK_DONE))
+			goto out;
+		if (sk->sk_err)
+			ret = sock_error(sk);
+		else if (sk->sk_shutdown & RCV_SHUTDOWN)
+			goto out;
+		else if (sk->sk_state == TCP_CLOSE)
+			ret = -ENOTCONN;
+		else
+			ret = -EAGAIN;
+	} else if (sock_flag(sk, SOCK_DONE)) {
+		/* Make it to retry until it finally gets 0. */
+		ret = -EAGAIN;
+	}
+out:
+	release_sock(sk);
+	return ret;
+}
+
+int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+		 struct socket *sock, unsigned int flags)
+{
+	struct sock *sk = sock->sk;
+	const struct proto *prot = READ_ONCE(sk->sk_prot);
+
+	if (prot->recvmsg != tcp_recvmsg)
+		return -EPROTONOSUPPORT;
+
+	sock_rps_record_flow(sk);
+	return io_zcrx_tcp_recvmsg(req, ifq, sk, flags);
+}
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 464b4bd89b64..1f039ad45a63 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -3,6 +3,7 @@
 #define IOU_ZC_RX_H
 
 #include <linux/io_uring_types.h>
+#include <linux/socket.h>
 #include <net/page_pool/types.h>
 
 #define IO_ZC_RX_UREF			0x10000
@@ -43,6 +44,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 			 struct io_uring_zcrx_ifq_reg __user *arg);
 void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx);
 void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx);
+int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+		 struct socket *sock, unsigned int flags);
 #else
 static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 					struct io_uring_zcrx_ifq_reg __user *arg)
@@ -55,6 +58,14 @@ static inline void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
 static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 }
+static inline int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+			       struct socket *sock, unsigned int flags)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
+int io_recvzc(struct io_kiocb *req, unsigned int issue_flags);
+int io_recvzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+
 #endif
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (11 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 16:30   ` Jens Axboe
  2024-10-16 18:52 ` [PATCH v6 14/15] io_uring/zcrx: add copy fallback David Wei
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: David Wei <[email protected]>

Set the page pool memory provider for the rx queue configured for zero
copy to io_uring. Then the rx queue is reset using
netdev_rx_queue_restart() and netdev core + page pool will take care of
filling the rx queue from the io_uring zero copy memory provider.

For now, there is only one ifq so its destruction happens implicitly
during io_uring cleanup.
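
From the userspace side, tying the ifq to a particular netdev rx queue
happens at registration time. A minimal sketch, assuming the uapi
additions from this series, with register_zcrx_ifq() as an illustrative
helper and the ring size chosen arbitrarily:

#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

static int register_zcrx_ifq(int ring_fd, struct io_uring_zcrx_ifq_reg *reg,
			     unsigned int ifindex, unsigned int rxq_id)
{
	/* reg->area_ptr must already point at a filled io_uring_zcrx_area_reg */
	reg->if_idx = ifindex;		/* netdev to attach to */
	reg->if_rxq = rxq_id;		/* hw rx queue set up for zero copy */
	reg->rq_entries = 4096;		/* refill ring size, rounded to a pow2 */

	/* on success the kernel fills reg->offsets; the refill ring is then
	 * mmap'ed at IORING_OFF_RQ_RING using those offsets */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_ZCRX_IFQ, reg, 1);
}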

Signed-off-by: David Wei <[email protected]>
---
 io_uring/zcrx.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
 io_uring/zcrx.h |  2 ++
 2 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 477b0d1b7b91..3f4625730dbd 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -8,6 +8,7 @@
 #include <net/page_pool/helpers.h>
 #include <net/page_pool/memory_provider.h>
 #include <trace/events/page_pool.h>
+#include <net/netdev_rx_queue.h>
 #include <net/tcp.h>
 #include <net/rps.h>
 
@@ -36,6 +37,65 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
 	return container_of(owner, struct io_zcrx_area, nia);
 }
 
+static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
+{
+	struct netdev_rx_queue *rxq;
+	struct net_device *dev = ifq->dev;
+	int ret;
+
+	ASSERT_RTNL();
+
+	if (ifq_idx >= dev->num_rx_queues)
+		return -EINVAL;
+	ifq_idx = array_index_nospec(ifq_idx, dev->num_rx_queues);
+
+	rxq = __netif_get_rx_queue(ifq->dev, ifq_idx);
+	if (rxq->mp_params.mp_priv)
+		return -EEXIST;
+
+	ifq->if_rxq = ifq_idx;
+	rxq->mp_params.mp_ops = &io_uring_pp_zc_ops;
+	rxq->mp_params.mp_priv = ifq;
+	ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
+	if (ret)
+		goto fail;
+	return 0;
+fail:
+	rxq->mp_params.mp_ops = NULL;
+	rxq->mp_params.mp_priv = NULL;
+	ifq->if_rxq = -1;
+	return ret;
+}
+
+static void io_close_zc_rxq(struct io_zcrx_ifq *ifq)
+{
+	struct netdev_rx_queue *rxq;
+	int err;
+
+	if (ifq->if_rxq == -1)
+		return;
+
+	rtnl_lock();
+	if (WARN_ON_ONCE(ifq->if_rxq >= ifq->dev->num_rx_queues)) {
+		rtnl_unlock();
+		return;
+	}
+
+	rxq = __netif_get_rx_queue(ifq->dev, ifq->if_rxq);
+
+	WARN_ON_ONCE(rxq->mp_params.mp_priv != ifq);
+
+	rxq->mp_params.mp_ops = NULL;
+	rxq->mp_params.mp_priv = NULL;
+
+	err = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
+	if (err)
+		pr_devel("io_uring: can't restart a queue on zcrx close\n");
+
+	rtnl_unlock();
+	ifq->if_rxq = -1;
+}
+
 static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq,
 				 struct io_uring_zcrx_ifq_reg *reg)
 {
@@ -156,9 +216,12 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 {
+	io_close_zc_rxq(ifq);
+
 	if (ifq->area)
 		io_zcrx_free_area(ifq->area);
-
+	if (ifq->dev)
+		netdev_put(ifq->dev, &ifq->netdev_tracker);
 	io_free_rbuf_ring(ifq);
 	kfree(ifq);
 }
@@ -214,7 +277,18 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		goto err;
 
 	ifq->rq_entries = reg.rq_entries;
-	ifq->if_rxq = reg.if_rxq;
+
+	ret = -ENODEV;
+	rtnl_lock();
+	ifq->dev = netdev_get_by_index(current->nsproxy->net_ns, reg.if_idx,
+				       &ifq->netdev_tracker, GFP_KERNEL);
+	if (!ifq->dev)
+		goto err_rtnl_unlock;
+
+	ret = io_open_zc_rxq(ifq, reg.if_rxq);
+	if (ret)
+		goto err_rtnl_unlock;
+	rtnl_unlock();
 
 	ring_sz = sizeof(struct io_uring);
 	rqes_sz = sizeof(struct io_uring_zcrx_rqe) * ifq->rq_entries;
@@ -224,15 +298,20 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	reg.offsets.tail = offsetof(struct io_uring, tail);
 
 	if (copy_to_user(arg, &reg, sizeof(reg))) {
+		io_close_zc_rxq(ifq);
 		ret = -EFAULT;
 		goto err;
 	}
 	if (copy_to_user(u64_to_user_ptr(reg.area_ptr), &area, sizeof(area))) {
+		io_close_zc_rxq(ifq);
 		ret = -EFAULT;
 		goto err;
 	}
 	ctx->ifq = ifq;
 	return 0;
+
+err_rtnl_unlock:
+	rtnl_unlock();
 err:
 	io_zcrx_ifq_free(ifq);
 	return ret;
@@ -254,6 +333,9 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
 void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 	lockdep_assert_held(&ctx->uring_lock);
+
+	if (ctx->ifq)
+		io_close_zc_rxq(ctx->ifq);
 }
 
 static void io_zcrx_get_buf_uref(struct net_iov *niov)
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 1f039ad45a63..d3f6b6cdd647 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -5,6 +5,7 @@
 #include <linux/io_uring_types.h>
 #include <linux/socket.h>
 #include <net/page_pool/types.h>
+#include <net/net_trackers.h>
 
 #define IO_ZC_RX_UREF			0x10000
 #define IO_ZC_RX_KREF_MASK		(IO_ZC_RX_UREF - 1)
@@ -37,6 +38,7 @@ struct io_zcrx_ifq {
 	struct page			**rqe_pages;
 
 	u32				if_rxq;
+	netdevice_tracker		netdev_tracker;
 };
 
 #if defined(CONFIG_IO_URING_ZCRX)
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 14/15] io_uring/zcrx: add copy fallback
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (12 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 14:40   ` Paolo Abeni
  2024-10-16 18:52 ` [PATCH v6 15/15] io_uring/zcrx: throttle receive requests David Wei
  2024-10-21 15:27 ` [PATCH v6 00/15] io_uring zero copy rx Jens Axboe
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

There are scenarios in which the zerocopy path might get a normal
in-kernel buffer: it could be a mis-steered packet or simply the linear
part of an skb. Another use case is to allow the driver to allocate
kernel pages when it's out of zc buffers, which makes it more resilient
to spikes in load and allows the user to choose the balance between the
amount of memory provided and performance.

At the moment we fail such requests. Instead, grab a buffer from the
page pool, copy the data there, and return it to the user in the usual
way. Because the refill ring is private to the napi our page pool is
running from, this is done by stopping the napi via the napi_execute()
helper. It grabs only one buffer, which is inefficient, and improving it
is left for follow-up patches.

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 io_uring/zcrx.c | 133 +++++++++++++++++++++++++++++++++++++++++++++---
 io_uring/zcrx.h |   1 +
 2 files changed, 127 insertions(+), 7 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 3f4625730dbd..1f4db70e3370 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -5,6 +5,8 @@
 #include <linux/nospec.h>
 #include <linux/netdevice.h>
 #include <linux/io_uring.h>
+#include <linux/skbuff_ref.h>
+#include <net/busy_poll.h>
 #include <net/page_pool/helpers.h>
 #include <net/page_pool/memory_provider.h>
 #include <trace/events/page_pool.h>
@@ -28,6 +30,11 @@ struct io_zcrx_args {
 	struct socket		*sock;
 };
 
+struct io_zc_refill_data {
+	struct io_zcrx_ifq *ifq;
+	struct net_iov *niov;
+};
+
 static const struct memory_provider_ops io_uring_pp_zc_ops;
 
 static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
@@ -37,6 +44,13 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
 	return container_of(owner, struct io_zcrx_area, nia);
 }
 
+static inline struct page *io_zcrx_iov_page(const struct net_iov *niov)
+{
+	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
+
+	return area->pages[net_iov_idx(niov)];
+}
+
 static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
 {
 	struct netdev_rx_queue *rxq;
@@ -59,6 +73,13 @@ static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
 	ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
 	if (ret)
 		goto fail;
+
+	if (WARN_ON_ONCE(!ifq->pp)) {
+		ret = -EFAULT;
+		goto fail;
+	}
+	/* grab napi_id while still under rtnl */
+	ifq->napi_id = ifq->pp->p.napi->napi_id;
 	return 0;
 fail:
 	rxq->mp_params.mp_ops = NULL;
@@ -526,6 +547,7 @@ static void io_pp_zc_destroy(struct page_pool *pp)
 	page_pool_mp_release_area(pp, &ifq->area->nia);
 
 	ifq->pp = NULL;
+	ifq->napi_id = 0;
 
 	if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs))
 		return;
@@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
 	.scrub			= io_pp_zc_scrub,
 };
 
+static void io_napi_refill(void *data)
+{
+	struct io_zc_refill_data *rd = data;
+	struct io_zcrx_ifq *ifq = rd->ifq;
+	netmem_ref netmem;
+
+	if (WARN_ON_ONCE(!ifq->pp))
+		return;
+
+	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
+	if (!netmem)
+		return;
+	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
+		return;
+
+	rd->niov = netmem_to_net_iov(netmem);
+}
+
+static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
+{
+	struct io_zc_refill_data rd = {
+		.ifq = ifq,
+	};
+
+	napi_execute(ifq->napi_id, io_napi_refill, &rd);
+	return rd.niov;
+}
+
 static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov,
 			      struct io_zcrx_ifq *ifq, int off, int len)
 {
@@ -563,6 +613,45 @@ static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov,
 	return true;
 }
 
+static ssize_t io_zcrx_copy_chunk(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+				  void *data, unsigned int offset, size_t len)
+{
+	size_t copy_size, copied = 0;
+	int ret = 0, off = 0;
+	struct page *page;
+	u8 *vaddr;
+
+	do {
+		struct net_iov *niov;
+
+		niov = io_zc_get_buf_task_safe(ifq);
+		if (!niov) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		page = io_zcrx_iov_page(niov);
+		vaddr = kmap_local_page(page);
+		copy_size = min_t(size_t, PAGE_SIZE, len);
+		memcpy(vaddr, data + offset, copy_size);
+		kunmap_local(vaddr);
+
+		if (!io_zcrx_queue_cqe(req, niov, ifq, off, copy_size)) {
+			napi_pp_put_page(net_iov_to_netmem(niov));
+			return -ENOSPC;
+		}
+
+		io_zcrx_get_buf_uref(niov);
+		napi_pp_put_page(net_iov_to_netmem(niov));
+
+		offset += copy_size;
+		len -= copy_size;
+		copied += copy_size;
+	} while (offset < len);
+
+	return copied ? copied : ret;
+}
+
 static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 			     const skb_frag_t *frag, int off, int len)
 {
@@ -570,8 +659,24 @@ static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 
 	off += skb_frag_off(frag);
 
-	if (unlikely(!skb_frag_is_net_iov(frag)))
-		return -EOPNOTSUPP;
+	if (unlikely(!skb_frag_is_net_iov(frag))) {
+		struct page *page = skb_frag_page(frag);
+		u32 p_off, p_len, t, copied = 0;
+		u8 *vaddr;
+		int ret = 0;
+
+		skb_frag_foreach_page(frag, off, len,
+				      page, p_off, p_len, t) {
+			vaddr = kmap_local_page(page);
+			ret = io_zcrx_copy_chunk(req, ifq, vaddr, p_off, p_len);
+			kunmap_local(vaddr);
+
+			if (ret < 0)
+				return copied ? copied : ret;
+			copied += ret;
+		}
+		return copied;
+	}
 
 	niov = netmem_to_net_iov(frag->netmem);
 	if (niov->pp->mp_ops != &io_uring_pp_zc_ops ||
@@ -592,15 +697,29 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 	struct io_zcrx_ifq *ifq = args->ifq;
 	struct io_kiocb *req = args->req;
 	struct sk_buff *frag_iter;
-	unsigned start, start_off;
+	unsigned start, start_off = offset;
 	int i, copy, end, off;
 	int ret = 0;
 
-	start = skb_headlen(skb);
-	start_off = offset;
+	if (unlikely(offset < skb_headlen(skb))) {
+		ssize_t copied;
+		size_t to_copy;
 
-	if (offset < start)
-		return -EOPNOTSUPP;
+		to_copy = min_t(size_t, skb_headlen(skb) - offset, len);
+		copied = io_zcrx_copy_chunk(req, ifq, skb->data, offset, to_copy);
+		if (copied < 0) {
+			ret = copied;
+			goto out;
+		}
+		offset += copied;
+		len -= copied;
+		if (!len)
+			goto out;
+		if (offset != skb_headlen(skb))
+			goto out;
+	}
+
+	start = skb_headlen(skb);
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		const skb_frag_t *frag;
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index d3f6b6cdd647..5d7920972e95 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -39,6 +39,7 @@ struct io_zcrx_ifq {
 
 	u32				if_rxq;
 	netdevice_tracker		netdev_tracker;
+	unsigned			napi_id;
 };
 
 #if defined(CONFIG_IO_URING_ZCRX)
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 15/15] io_uring/zcrx: throttle receive requests
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (13 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 14/15] io_uring/zcrx: add copy fallback David Wei
@ 2024-10-16 18:52 ` David Wei
  2024-10-21 16:36   ` Jens Axboe
  2024-10-21 15:27 ` [PATCH v6 00/15] io_uring zero copy rx Jens Axboe
  15 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-16 18:52 UTC (permalink / raw)
  To: io-uring, netdev
  Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

From: Pavel Begunkov <[email protected]>

io_zcrx_tcp_recvmsg() continues until it fails or there is nothing to
receive. If the other side sends fast enough, we might get stuck in
io_zcrx_tcp_recvmsg() producing more and more CQEs but never letting the
user handle them, leading to unbounded latencies.

Break out of it based on an arbitrarily chosen limit; the upper layer
will either return to userspace or requeue the request.

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
 io_uring/net.c  |  5 ++++-
 io_uring/zcrx.c | 17 ++++++++++++++---
 io_uring/zcrx.h |  6 ++++--
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 9716ecdcb570..27966dfa2938 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1255,10 +1255,13 @@ int io_recvzc(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(!sock))
 		return -ENOTSOCK;
 
-	ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT);
+	ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT,
+			   issue_flags);
 	if (unlikely(ret <= 0) && ret != -EAGAIN) {
 		if (ret == -ERESTARTSYS)
 			ret = -EINTR;
+		if (ret == IOU_REQUEUE)
+			return IOU_REQUEUE;
 
 		req_set_fail(req);
 		io_req_set_res(req, ret, 0);
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 1f4db70e3370..a2c753e8e46e 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -24,10 +24,13 @@
 
 #define IO_RQ_MAX_ENTRIES		32768
 
+#define IO_SKBS_PER_CALL_LIMIT	20
+
 struct io_zcrx_args {
 	struct io_kiocb		*req;
 	struct io_zcrx_ifq	*ifq;
 	struct socket		*sock;
+	unsigned		nr_skbs;
 };
 
 struct io_zc_refill_data {
@@ -701,6 +704,9 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 	int i, copy, end, off;
 	int ret = 0;
 
+	if (unlikely(args->nr_skbs++ > IO_SKBS_PER_CALL_LIMIT))
+		return -EAGAIN;
+
 	if (unlikely(offset < skb_headlen(skb))) {
 		ssize_t copied;
 		size_t to_copy;
@@ -778,7 +784,8 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 }
 
 static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-				struct sock *sk, int flags)
+				struct sock *sk, int flags,
+				unsigned int issue_flags)
 {
 	struct io_zcrx_args args = {
 		.req = req,
@@ -804,6 +811,9 @@ static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 			ret = -ENOTCONN;
 		else
 			ret = -EAGAIN;
+	} else if (unlikely(args.nr_skbs > IO_SKBS_PER_CALL_LIMIT) &&
+		   (issue_flags & IO_URING_F_MULTISHOT)) {
+		ret = IOU_REQUEUE;
 	} else if (sock_flag(sk, SOCK_DONE)) {
 		/* Make it to retry until it finally gets 0. */
 		ret = -EAGAIN;
@@ -814,7 +824,8 @@ static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 }
 
 int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-		 struct socket *sock, unsigned int flags)
+		 struct socket *sock, unsigned int flags,
+		 unsigned int issue_flags)
 {
 	struct sock *sk = sock->sk;
 	const struct proto *prot = READ_ONCE(sk->sk_prot);
@@ -823,5 +834,5 @@ int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 		return -EPROTONOSUPPORT;
 
 	sock_rps_record_flow(sk);
-	return io_zcrx_tcp_recvmsg(req, ifq, sk, flags);
+	return io_zcrx_tcp_recvmsg(req, ifq, sk, flags, issue_flags);
 }
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 5d7920972e95..45485bdce61a 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -48,7 +48,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx);
 void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx);
 int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-		 struct socket *sock, unsigned int flags);
+		 struct socket *sock, unsigned int flags,
+		 unsigned int issue_flags);
 #else
 static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 					struct io_uring_zcrx_ifq_reg __user *arg)
@@ -62,7 +63,8 @@ static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 }
 static inline int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-			       struct socket *sock, unsigned int flags)
+				struct socket *sock, unsigned int flags,
+				unsigned int issue_flags)
 {
 	return -EOPNOTSUPP;
 }
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-16 18:52 ` [PATCH v6 08/15] net: add helper executing custom callback from napi David Wei
@ 2024-10-21 14:25   ` Paolo Abeni
  2024-10-21 15:49     ` Jens Axboe
  2024-10-21 17:16     ` Pavel Begunkov
  0 siblings, 2 replies; 44+ messages in thread
From: Paolo Abeni @ 2024-10-21 14:25 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Hi,

On 10/16/24 20:52, David Wei wrote:
> @@ -6503,6 +6511,41 @@ void napi_busy_loop(unsigned int napi_id,
>  }
>  EXPORT_SYMBOL(napi_busy_loop);
>  
> +void napi_execute(unsigned napi_id,
> +		  void (*cb)(void *), void *cb_arg)
> +{
> +	struct napi_struct *napi;
> +	void *have_poll_lock = NULL;

Minor nit: please respect the reverse x-mas tree order.

> +
> +	guard(rcu)();

Since this will land in net core code, please use the explicit RCU
read lock/unlock:

https://elixir.bootlin.com/linux/v6.12-rc3/source/Documentation/process/maintainer-netdev.rst#L387

> +	napi = napi_by_id(napi_id);
> +	if (!napi)
> +		return;
> +
> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> +		preempt_disable();
> +
> +	for (;;) {
> +		local_bh_disable();
> +
> +		if (napi_state_start_busy_polling(napi, 0)) {
> +			have_poll_lock = netpoll_poll_lock(napi);
> +			cb(cb_arg);
> +			local_bh_enable();
> +			busy_poll_stop(napi, have_poll_lock, 0, 1);
> +			break;
> +		}
> +
> +		local_bh_enable();
> +		if (unlikely(need_resched()))
> +			break;
> +		cpu_relax();

Don't you need a 'loop_end' condition here?

Side note not specifically related to this patch: I likely got lost in
a previous revision, but it's unclear to me what the merge plan is here;
could you please (re-)word it?

Thanks!

Paolo


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 14/15] io_uring/zcrx: add copy fallback
  2024-10-16 18:52 ` [PATCH v6 14/15] io_uring/zcrx: add copy fallback David Wei
@ 2024-10-21 14:40   ` Paolo Abeni
  2024-10-21 18:31     ` David Wei
  0 siblings, 1 reply; 44+ messages in thread
From: Paolo Abeni @ 2024-10-21 14:40 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/16/24 20:52, David Wei wrote:
> @@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
>  	.scrub			= io_pp_zc_scrub,
>  };
>  
> +static void io_napi_refill(void *data)
> +{
> +	struct io_zc_refill_data *rd = data;
> +	struct io_zcrx_ifq *ifq = rd->ifq;
> +	netmem_ref netmem;
> +
> +	if (WARN_ON_ONCE(!ifq->pp))
> +		return;
> +
> +	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
> +	if (!netmem)
> +		return;
> +	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
> +		return;
> +
> +	rd->niov = netmem_to_net_iov(netmem);
> +}
> +
> +static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
> +{
> +	struct io_zc_refill_data rd = {
> +		.ifq = ifq,
> +	};
> +
> +	napi_execute(ifq->napi_id, io_napi_refill, &rd);

Under UDP flood the above has unbounded/unlimited execution time, unless
you set NAPI_STATE_PREFER_BUSY_POLL. Is the allocation scheme here
somehow preventing such an unlimited wait?

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 00/15] io_uring zero copy rx
  2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
                   ` (14 preceding siblings ...)
  2024-10-16 18:52 ` [PATCH v6 15/15] io_uring/zcrx: throttle receive requests David Wei
@ 2024-10-21 15:27 ` Jens Axboe
  15 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 15:27 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Hi,

Ran some quick testing with the updated series, on top of 6.12-rc4 and
netdev-next. Still works fine for me, didn't see any hiccups, and
performance is still where it was with the previous series (i.e. perfectly
stellar).

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue
  2024-10-16 18:52 ` [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue David Wei
@ 2024-10-21 15:33   ` Jens Axboe
  0 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 15:33 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/16/24 12:52 PM, David Wei wrote:
> From: David Wei <[email protected]>
> 
> Add a new object called an interface queue (ifq) that represents a net
> rx queue that has been configured for zero copy. Each ifq is registered
> using a new registration opcode IORING_REGISTER_ZCRX_IFQ.
> 
> The refill queue is allocated by the kernel and mapped by userspace
> using a new offset IORING_OFF_RQ_RING, in a similar fashion to the main
> SQ/CQ. It is used by userspace to return buffers that it is done with,
> which will then be re-used by the netdev again.
> 
> The main CQ ring is used to notify userspace of received data by using
> the upper 16 bytes of a big CQE as a new struct io_uring_zcrx_cqe. Each
> entry contains the offset + len to the data.

Looks nicer now, I like the Kconfig symbol changes.

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
  2024-10-16 18:52 ` [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area David Wei
@ 2024-10-21 15:35   ` Jens Axboe
  2024-10-21 16:28     ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 15:35 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/16/24 12:52 PM, David Wei wrote:
> +static int io_zcrx_create_area(struct io_ring_ctx *ctx,
> +			       struct io_zcrx_ifq *ifq,
> +			       struct io_zcrx_area **res,
> +			       struct io_uring_zcrx_area_reg *area_reg)
> +{
> +	struct io_zcrx_area *area;
> +	int i, ret, nr_pages;
> +	struct iovec iov;
> +
> +	if (area_reg->flags || area_reg->rq_area_token)
> +		return -EINVAL;
> +	if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
> +		return -EINVAL;
> +	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
> +		return -EINVAL;
> +
> +	iov.iov_base = u64_to_user_ptr(area_reg->addr);
> +	iov.iov_len = area_reg->len;
> +	ret = io_buffer_validate(&iov);
> +	if (ret)
> +		return ret;
> +
> +	ret = -ENOMEM;
> +	area = kzalloc(sizeof(*area), GFP_KERNEL);
> +	if (!area)
> +		goto err;
> +
> +	area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
> +				   &nr_pages);
> +	if (IS_ERR(area->pages)) {
> +		ret = PTR_ERR(area->pages);
> +		area->pages = NULL;
> +		goto err;
> +	}
> +	area->nia.num_niovs = nr_pages;
> +
> +	area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
> +					 GFP_KERNEL | __GFP_ZERO);
> +	if (!area->nia.niovs)
> +		goto err;
> +
> +	area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
> +					GFP_KERNEL | __GFP_ZERO);
> +	if (!area->freelist)
> +		goto err;
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		area->freelist[i] = i;
> +	}
> +
> +	area->free_count = nr_pages;
> +	area->ifq = ifq;
> +	/* we're only supporting one area per ifq for now */
> +	area->area_id = 0;
> +	area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
> +	spin_lock_init(&area->freelist_lock);
> +	*res = area;
> +	return 0;
> +err:
> +	if (area)
> +		io_zcrx_free_area(area);
> +	return ret;
> +}

Minor nit, but I think this would be nicer returning area and just using
ERR_PTR() for the errors.
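
I.e. roughly this shape (just a sketch of the idea, not tested):

	static struct io_zcrx_area *io_zcrx_create_area(struct io_ring_ctx *ctx,
							struct io_zcrx_ifq *ifq,
							struct io_uring_zcrx_area_reg *area_reg)
	{
		struct io_zcrx_area *area;
		int ret, nr_pages;

		/* ... validation as before ... */

		area = kzalloc(sizeof(*area), GFP_KERNEL);
		if (!area)
			return ERR_PTR(-ENOMEM);

		area->pages = io_pin_pages((unsigned long)area_reg->addr,
					   area_reg->len, &nr_pages);
		if (IS_ERR(area->pages)) {
			ret = PTR_ERR(area->pages);
			area->pages = NULL;
			goto err;
		}

		/* ... rest of the setup as before ... */

		return area;
	err:
		io_zcrx_free_area(area);
		return ERR_PTR(ret);
	}

and at the call site:

	area = io_zcrx_create_area(ctx, ifq, &area_reg);
	if (IS_ERR(area))
		return PTR_ERR(area);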

Outside of that:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider
  2024-10-16 18:52 ` [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider David Wei
@ 2024-10-21 15:46   ` Jens Axboe
  0 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 15:46 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Looks fine to me:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-21 14:25   ` Paolo Abeni
@ 2024-10-21 15:49     ` Jens Axboe
  2024-10-21 16:34       ` Pavel Begunkov
  2024-10-21 17:16     ` Pavel Begunkov
  1 sibling, 1 reply; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 15:49 UTC (permalink / raw)
  To: Paolo Abeni, David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 8:25 AM, Paolo Abeni wrote:
> Side notes not specifically related to this patch: I likely got lost in
> previous revision, but it's unclear to me which is the merge plan here,
> could you please (re-)word it?

In the past when there have been dependencies like this, what we've done
is:

Someone, usually Jakub, sets up a branch with the net bits only. Both
the io_uring tree and netdev-next pull that in.

With that, the io_uring tree can then apply the io_uring specific
patches on top. And either side can send a pull without impacting the
other tree.

I like that way of doing it, as it keeps things separate, yet still easy
to deal with for the side that needs further work/patches on top.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request
  2024-10-16 18:52 ` [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request David Wei
@ 2024-10-21 16:05   ` Jens Axboe
  0 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 16:05 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Looks good to me:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
  2024-10-21 15:35   ` Jens Axboe
@ 2024-10-21 16:28     ` Pavel Begunkov
  2024-10-21 16:29       ` Jens Axboe
  0 siblings, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-21 16:28 UTC (permalink / raw)
  To: Jens Axboe, David Wei, io-uring, netdev
  Cc: Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 16:35, Jens Axboe wrote:
> On 10/16/24 12:52 PM, David Wei wrote:
>> +static int io_zcrx_create_area(struct io_ring_ctx *ctx,
>> +			       struct io_zcrx_ifq *ifq,
>> +			       struct io_zcrx_area **res,
>> +			       struct io_uring_zcrx_area_reg *area_reg)
>> +{
>> +	struct io_zcrx_area *area;
>> +	int i, ret, nr_pages;
>> +	struct iovec iov;
>> +
>> +	if (area_reg->flags || area_reg->rq_area_token)
>> +		return -EINVAL;
>> +	if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
>> +		return -EINVAL;
>> +	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
>> +		return -EINVAL;
>> +
>> +	iov.iov_base = u64_to_user_ptr(area_reg->addr);
>> +	iov.iov_len = area_reg->len;
>> +	ret = io_buffer_validate(&iov);
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = -ENOMEM;
>> +	area = kzalloc(sizeof(*area), GFP_KERNEL);
>> +	if (!area)
>> +		goto err;
>> +
>> +	area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
>> +				   &nr_pages);
>> +	if (IS_ERR(area->pages)) {
>> +		ret = PTR_ERR(area->pages);
>> +		area->pages = NULL;
>> +		goto err;
>> +	}
>> +	area->nia.num_niovs = nr_pages;
>> +
>> +	area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
>> +					 GFP_KERNEL | __GFP_ZERO);
>> +	if (!area->nia.niovs)
>> +		goto err;
>> +
>> +	area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
>> +					GFP_KERNEL | __GFP_ZERO);
>> +	if (!area->freelist)
>> +		goto err;
>> +
>> +	for (i = 0; i < nr_pages; i++) {
>> +		area->freelist[i] = i;
>> +	}
>> +
>> +	area->free_count = nr_pages;
>> +	area->ifq = ifq;
>> +	/* we're only supporting one area per ifq for now */
>> +	area->area_id = 0;
>> +	area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
>> +	spin_lock_init(&area->freelist_lock);
>> +	*res = area;
>> +	return 0;
>> +err:
>> +	if (area)
>> +		io_zcrx_free_area(area);
>> +	return ret;
>> +}
> 
> Minor nit, but I think this would be nicer returning area and just using
> ERR_PTR() for the errors.

I'd rather avoid it. Too often null vs IS_ERR checking gets
messed up down the road and the compiler doesn't help with it
at all.

Not related to the patch, but would be nice to have a type safer
way for that, e.g. returning some new type not directly
cast'able to the pointer.

> 
> Outside of that:
> 
> Reviewed-by: Jens Axboe <[email protected]>
> 

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
  2024-10-21 16:28     ` Pavel Begunkov
@ 2024-10-21 16:29       ` Jens Axboe
  2024-10-21 16:39         ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 16:29 UTC (permalink / raw)
  To: Pavel Begunkov, David Wei, io-uring, netdev
  Cc: Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 10:28 AM, Pavel Begunkov wrote:
> On 10/21/24 16:35, Jens Axboe wrote:
>> On 10/16/24 12:52 PM, David Wei wrote:
>>> +static int io_zcrx_create_area(struct io_ring_ctx *ctx,
>>> +                   struct io_zcrx_ifq *ifq,
>>> +                   struct io_zcrx_area **res,
>>> +                   struct io_uring_zcrx_area_reg *area_reg)
>>> +{
>>> +    struct io_zcrx_area *area;
>>> +    int i, ret, nr_pages;
>>> +    struct iovec iov;
>>> +
>>> +    if (area_reg->flags || area_reg->rq_area_token)
>>> +        return -EINVAL;
>>> +    if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
>>> +        return -EINVAL;
>>> +    if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
>>> +        return -EINVAL;
>>> +
>>> +    iov.iov_base = u64_to_user_ptr(area_reg->addr);
>>> +    iov.iov_len = area_reg->len;
>>> +    ret = io_buffer_validate(&iov);
>>> +    if (ret)
>>> +        return ret;
>>> +
>>> +    ret = -ENOMEM;
>>> +    area = kzalloc(sizeof(*area), GFP_KERNEL);
>>> +    if (!area)
>>> +        goto err;
>>> +
>>> +    area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
>>> +                   &nr_pages);
>>> +    if (IS_ERR(area->pages)) {
>>> +        ret = PTR_ERR(area->pages);
>>> +        area->pages = NULL;
>>> +        goto err;
>>> +    }
>>> +    area->nia.num_niovs = nr_pages;
>>> +
>>> +    area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
>>> +                     GFP_KERNEL | __GFP_ZERO);
>>> +    if (!area->nia.niovs)
>>> +        goto err;
>>> +
>>> +    area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
>>> +                    GFP_KERNEL | __GFP_ZERO);
>>> +    if (!area->freelist)
>>> +        goto err;
>>> +
>>> +    for (i = 0; i < nr_pages; i++) {
>>> +        area->freelist[i] = i;
>>> +    }
>>> +
>>> +    area->free_count = nr_pages;
>>> +    area->ifq = ifq;
>>> +    /* we're only supporting one area per ifq for now */
>>> +    area->area_id = 0;
>>> +    area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
>>> +    spin_lock_init(&area->freelist_lock);
>>> +    *res = area;
>>> +    return 0;
>>> +err:
>>> +    if (area)
>>> +        io_zcrx_free_area(area);
>>> +    return ret;
>>> +}
>>
>> Minor nit, but I think this would be nicer returning area and just using
>> ERR_PTR() for the errors.
> 
> I'd rather avoid it. Too often null vs IS_ERR checking gets
> messed up down the road and the compiler doesn't help with it
> at all.

The main issue imho is when people mix NULL and ERR_PTR; the pure "valid
pointer or non-null error pointer" style seems to be OK in terms of
maintainability. But like I said, not a huge deal, and it's not hot path
material so it doesn't matter in terms of that.

> Not related to the patch, but would be nice to have a type safer
> way for that, e.g. returning some new type not directly
> cast'able to the pointer.

Definitely, room for improvement in the infrastructure for this.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue
  2024-10-16 18:52 ` [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue David Wei
@ 2024-10-21 16:30   ` Jens Axboe
  0 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 16:30 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Looks good to me:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-21 15:49     ` Jens Axboe
@ 2024-10-21 16:34       ` Pavel Begunkov
  0 siblings, 0 replies; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-21 16:34 UTC (permalink / raw)
  To: Jens Axboe, Paolo Abeni, David Wei, io-uring, netdev
  Cc: Jakub Kicinski, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 16:49, Jens Axboe wrote:
> On 10/21/24 8:25 AM, Paolo Abeni wrote:
>> Side notes not specifically related to this patch: I likely got lost in
>> previous revision, but it's unclear to me which is the merge plan here,
>> could you please (re-)word it?
> 
> In the past when there have been dependencies like this, what we've done
> is:
> 
> Someone, usually Jakub, sets up a branch with the net bits only. Both
> the io_uring tree and netdev-next pull that in.
> 
> With that, the io_uring tree can then apply the io_uring specific
> patches on top. And either side can send a pull without impacting the
> other tree.
> 
> I like that way of doing it, as it keeps things separate, yet still easy
> to deal with for the side that needs further work/patches on top.

Yep, I outlined the same thing in one of the comments (I should put it
into the cover letter). We'll get a branch with the net/ patches on a
common base, so that it can be pulled into the net and io-uring trees.
Then we can deal with the io_uring patches ourselves.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 15/15] io_uring/zcrx: throttle receive requests
  2024-10-16 18:52 ` [PATCH v6 15/15] io_uring/zcrx: throttle receive requests David Wei
@ 2024-10-21 16:36   ` Jens Axboe
  0 siblings, 0 replies; 44+ messages in thread
From: Jens Axboe @ 2024-10-21 16:36 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Pavel Begunkov, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

Looks fine to me:

Reviewed-by: Jens Axboe <[email protected]>

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
  2024-10-21 16:29       ` Jens Axboe
@ 2024-10-21 16:39         ` Pavel Begunkov
  0 siblings, 0 replies; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-21 16:39 UTC (permalink / raw)
  To: Jens Axboe, David Wei, io-uring, netdev
  Cc: Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 17:29, Jens Axboe wrote:
> On 10/21/24 10:28 AM, Pavel Begunkov wrote:
>> On 10/21/24 16:35, Jens Axboe wrote:
>>> On 10/16/24 12:52 PM, David Wei wrote:
>>>> +static int io_zcrx_create_area(struct io_ring_ctx *ctx,
>>>> +                   struct io_zcrx_ifq *ifq,
>>>> +                   struct io_zcrx_area **res,
>>>> +                   struct io_uring_zcrx_area_reg *area_reg)
>>>> +{
>>>> +    struct io_zcrx_area *area;
>>>> +    int i, ret, nr_pages;
>>>> +    struct iovec iov;
>>>> +
>>>> +    if (area_reg->flags || area_reg->rq_area_token)
>>>> +        return -EINVAL;
>>>> +    if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
>>>> +        return -EINVAL;
>>>> +    if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
>>>> +        return -EINVAL;
>>>> +
>>>> +    iov.iov_base = u64_to_user_ptr(area_reg->addr);
>>>> +    iov.iov_len = area_reg->len;
>>>> +    ret = io_buffer_validate(&iov);
>>>> +    if (ret)
>>>> +        return ret;
>>>> +
>>>> +    ret = -ENOMEM;
>>>> +    area = kzalloc(sizeof(*area), GFP_KERNEL);
>>>> +    if (!area)
>>>> +        goto err;
>>>> +
>>>> +    area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
>>>> +                   &nr_pages);
>>>> +    if (IS_ERR(area->pages)) {
>>>> +        ret = PTR_ERR(area->pages);
>>>> +        area->pages = NULL;
>>>> +        goto err;
>>>> +    }
>>>> +    area->nia.num_niovs = nr_pages;
>>>> +
>>>> +    area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
>>>> +                     GFP_KERNEL | __GFP_ZERO);
>>>> +    if (!area->nia.niovs)
>>>> +        goto err;
>>>> +
>>>> +    area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
>>>> +                    GFP_KERNEL | __GFP_ZERO);
>>>> +    if (!area->freelist)
>>>> +        goto err;
>>>> +
>>>> +    for (i = 0; i < nr_pages; i++) {
>>>> +        area->freelist[i] = i;
>>>> +    }
>>>> +
>>>> +    area->free_count = nr_pages;
>>>> +    area->ifq = ifq;
>>>> +    /* we're only supporting one area per ifq for now */
>>>> +    area->area_id = 0;
>>>> +    area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
>>>> +    spin_lock_init(&area->freelist_lock);
>>>> +    *res = area;
>>>> +    return 0;
>>>> +err:
>>>> +    if (area)
>>>> +        io_zcrx_free_area(area);
>>>> +    return ret;
>>>> +}
>>>
>>> Minor nit, but I think this would be nicer returning area and just using
>>> ERR_PTR() for the errors.
>>
>> I'd rather avoid it. Too often null vs IS_ERR checking gets
>> messed up down the road and the compiler doesn't help with it
>> at all.
> 
> The main issue imho is when people mix NULL and ERR_PTR; the pure "valid
> pointer or non-null error pointer" style seems to be OK in terms of

Right, I meant it in general, mixing normal pointer types with
the implicit type that can have an error.

> maintainability. But like I said, not a huge deal, and it's not hot path
> material so doesn't matter in terms of that.

I agree it's maintainable, but this way I don't even need to think
about it.

>> Not related to the patch, but would be nice to have a type safer
>> way for that, e.g. returning some new type not directly
>> cast'able to the pointer.
> 
> Definitely, room for improvement in the infrastructure for this.
> 

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-21 14:25   ` Paolo Abeni
  2024-10-21 15:49     ` Jens Axboe
@ 2024-10-21 17:16     ` Pavel Begunkov
  2024-10-22  7:47       ` Paolo Abeni
  1 sibling, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-21 17:16 UTC (permalink / raw)
  To: Paolo Abeni, David Wei, io-uring, netdev
  Cc: Jens Axboe, Jakub Kicinski, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 15:25, Paolo Abeni wrote:
> Hi,
> 
> On 10/16/24 20:52, David Wei wrote:
>> @@ -6503,6 +6511,41 @@ void napi_busy_loop(unsigned int napi_id,
>>   }
>>   EXPORT_SYMBOL(napi_busy_loop);
>>   
>> +void napi_execute(unsigned napi_id,
>> +		  void (*cb)(void *), void *cb_arg)
>> +{
>> +	struct napi_struct *napi;
>> +	void *have_poll_lock = NULL;
> 
> Minor nit: please respect the reverse x-mas tree order.
> 
>> +
>> +	guard(rcu)();
> 
> Since this will land into net core code, please use the explicit RCU
> read lock/unlock:
> 
> https://elixir.bootlin.com/linux/v6.12-rc3/source/Documentation/process/maintainer-netdev.rst#L387

I missed the doc update, will change it, thanks


>> +	napi = napi_by_id(napi_id);
>> +	if (!napi)
>> +		return;
>> +
>> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
>> +		preempt_disable();
>> +
>> +	for (;;) {
>> +		local_bh_disable();
>> +
>> +		if (napi_state_start_busy_polling(napi, 0)) {
>> +			have_poll_lock = netpoll_poll_lock(napi);
>> +			cb(cb_arg);
>> +			local_bh_enable();
>> +			busy_poll_stop(napi, have_poll_lock, 0, 1);
>> +			break;
>> +		}
>> +
>> +		local_bh_enable();
>> +		if (unlikely(need_resched()))
>> +			break;
>> +		cpu_relax();
> 
> Don't you need a 'loop_end' condition here?

As you mentioned in 14/15, it can indeed spin for long and is bound only
by need_resched(). Do you think it's reasonable to wait for it without a
time limit with NAPI_STATE_PREFER_BUSY_POLL? softirq should yield napi
after it exhausts the budget, it should limit it well enough, what do
you think?

The only ugly part is that I don't want it to mess with the
NAPI_F_PREFER_BUSY_POLL in busy_poll_stop()

busy_poll_stop() {
	...
	clear_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state);
	if (flags & NAPI_F_PREFER_BUSY_POLL) {
		napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
		if (napi->defer_hard_irqs_count && timeout) {
			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
			skip_schedule = true;
		}
	}
}

Is it fine to set PREFER_BUSY_POLL but do the stop call without? E.g.

set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
...
busy_poll_stop(napi, flags=0);
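
I.e. in napi_execute() terms, roughly (a sketch of the idea only; where
the bit would need to be cleared again, if at all, is left open here):

	set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);

	for (;;) {
		local_bh_disable();
		if (napi_state_start_busy_polling(napi, 0)) {
			have_poll_lock = netpoll_poll_lock(napi);
			cb(cb_arg);
			local_bh_enable();
			/* flags == 0: the defer timer branch above is skipped */
			busy_poll_stop(napi, have_poll_lock, 0, 1);
			break;
		}
		local_bh_enable();
		/* ... need_resched()/cpu_relax() as before ... */
	}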


-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 14/15] io_uring/zcrx: add copy fallback
  2024-10-21 14:40   ` Paolo Abeni
@ 2024-10-21 18:31     ` David Wei
  2024-10-22  7:48       ` Paolo Abeni
  0 siblings, 1 reply; 44+ messages in thread
From: David Wei @ 2024-10-21 18:31 UTC (permalink / raw)
  To: Paolo Abeni, io-uring, netdev
  Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 2024-10-21 07:40, Paolo Abeni wrote:
> On 10/16/24 20:52, David Wei wrote:
>> @@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
>>  	.scrub			= io_pp_zc_scrub,
>>  };
>>  
>> +static void io_napi_refill(void *data)
>> +{
>> +	struct io_zc_refill_data *rd = data;
>> +	struct io_zcrx_ifq *ifq = rd->ifq;
>> +	netmem_ref netmem;
>> +
>> +	if (WARN_ON_ONCE(!ifq->pp))
>> +		return;
>> +
>> +	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
>> +	if (!netmem)
>> +		return;
>> +	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
>> +		return;
>> +
>> +	rd->niov = netmem_to_net_iov(netmem);
>> +}
>> +
>> +static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
>> +{
>> +	struct io_zc_refill_data rd = {
>> +		.ifq = ifq,
>> +	};
>> +
>> +	napi_execute(ifq->napi_id, io_napi_refill, &rd);
> 
> Under UDP flood the above has unbounded/unlimited execution time, unless
> you set NAPI_STATE_PREFER_BUSY_POLL. Is the allocation schema here
> somehow preventing such unlimited wait?

Hi Paolo. Do you mean that under UDP flood, napi_execute() will have
unbounded execution time because napi_state_start_busy_polling() and
need_resched() will always return false? My understanding is that
need_resched() will eventually kick the caller task out of
napi_execute().

> 
> Thanks,
> 
> Paolo
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-21 17:16     ` Pavel Begunkov
@ 2024-10-22  7:47       ` Paolo Abeni
  2024-10-22 15:36         ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Paolo Abeni @ 2024-10-22  7:47 UTC (permalink / raw)
  To: Pavel Begunkov, David Wei
  Cc: Jens Axboe, Jakub Kicinski, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, io-uring, netdev

Hi,

On 10/21/24 19:16, Pavel Begunkov wrote:
> On 10/21/24 15:25, Paolo Abeni wrote:
>> On 10/16/24 20:52, David Wei wrote:

[...]
>>> +	napi = napi_by_id(napi_id);
>>> +	if (!napi)
>>> +		return;
>>> +
>>> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
>>> +		preempt_disable();
>>> +
>>> +	for (;;) {
>>> +		local_bh_disable();
>>> +
>>> +		if (napi_state_start_busy_polling(napi, 0)) {
>>> +			have_poll_lock = netpoll_poll_lock(napi);
>>> +			cb(cb_arg);
>>> +			local_bh_enable();
>>> +			busy_poll_stop(napi, have_poll_lock, 0, 1);
>>> +			break;
>>> +		}
>>> +
>>> +		local_bh_enable();
>>> +		if (unlikely(need_resched()))
>>> +			break;
>>> +		cpu_relax();
>>
>> Don't you need a 'loop_end' condition here?
> 
> As you mentioned in 14/15, it can indeed spin for long and is bound only
> by need_resched(). Do you think it's reasonable to wait for it without a
> time limit with NAPI_STATE_PREFER_BUSY_POLL? softirq should yield napi
> after it exhausts the budget, it should limit it well enough, what do
> you think?
> 
> The only ugly part is that I don't want it to mess with the
> NAPI_F_PREFER_BUSY_POLL in busy_poll_stop()
> 
> busy_poll_stop() {
> 	...
> 	clear_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state);
> 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
> 		napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
> 		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
> 		if (napi->defer_hard_irqs_count && timeout) {
> 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
> 			skip_schedule = true;
> 		}
> 	}
> }

Why do you want to avoid such a branch? It will only take action when
user-space explicitly wants to leverage the hrtimer to check for
incoming packets. In such a case, I think the kernel should try to respect
the user configuration.

> Is it fine to set PREFER_BUSY_POLL but do the stop call without? E.g.
> 
> set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
> ...
> busy_poll_stop(napi, flags=0);

My preference is for using NAPI_STATE_PREFER_BUSY_POLL consistently. It
should ensure a reasonably low latency for napi_execute() and consistent
infra behavior. Unless I'm missing some dangerous side effect ;)

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 14/15] io_uring/zcrx: add copy fallback
  2024-10-21 18:31     ` David Wei
@ 2024-10-22  7:48       ` Paolo Abeni
  0 siblings, 0 replies; 44+ messages in thread
From: Paolo Abeni @ 2024-10-22  7:48 UTC (permalink / raw)
  To: David Wei, io-uring, netdev
  Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, David S. Miller,
	Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela

On 10/21/24 20:31, David Wei wrote:
> On 2024-10-21 07:40, Paolo Abeni wrote:
>> On 10/16/24 20:52, David Wei wrote:
>>> @@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
>>>  	.scrub			= io_pp_zc_scrub,
>>>  };
>>>  
>>> +static void io_napi_refill(void *data)
>>> +{
>>> +	struct io_zc_refill_data *rd = data;
>>> +	struct io_zcrx_ifq *ifq = rd->ifq;
>>> +	netmem_ref netmem;
>>> +
>>> +	if (WARN_ON_ONCE(!ifq->pp))
>>> +		return;
>>> +
>>> +	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
>>> +	if (!netmem)
>>> +		return;
>>> +	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
>>> +		return;
>>> +
>>> +	rd->niov = netmem_to_net_iov(netmem);
>>> +}
>>> +
>>> +static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
>>> +{
>>> +	struct io_zc_refill_data rd = {
>>> +		.ifq = ifq,
>>> +	};
>>> +
>>> +	napi_execute(ifq->napi_id, io_napi_refill, &rd);
>>
>> Under UDP flood the above has unbounded/unlimited execution time, unless
>> you set NAPI_STATE_PREFER_BUSY_POLL. Is the allocation schema here
>> somehow preventing such unlimited wait?
> 
> Hi Paolo. Do you mean that under UDP flood, napi_execute() will have
> unbounded execution time because napi_state_start_busy_polling() and
> need_resched() will always return false? My understanding is that
> need_resched() will eventually kick the caller task out of
> napi_execute().

Sorry for the short reply. Let's try to consolidate this discussion on
patch 8, which is strictly related and has the relevant code more handy.

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/15] net: add helper executing custom callback from napi
  2024-10-22  7:47       ` Paolo Abeni
@ 2024-10-22 15:36         ` Pavel Begunkov
  0 siblings, 0 replies; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-22 15:36 UTC (permalink / raw)
  To: Paolo Abeni, David Wei
  Cc: Jens Axboe, Jakub Kicinski, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, io-uring, netdev

On 10/22/24 08:47, Paolo Abeni wrote:
> Hi,
> 
> On 10/21/24 19:16, Pavel Begunkov wrote:
>> On 10/21/24 15:25, Paolo Abeni wrote:
>>> On 10/16/24 20:52, David Wei wrote:
> 
> [...]
>>>> +	napi = napi_by_id(napi_id);
>>>> +	if (!napi)
>>>> +		return;
>>>> +
>>>> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
>>>> +		preempt_disable();
>>>> +
>>>> +	for (;;) {
>>>> +		local_bh_disable();
>>>> +
>>>> +		if (napi_state_start_busy_polling(napi, 0)) {
>>>> +			have_poll_lock = netpoll_poll_lock(napi);
>>>> +			cb(cb_arg);
>>>> +			local_bh_enable();
>>>> +			busy_poll_stop(napi, have_poll_lock, 0, 1);
>>>> +			break;
>>>> +		}
>>>> +
>>>> +		local_bh_enable();
>>>> +		if (unlikely(need_resched()))
>>>> +			break;
>>>> +		cpu_relax();
>>>
>>> Don't you need a 'loop_end' condition here?
>>
>> As you mentioned in 14/15, it can indeed spin for long and is bound only
>> by need_resched(). Do you think it's reasonable to wait for it without a
>> time limit with NAPI_STATE_PREFER_BUSY_POLL? softirq should yield napi
>> after it exhausts the budget, it should limit it well enough, what do
>> you think?
>>
>> The only ugly part is that I don't want it to mess with the
>> NAPI_F_PREFER_BUSY_POLL in busy_poll_stop()
>>
>> busy_poll_stop() {
>> 	...
>> 	clear_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state);
>> 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
>> 		napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
>> 		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
>> 		if (napi->defer_hard_irqs_count && timeout) {
>> 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
>> 			skip_schedule = true;
>> 		}
>> 	}
>> }
> 
> Why do you want to avoid such a branch? It will only take action when
> user-space explicitly wants to leverage the hrtimer to check for
> incoming packets. In such a case, I think the kernel should try to respect
> the user configuration.

It should be fine to pass the flag here, it just doesn't feel right.
napi_execute() is not interested in polling, but IIRC this chunk delays
the moment when softirq kicks in when there are no napi pollers. I.e.
IMO ideally it shouldn't affect napi polling timings...

>> Is it fine to set PREFER_BUSY_POLL but do the stop call without? E.g.
>>
>> set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
>> ...
>> busy_poll_stop(napi, flags=0);
> 
> My preference is for using NAPI_STATE_PREFER_BUSY_POLL consistently. It
> should ensure a reasonably low latency for napi_execute() and consistent
> infra behavior. Unless I'm missing some dangerous side effect ;)

... but let's just set it then. It only affects the zerocopy private
queue.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-16 18:52 ` [PATCH v6 02/15] net: generalise net_iov chunk owners David Wei
@ 2024-10-23  7:20   ` Christoph Hellwig
  2024-10-23 14:34     ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2024-10-23  7:20 UTC (permalink / raw)
  To: David Wei
  Cc: io-uring, netdev, Jens Axboe, Pavel Begunkov, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On Wed, Oct 16, 2024 at 11:52:39AM -0700, David Wei wrote:
> From: Pavel Begunkov <[email protected]>
> 
> Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
> which serves as a useful abstraction to share data and provide a
> context. However, it's too devmem specific, and we want to reuse it for
> other memory providers, and for that we need to decouple net_iov from
> devmem. Make net_iov to point to a new base structure called
> net_iov_area, which dmabuf_genpool_chunk_owner extends.

We've been there before.  Instead of reinventing your own memory
provider please enhance dmabufs for your use case.  We don't really
need to build memory buffer abstraction over memory buffer abstraction.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-23  7:20   ` Christoph Hellwig
@ 2024-10-23 14:34     ` Pavel Begunkov
  2024-10-24  9:23       ` Christoph Hellwig
  0 siblings, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-23 14:34 UTC (permalink / raw)
  To: Christoph Hellwig, David Wei
  Cc: io-uring, netdev, Jens Axboe, Jakub Kicinski, Paolo Abeni,
	David S. Miller, Eric Dumazet, Jesper Dangaard Brouer,
	David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato,
	Pedro Tammela, Sumit Semwal, Christian König, linux-media,
	dri-devel, linaro-mm-sig

On 10/23/24 08:20, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 11:52:39AM -0700, David Wei wrote:
>> From: Pavel Begunkov <[email protected]>
>>
>> Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
>> which serves as a useful abstraction to share data and provide a
>> context. However, it's too devmem specific, and we want to reuse it for
>> other memory providers, and for that we need to decouple net_iov from
>> devmem. Make net_iov to point to a new base structure called
>> net_iov_area, which dmabuf_genpool_chunk_owner extends.
> 
> We've been there before.  Instead of reinventing your own memory
> provider please enhance dmabufs for your use case.  We don't really
> need to build memory buffer abstraction over memory buffer abstraction.

It doesn't care much what kind of memory it is, nor is it important
to the internals how it's imported; it's user addresses -> pages for
the user's convenience. All the net_iov setup code is in the page pool
core code. What it does, however, is implement the user API, so there
is no real relevance to dmabufs here.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-23 14:34     ` Pavel Begunkov
@ 2024-10-24  9:23       ` Christoph Hellwig
  2024-10-24 14:23         ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2024-10-24  9:23 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Christoph Hellwig, David Wei, io-uring, netdev, Jens Axboe,
	Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On Wed, Oct 23, 2024 at 03:34:53PM +0100, Pavel Begunkov wrote:
> It doesn't care much what kind of memory it is, nor is it important
> to the internals how it's imported; it's user addresses -> pages for
> the user's convenience. All the net_iov setup code is in the page pool
> core code. What it does, however, is implement the user API, so

That's not what this series does.  It adds the new memory_provider_ops
set of hooks, with one implementation for dmabufs, and one for
io_uring zero copy.

So you are precluding zero copy RX into anything but your magic
io_uring buffers, and using an odd abstraction for that.

The right way would be to support zero copy RX into every
designated dmabuf, and make io_uring work with udmabuf or if
absolutely needed its own kind of dmabuf.  Instead we create
a maze of incompatible abstractions here.  The use case of e.g.
doing zero copy receive into an NVMe CMB using PCIe P2P transactions
is anything but made up, so this does create a problem.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-24  9:23       ` Christoph Hellwig
@ 2024-10-24 14:23         ` Pavel Begunkov
  2024-10-24 16:06           ` Christoph Hellwig
  0 siblings, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-24 14:23 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: David Wei, io-uring, netdev, Jens Axboe, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On 10/24/24 10:23, Christoph Hellwig wrote:
> On Wed, Oct 23, 2024 at 03:34:53PM +0100, Pavel Begunkov wrote:
>> It doesn't care much what kind of memory it is, nor it's important
>> for internals how it's imported, it's user addresses -> pages for
>> user convenience sake. All the net_iov setup code is in the page pool
>> core code. What it does, however, is implementing the user API, so
> 
> That's not what this series does.  It adds the new memory_provider_ops
> set of hooks, with one implementation for dmabufs, and one for
> io_uring zero copy.

First, it's not a _new_ abstraction over a buffer as you called it
before, the abstraction (net_iov) is already merged.

Second, you mention devmem TCP, and it's not just a page pool with
"dmabufs", it's a user API to use it and other memory agnostic
allocation logic. And yes, dmabufs there is the least technically
important part. Just having a dmabuf handle solves absolutely nothing.

> So you are precluding zero copy RX into anything but your magic
> io_uring buffers, and using an odd abstraction for that.

Right io_uring zero copy RX API expects transfer to happen into io_uring
controlled buffers, and that's the entire idea. Buffers that are based
on an existing network specific abstraction, which are not restricted to
pages or anything specific in the long run, but the flow of which from
net stack to user and back is controlled by io_uring. If you worry about
abuse, io_uring can't even sanely initialise those buffers itself and
therefore asking the page pool code to do that.

> The right way would be to support zero copy RX into every
> designated dmabuf, and make io_uring work with udmabuf or if

I have no idea what you mean, but shoving dmabufs into every single
place regardless of whether it makes sense or not is hardly a good
way forward.

> absolutely needed its own kind of dmabuf.  Instead we create

I'm even more confused how that would help. The user API has to
be implemented and adding a new dmabuf gives nothing, not even
mentioning it's not clear what semantics of that beast is
supposed to be.

> a maze of incompatible abstractions here.  The use case of e.g.
> doing zero copy receive into an NVMe CMB using PCIe P2P transactions
> is anything but made up, so this does create a problem.

That's some kind of confusion again: there is no reason why
it can't be supported, transparently to the non-setup code at
that. It's left out, like other bits, for further iterations to
keep this set simpler.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-24 14:23         ` Pavel Begunkov
@ 2024-10-24 16:06           ` Christoph Hellwig
  2024-10-24 16:40             ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2024-10-24 16:06 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Christoph Hellwig, David Wei, io-uring, netdev, Jens Axboe,
	Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
> > That's not what this series does.  It adds the new memory_provider_ops
> > set of hooks, with one implementation for dmabufs, and one for
> > io_uring zero copy.
> 
> First, it's not a _new_ abstraction over a buffer as you called it
> before, the abstraction (net_iov) is already merged.

Umm, it is a new ops vector.

> Second, you mention devmem TCP, and it's not just a page pool with
> "dmabufs", it's a user API to use it and other memory agnostic
> allocation logic. And yes, dmabufs there is the least technically
> important part. Just having a dmabuf handle solves absolutely nothing.

It solves a lot, because it provides a proper abstraction.

> > So you are precluding zero copy RX into anything but your magic
> > io_uring buffers, and using an odd abstraction for that.
> 
> Right io_uring zero copy RX API expects transfer to happen into io_uring
> controlled buffers, and that's the entire idea. Buffers that are based
> on an existing network specific abstraction, which are not restricted to
> pages or anything specific in the long run, but the flow of which from
> net stack to user and back is controlled by io_uring. If you worry about
> abuse, io_uring can't even sanely initialise those buffers itself and
> therefore asking the page pool code to do that.

No, I worry about trying to use io_uring for no good reason. This
precludes in-kernel uses which would be extremely useful for
network storage drivers, and it precludes device memory of all
kinds.

> I'm even more confused how that would help. The user API has to
> be implemented and adding a new dmabuf gives nothing, not even
> mentioning it's not clear what semantics of that beast is
> supposed to be.
>

The dma-buf maintainers already explained to you last time
that there is absolutely no need to use the dmabuf UAPI, you
can use dma-bufs through in-kernel interfaces just fine.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-24 16:06           ` Christoph Hellwig
@ 2024-10-24 16:40             ` Pavel Begunkov
  2024-10-28 12:11               ` Christoph Hellwig
  0 siblings, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-24 16:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: David Wei, io-uring, netdev, Jens Axboe, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On 10/24/24 17:06, Christoph Hellwig wrote:
> On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
>>> That's not what this series does.  It adds the new memory_provider_ops
>>> set of hooks, with once implementation for dmabufs, and one for
>>> io_uring zero copy.
>>
>> First, it's not a _new_ abstraction over a buffer as you called it
>> before, the abstraction (net_iov) is already merged.
> 
> Umm, it is a new ops vector.

I don't understand what you mean. Callback?

>> Second, you mention devmem TCP, and it's not just a page pool with
>> "dmabufs", it's a user API to use it and other memory agnostic
>> allocation logic. And yes, dmabufs there is the least technically
>> important part. Just having a dmabuf handle solves absolutely nothing.
> 
> It solves a lot, because it provides a proper abstraction.

Then please go ahead and take a look at the patchset in question
and see how much of the dmabuf handling is there compared to pure
networking changes. The point that it's a new set of APIs and lots
of changes not related directly to dmabufs stands. dmabufs are useful
as an abstraction there, but it's a very long stretch to say
that the series is all about it.

> 
>>> So you are precluding zero copy RX into anything but your magic
>>> io_uring buffers, and using an odd abstraction for that.
>>
>> Right io_uring zero copy RX API expects transfer to happen into io_uring
>> controlled buffers, and that's the entire idea. Buffers that are based
>> on an existing network specific abstraction, which are not restricted to
>> pages or anything specific in the long run, but the flow of which from
>> net stack to user and back is controlled by io_uring. If you worry about
>> abuse, io_uring can't even sanely initialise those buffers itself and
>> therefore asking the page pool code to do that.
> 
> No, I worry about trying to use io_uring for no good reason. This

It sounds like the argument is that you just don't want any
io_uring APIs; I don't think I'd be able to help you with
that.

> precludes in-kernel uses which would be extremely useful for

Uses of what? devmem TCP is merged, I'm not removing it,
and the net_iov abstraction is in there, which can potentially
be reused by other in-kernel users if that'd even make sense.

> network storage drivers, and it precludes device memory of all
> kinds.

You can't use page pools to allocate for a storage device, it's
a network specific allocator. You can get a dmabuf around that
device's memory and zero copy into it, but there is no problem
with that. Either use devmem TCP or wait until io_uring adds
support for dmabufs, which is, again, trivial.

>> I'm even more confused how that would help. The user API has to
>> be implemented and adding a new dmabuf gives nothing, not even
>> mentioning it's not clear what semantics of that beast is
>> supposed to be.
>>
> 
> The dma-buf maintainers already explained to you last time
> that there is absolutely no need to use the dmabuf UAPI, you
> can use dma-bufs through in-kernel interfaces just fine.

You can, even though it's not needed and I don't see how
it'd be useful, but you're missing the point. A new dmabuf
implementation doesn't implement the uapi we need, nor does
it help to talk to the net layer.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-24 16:40             ` Pavel Begunkov
@ 2024-10-28 12:11               ` Christoph Hellwig
  2024-10-29 16:35                 ` Pavel Begunkov
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2024-10-28 12:11 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Christoph Hellwig, David Wei, io-uring, netdev, Jens Axboe,
	Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On Thu, Oct 24, 2024 at 05:40:02PM +0100, Pavel Begunkov wrote:
> On 10/24/24 17:06, Christoph Hellwig wrote:
> > On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
> > > > That's not what this series does.  It adds the new memory_provider_ops
> > > > set of hooks, with one implementation for dmabufs, and one for
> > > > io_uring zero copy.
> > > 
> > > First, it's not a _new_ abstraction over a buffer as you called it
> > > before, the abstraction (net_iov) is already merged.
> > 
> > Umm, it is a new ops vector.
> 
> I don't understand what you mean. Callback?

struct memory_provider_ops.  It's a method table or ops vector, no
callbacks involved.

> Then please go ahead and take a look at the patchset in question
> and see how much of the dmabuf handling is there compared to pure
> networking changes. The point that it's a new set of APIs and lots
> of changes not related directly to dmabufs stands. dmabufs are useful
> as an abstraction there, but it's a very long stretch to say
> that the series is all about it.

I did take a look, that's why I replied.

> > > on an existing network specific abstraction, which are not restricted to
> > > pages or anything specific in the long run, but the flow of which from
> > > net stack to user and back is controlled by io_uring. If you worry about
> > > abuse, io_uring can't even sanely initialise those buffers itself and
> > > therefore asking the page pool code to do that.
> > 
> > No, I worry about trying to use io_uring for no good reason. This
> 
> It sounds like the argument is that you just don't want any
> io_uring APIs; I don't think I'd be able to help you with
> that.

No, that's completely misinterpreting what I'm saying.  Of course an
io_uring API is fine.  But tying low-level implementation details to
it is not.

> > precludes in-kernel uses which would be extremely useful for
> 
> Uses of what? devmem TCP is merged, I'm not removing it,
> and the net_iov abstraction is in there, which can potentially
> be reused by other in-kernel users if that'd even make sense.

How, when you are hardcoding io_uring memory registrations instead
of making them a generic dmabuf?  Which btw would also really help
with pre-registering the memory with the iommu to get good performance
in IOMMU-enabled setups.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-28 12:11               ` Christoph Hellwig
@ 2024-10-29 16:35                 ` Pavel Begunkov
  2024-10-30 14:57                   ` Christoph Hellwig
  0 siblings, 1 reply; 44+ messages in thread
From: Pavel Begunkov @ 2024-10-29 16:35 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: David Wei, io-uring, netdev, Jens Axboe, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On 10/28/24 12:11, Christoph Hellwig wrote:
> On Thu, Oct 24, 2024 at 05:40:02PM +0100, Pavel Begunkov wrote:
>> On 10/24/24 17:06, Christoph Hellwig wrote:
>>> On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
>>>>> That's not what this series does.  It adds the new memory_provider_ops
>>>>> set of hooks, with one implementation for dmabufs, and one for
>>>>> io_uring zero copy.
>>>>
>>>> First, it's not a _new_ abstraction over a buffer as you called it
>>>> before, the abstraction (net_iov) is already merged.
>>>
>>> Umm, it is a new ops vector.
>>
>> I don't understand what you mean. Callback?
> 
> struct memory_provider_ops.  It's a method table or ops vector, no
> callbacks involved.

I see, the reply is about your phrase about additional memory
abstractions:

"... don't really need to build memory buffer abstraction over
memory buffer abstraction."

>> Then please go ahead and take a look at the patchset in question
>> and see how much of the dmabuf handling is there compared to pure
>> networking changes. The point that it's a new set of APIs and lots
>> of changes not related directly to dmabufs stands. dmabufs are useful
>> as an abstraction there, but it's a very long stretch to say
>> that the series is all about it.
> 
> I did take a look, that's why I replied.
> 
>>>> on an existing network specific abstraction, which are not restricted to
>>>> pages or anything specific in the long run, but the flow of which from
>>>> net stack to user and back is controlled by io_uring. If you worry about
>>>> abuse, io_uring can't even sanely initialise those buffers itself and
>>>> therefore asking the page pool code to do that.
>>>
>>> No, I worry about trying to use io_uring for no good reason. This
>>
>> It sounds like the argument is that you just don't want any
>> io_uring APIs; I don't think I'd be able to help you with
>> that.
> 
> No, that's completely misinterpreting what I'm saying.  Of course an
> io_uring API is fine.  But tying low-level implementation details to
> it is not.

It works with low level concepts, i.e. private NIC queues, but it does
that through well established abstractions (page pool) that were already
extended for such cases. There is no going directly into a driver /
hardware and hard coding queue allocation, memory injection or anything
similar. The user API has to embrace the hardware limitations, right;
there is no way around it without completely changing the approach and
its performance and/or applicability. And queues as first class citizens
are not a new concept in general.

>>> precludes in-kernel uses which would be extremely useful for
>>
>> Uses of what? devmem TCP is merged, I'm not removing it,
>> and the net_iov abstraction is in there, which can potentially
>> be reused by other in-kernel users if that'd even make sense.
> 
> How, when you are hardcoding io_uring memory registrations instead
> of making them a generic dmabuf?  Which btw would also really help

If you mean internals, making up a dmabuf that has never existed in the
picture in the first place is not cleaner or easier in any way. If that
changes, e.g. there is more code to reuse in the future, we can unify it
then.

If that's about the user API, you've just mentioned that it can be
pages / user pointers. As to why it goes through io_uring, I explained
it before, but in short: it gives a better API for io_uring users; we
avoid creating yet another file (a netlink socket) and keeping it
around, so we don't need to synchronise with the nl socket and/or
try to steal memory from it; and the devmem API is also too
monolithic for such purposes, so even that would need to change, i.e.
splitting queue and memory registration.

> with pre-registering the memory with the iommu to get good performance
> in IOMMU-enabled setups.

The page pool already does that just like it handles the normal
path without providers.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 02/15] net: generalise net_iov chunk owners
  2024-10-29 16:35                 ` Pavel Begunkov
@ 2024-10-30 14:57                   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2024-10-30 14:57 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Christoph Hellwig, David Wei, io-uring, netdev, Jens Axboe,
	Jakub Kicinski, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jesper Dangaard Brouer, David Ahern, Mina Almasry,
	Stanislav Fomichev, Joe Damato, Pedro Tammela, Sumit Semwal,
	Christian König, linux-media, dri-devel, linaro-mm-sig

On Tue, Oct 29, 2024 at 04:35:16PM +0000, Pavel Begunkov wrote:
> I see, the reply is about your phrase about additional memory
> abstractions:
> 
> "... don't really need to build memory buffer abstraction over
> memory buffer abstraction."

Yes, over the existing memory buffer abstraction (dma_buf).

> If you mean internals, making up a dmabuf that has never existed in the
> picture in the first place is not cleaner or easier in any way. If that
> changes, e.g. there is more code to reuse in the future, we can unify it
> then.

I'm not sure what "making up" means here, they are all made up :)

> > with pre-registering the memry with the iommu to get good performance
> > in IOMMU-enabled setups.
> 
> The page pool already does that just like it handles the normal
> path without providers.

In which case it basically is a dma-buf.  If you'd expose it as such,
we could actually use it to communicate between subsystems in the
kernel.


^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2024-10-30 14:57 UTC | newest]

Thread overview: 44+ messages
2024-10-16 18:52 [PATCH v6 00/15] io_uring zero copy rx David Wei
2024-10-16 18:52 ` [PATCH v6 01/15] net: prefix devmem specific helpers David Wei
2024-10-16 18:52 ` [PATCH v6 02/15] net: generalise net_iov chunk owners David Wei
2024-10-23  7:20   ` Christoph Hellwig
2024-10-23 14:34     ` Pavel Begunkov
2024-10-24  9:23       ` Christoph Hellwig
2024-10-24 14:23         ` Pavel Begunkov
2024-10-24 16:06           ` Christoph Hellwig
2024-10-24 16:40             ` Pavel Begunkov
2024-10-28 12:11               ` Christoph Hellwig
2024-10-29 16:35                 ` Pavel Begunkov
2024-10-30 14:57                   ` Christoph Hellwig
2024-10-16 18:52 ` [PATCH v6 03/15] net: page_pool: create hooks for custom page providers David Wei
2024-10-16 18:52 ` [PATCH v6 04/15] net: prepare for non devmem TCP memory providers David Wei
2024-10-16 18:52 ` [PATCH v6 05/15] net: page_pool: add ->scrub mem provider callback David Wei
2024-10-16 18:52 ` [PATCH v6 06/15] net: page pool: add helper creating area from pages David Wei
2024-10-16 18:52 ` [PATCH v6 07/15] net: page_pool: introduce page_pool_mp_return_in_cache David Wei
2024-10-16 18:52 ` [PATCH v6 08/15] net: add helper executing custom callback from napi David Wei
2024-10-21 14:25   ` Paolo Abeni
2024-10-21 15:49     ` Jens Axboe
2024-10-21 16:34       ` Pavel Begunkov
2024-10-21 17:16     ` Pavel Begunkov
2024-10-22  7:47       ` Paolo Abeni
2024-10-22 15:36         ` Pavel Begunkov
2024-10-16 18:52 ` [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue David Wei
2024-10-21 15:33   ` Jens Axboe
2024-10-16 18:52 ` [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area David Wei
2024-10-21 15:35   ` Jens Axboe
2024-10-21 16:28     ` Pavel Begunkov
2024-10-21 16:29       ` Jens Axboe
2024-10-21 16:39         ` Pavel Begunkov
2024-10-16 18:52 ` [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider David Wei
2024-10-21 15:46   ` Jens Axboe
2024-10-16 18:52 ` [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request David Wei
2024-10-21 16:05   ` Jens Axboe
2024-10-16 18:52 ` [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue David Wei
2024-10-21 16:30   ` Jens Axboe
2024-10-16 18:52 ` [PATCH v6 14/15] io_uring/zcrx: add copy fallback David Wei
2024-10-21 14:40   ` Paolo Abeni
2024-10-21 18:31     ` David Wei
2024-10-22  7:48       ` Paolo Abeni
2024-10-16 18:52 ` [PATCH v6 15/15] io_uring/zcrx: throttle receive requests David Wei
2024-10-21 16:36   ` Jens Axboe
2024-10-21 15:27 ` [PATCH v6 00/15] io_uring zero copy rx Jens Axboe
