From: Pavel Begunkov <asml.silence@gmail.com>
To: io-uring@vger.kernel.org
Cc: asml.silence@gmail.com, axboe@kernel.dk, netdev@vger.kernel.org
Subject: [PATCH review-only 4/4] io_uring/zcrx: implement device-less mode for zcrx
Date: Tue, 17 Feb 2026 10:58:55 +0000
Message-ID: <4ae848e20e008a2496c4dd2710c50b15035a92d0.1771325198.git.asml.silence@gmail.com>
In-Reply-To: <cover.1771325198.git.asml.silence@gmail.com>
Allow creating a zcrx instance without attaching it to a net device.
All data will be copied through the fallback path. The user is also
expected to use ZCRX_CTRL_FLUSH_RQ to handle overflows, as it normally
should even with a netdev, but it becomes even more important here
since there is likely no one to automatically pick up buffers.

Apart from that, it follows the zcrx uapi for the I/O path, and is
useful for testing, experimentation, and potentially for the copy
receive path in the future if improved.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
include/uapi/linux/io_uring/zcrx.h | 9 ++++++-
io_uring/zcrx.c | 41 ++++++++++++++++++++----------
io_uring/zcrx.h | 2 +-
3 files changed, 36 insertions(+), 16 deletions(-)
diff --git a/include/uapi/linux/io_uring/zcrx.h b/include/uapi/linux/io_uring/zcrx.h
index 3163a4b8aeb0..103d65e690eb 100644
--- a/include/uapi/linux/io_uring/zcrx.h
+++ b/include/uapi/linux/io_uring/zcrx.h
@@ -49,7 +49,14 @@ struct io_uring_zcrx_area_reg {
};
enum zcrx_reg_flags {
- ZCRX_REG_IMPORT = 1,
+ ZCRX_REG_IMPORT = 1,
+
+ /*
+ * Register a zcrx instance without a net device. All data will be
+ * copied. The refill queue entries might not be automatically
+ * consumed and need to be flushed, see ZCRX_CTRL_FLUSH_RQ.
+ */
+ ZCRX_REG_NODEV = 2,
};
enum zcrx_features {
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 4db3df6d7658..3d377523ff7e 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -127,10 +127,10 @@ static int io_import_dmabuf(struct io_zcrx_ifq *ifq,
int dmabuf_fd = area_reg->dmabuf_fd;
int i, ret;
+ if (!ifq->dev)
+ return -EINVAL;
if (off)
return -EINVAL;
- if (WARN_ON_ONCE(!ifq->dev))
- return -EFAULT;
if (!IS_ENABLED(CONFIG_DMA_SHARED_BUFFER))
return -EINVAL;
@@ -211,11 +211,13 @@ static int io_import_umem(struct io_zcrx_ifq *ifq,
if (ret)
goto out_err;
- ret = dma_map_sgtable(ifq->dev, &mem->page_sg_table,
- DMA_FROM_DEVICE, IO_DMA_ATTR);
- if (ret < 0)
- goto out_err;
- mapped = true;
+ if (ifq->dev) {
+ ret = dma_map_sgtable(ifq->dev, &mem->page_sg_table,
+ DMA_FROM_DEVICE, IO_DMA_ATTR);
+ if (ret < 0)
+ goto out_err;
+ mapped = true;
+ }
mem->account_pages = io_count_account_pages(pages, nr_pages);
ret = io_account_mem(ifq->user, ifq->mm_account, mem->account_pages);
@@ -446,7 +448,8 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
ret = io_import_area(ifq, &area->mem, area_reg);
if (ret)
goto err;
- area->is_mapped = true;
+ if (ifq->dev)
+ area->is_mapped = true;
if (buf_size_shift > io_area_max_shift(&area->mem)) {
ret = -ERANGE;
@@ -482,9 +485,11 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
niov->type = NET_IOV_IOURING;
}
- ret = io_populate_area_dma(ifq, area);
- if (ret)
- goto err;
+ if (ifq->dev) {
+ ret = io_populate_area_dma(ifq, area);
+ if (ret)
+ goto err;
+ }
area->free_count = nr_iovs;
/* we're only supporting one area per ifq for now */
@@ -816,6 +821,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
return -EFAULT;
if (reg.if_rxq == -1 || !reg.rq_entries)
return -EINVAL;
+ if ((reg.if_rxq || reg.if_idx) && (reg.flags & ZCRX_REG_NODEV))
+ return -EINVAL;
if (reg.rq_entries > IO_RQ_MAX_ENTRIES) {
if (!(ctx->flags & IORING_SETUP_CLAMP))
return -EINVAL;
@@ -851,9 +858,15 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
if (ret)
goto err;
- ret = zcrx_register_netdev(ifq, &reg, &area);
- if (ret)
- goto err;
+ if (!(reg.flags & ZCRX_REG_NODEV)) {
+ ret = zcrx_register_netdev(ifq, &reg, &area);
+ if (ret)
+ goto err;
+ } else {
+ ret = io_zcrx_create_area(ifq, &area, &reg);
+ if (ret)
+ goto err;
+ }
reg.zcrx_id = id;
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 0ddcf0ee8861..db427f4a55b6 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -8,7 +8,7 @@
#include <net/page_pool/types.h>
#include <net/net_trackers.h>
-#define ZCRX_SUPPORTED_REG_FLAGS (ZCRX_REG_IMPORT)
+#define ZCRX_SUPPORTED_REG_FLAGS (ZCRX_REG_IMPORT | ZCRX_REG_NODEV)
#define ZCRX_FEATURES (ZCRX_FEATURE_RX_PAGE_SIZE)
struct io_zcrx_mem {
--
2.52.0
2026-02-17 10:58 [RFC io_uring review-only 0/4] zcrx mapping cleanups and device-less instances Pavel Begunkov
2026-02-17 10:58 ` [PATCH review-only 1/4] io_uring/zcrx: fully clean area on error in io_import_umem() Pavel Begunkov
2026-02-17 10:58 ` [PATCH review-only 2/4] io_uring/zcrx: always dma map in advance Pavel Begunkov
2026-02-17 10:58 ` [PATCH review-only 3/4] io_uring/zcrx: extract netdev+area init into a helper Pavel Begunkov
2026-02-17 10:58 ` Pavel Begunkov [this message]
2026-02-17 16:12 ` [RFC io_uring review-only 0/4] zcrx mapping cleanups and device-less instances Jens Axboe