From: Pavel Begunkov <asml.silence@gmail.com>
To: io-uring@vger.kernel.org
Cc: asml.silence@gmail.com
Subject: [zcrx-next 06/10] io_uring/zcrx: unify allocation dma sync
Date: Sun, 17 Aug 2025 23:43:32 +0100 [thread overview]
Message-ID: <b0d53ed2e576134dda0a3bde1099c7e1edf96edd.1755467432.git.asml.silence@gmail.com> (raw)
In-Reply-To: <cover.1755467432.git.asml.silence@gmail.com>
First, move niov dma syncing during page pool allocation out of the
spinlocked sections, i.e. rq_lock for ring refilling and freelist_lock
for slow path allocation. Then gather it all into one common helper,
which also lets us check dma_dev_need_sync() only once per batch.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/zcrx.c | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index d8dd4624f8f8..555d4d9ff479 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -292,21 +292,6 @@ static int io_zcrx_map_area(struct io_zcrx_ifq *ifq, struct io_zcrx_area *area)
return ret;
}
-static void io_zcrx_sync_for_device(const struct page_pool *pool,
- struct net_iov *niov)
-{
-#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
- dma_addr_t dma_addr;
-
- if (!dma_dev_need_sync(pool->p.dev))
- return;
-
- dma_addr = page_pool_get_dma_addr_netmem(net_iov_to_netmem(niov));
- __dma_sync_single_for_device(pool->p.dev, dma_addr + pool->p.offset,
- PAGE_SIZE, pool->p.dma_dir);
-#endif
-}
-
#define IO_RQ_MAX_ENTRIES 32768
#define IO_SKBS_PER_CALL_LIMIT 20
@@ -791,7 +776,6 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
continue;
}
- io_zcrx_sync_for_device(pp, niov);
net_mp_netmem_place_in_cache(pp, netmem);
} while (--entries);
@@ -806,15 +790,31 @@ static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq)
spin_lock_bh(&area->freelist_lock);
while (area->free_count && pp->alloc.count < PP_ALLOC_CACHE_REFILL) {
struct net_iov *niov = __io_zcrx_get_free_niov(area);
- netmem_ref netmem = net_iov_to_netmem(niov);
net_mp_niov_set_page_pool(pp, niov);
- io_zcrx_sync_for_device(pp, niov);
- net_mp_netmem_place_in_cache(pp, netmem);
+ net_mp_netmem_place_in_cache(pp, net_iov_to_netmem(niov));
}
spin_unlock_bh(&area->freelist_lock);
}
+static void io_sync_allocated_niovs(struct page_pool *pp)
+{
+#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
+ int i;
+
+ if (!dma_dev_need_sync(pp->p.dev))
+ return;
+
+ for (i = 0; i < pp->alloc.count; i++) {
+ netmem_ref netmem = pp->alloc.cache[i];
+ dma_addr_t dma_addr = page_pool_get_dma_addr_netmem(netmem);
+
+ __dma_sync_single_for_device(pp->p.dev, dma_addr + pp->p.offset,
+ PAGE_SIZE, pp->p.dma_dir);
+ }
+#endif
+}
+
static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp)
{
struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp);
@@ -831,6 +831,7 @@ static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp)
if (!pp->alloc.count)
return 0;
out_return:
+ io_sync_allocated_niovs(pp);
return pp->alloc.cache[--pp->alloc.count];
}
--
2.49.0
2025-08-17 22:43 [zcrx-next 00/10] next zcrx cleanups Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 01/10] io_uring/zcrx: replace memchar_inv with is_zero Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 02/10] io_uring/zcrx: use page_pool_unref_and_test() Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 03/10] io_uring/zcrx: remove extra io_zcrx_drop_netdev Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 04/10] io_uring/zcrx: rename dma lock Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 05/10] io_uring/zcrx: protect netdev with pp_lock Pavel Begunkov
2025-08-17 22:43 ` Pavel Begunkov [this message]
2025-08-17 22:43 ` [zcrx-next 07/10] io_uring/zcrx: reduce netmem scope in refill Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 08/10] io_uring/zcrx: use guards for the refill lock Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 09/10] io_uring/zcrx: don't adjust free cache space Pavel Begunkov
2025-08-17 22:43 ` [zcrx-next 10/10] io_uring/zcrx: rely on cache size truncation on refill Pavel Begunkov
2025-08-20 18:20 ` [zcrx-next 00/10] next zcrx cleanups Jens Axboe