From: David Wei <[email protected]>
To: [email protected], [email protected]
Cc: Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
Jakub Kicinski <[email protected]>, Paolo Abeni <[email protected]>,
"David S. Miller" <[email protected]>,
Eric Dumazet <[email protected]>,
Jesper Dangaard Brouer <[email protected]>,
David Ahern <[email protected]>,
Mina Almasry <[email protected]>,
Willem de Bruijn <[email protected]>,
Dragos Tatulea <[email protected]>
Subject: [PATCH 10/20] io_uring: delay ZC pool destruction
Date: Tue, 7 Nov 2023 13:40:35 -0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
At any point in time, a ZC buf may be held in one of several places:

* Rx queue
* Socket
* One of the ifq ringbufs
* Userspace

The ZC pool region and the pool itself therefore cannot be destroyed
until all bufs have been returned.

This patch changes ZC pool destruction to run as delayed work, waiting
for up to 10 seconds for bufs to be returned before unconditionally
destroying the pool.
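
For reference, below is a minimal, self-contained sketch of the same
deferred-teardown pattern. It is illustrative only and not part of this
patch; the my_pool type, its outstanding counter, and the function
names are all hypothetical:

#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_pool {
	atomic_t outstanding;		/* bufs still held elsewhere */
	unsigned long delay_end;	/* jiffies deadline for teardown */
	struct delayed_work destroy_work;
};

static void my_pool_destroy_work(struct work_struct *work)
{
	struct my_pool *pool = container_of(to_delayed_work(work),
					    struct my_pool, destroy_work);

	/* Bufs still outstanding and deadline not reached: retry in 1s. */
	if (atomic_read(&pool->outstanding) &&
	    time_before(jiffies, pool->delay_end)) {
		schedule_delayed_work(&pool->destroy_work, HZ);
		return;
	}

	/* Everything was returned, or we gave up waiting. */
	kfree(pool);
}

static void my_pool_destroy(struct my_pool *pool)
{
	pool->delay_end = jiffies + 10 * HZ;
	INIT_DELAYED_WORK(&pool->destroy_work, my_pool_destroy_work);
	schedule_delayed_work(&pool->destroy_work, 0);
}

The idea is the same as in the pool code below: the work item
reschedules itself every HZ while bufs are still out and the deadline
has not passed, and otherwise proceeds with the final teardown.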
Co-developed-by: Pavel Begunkov <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
---
io_uring/zc_rx.c | 51 ++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 45 insertions(+), 6 deletions(-)
diff --git a/io_uring/zc_rx.c b/io_uring/zc_rx.c
index 59f279486e9a..bebcd637c893 100644
--- a/io_uring/zc_rx.c
+++ b/io_uring/zc_rx.c
@@ -30,6 +30,10 @@ struct io_zc_rx_pool {
u32 cache_count;
u32 cache[POOL_CACHE_SIZE];
+ /* delayed destruction */
+ unsigned long delay_end;
+ struct delayed_work destroy_work;
+
/* freelist */
spinlock_t freelist_lock;
u32 free_count;
@@ -224,20 +228,57 @@ static int io_zc_rx_create_pool(struct io_ring_ctx *ctx,
return ret;
}
-static void io_zc_rx_destroy_pool(struct io_zc_rx_pool *pool)
+static void io_zc_rx_destroy_ifq(struct io_zc_rx_ifq *ifq)
+{
+ if (ifq->dev)
+ dev_put(ifq->dev);
+ io_free_rbuf_ring(ifq);
+ kfree(ifq);
+}
+
+static void io_zc_rx_destroy_pool_work(struct work_struct *work)
{
+ struct io_zc_rx_pool *pool = container_of(
+ to_delayed_work(work), struct io_zc_rx_pool, destroy_work);
struct device *dev = netdev2dev(pool->ifq->dev);
struct io_zc_rx_buf *buf;
+ int i, refc, count = 0;
- for (int i = 0; i < pool->nr_pages; i++) {
+ for (i = 0; i < pool->nr_pages; i++) {
buf = &pool->bufs[i];
+ refc = atomic_read(&buf->refcount) & IO_ZC_RX_KREF_MASK;
+ if (refc) {
+ if (time_before(jiffies, pool->delay_end)) {
+ schedule_delayed_work(&pool->destroy_work, HZ);
+ return;
+ }
+ count++;
+ }
+ }
+
+ if (count)
+ pr_debug("freeing pool with %d/%d outstanding pages\n",
+ count, pool->nr_pages);
+ for (i = 0; i < pool->nr_pages; i++) {
+ buf = &pool->bufs[i];
io_zc_rx_unmap_buf(dev, buf);
}
+
+ io_zc_rx_destroy_ifq(pool->ifq);
kvfree(pool->bufs);
kvfree(pool);
}
+static void io_zc_rx_destroy_pool(struct io_zc_rx_pool *pool)
+{
+ pool->delay_end = jiffies + HZ * 10;
+ INIT_DELAYED_WORK(&pool->destroy_work, io_zc_rx_destroy_pool_work);
+ schedule_delayed_work(&pool->destroy_work, 0);
+}
+
static struct io_zc_rx_ifq *io_zc_rx_ifq_alloc(struct io_ring_ctx *ctx)
{
struct io_zc_rx_ifq *ifq;
@@ -258,10 +299,8 @@ static void io_zc_rx_ifq_free(struct io_zc_rx_ifq *ifq)
io_close_zc_rxq(ifq);
if (ifq->pool)
io_zc_rx_destroy_pool(ifq->pool);
- if (ifq->dev)
- dev_put(ifq->dev);
- io_free_rbuf_ring(ifq);
- kfree(ifq);
+ else
+ io_zc_rx_destroy_ifq(ifq);
}
int io_register_zc_rx_ifq(struct io_ring_ctx *ctx,
--
2.39.3