public inbox for io-uring@vger.kernel.org
From: Pavel Begunkov <asml.silence@gmail.com>
To: David Wei <dw@davidwei.uk>,
	io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH v4 7/8] io_uring/zcrx: add refcount to ifq and remove ifq->ctx
Date: Wed, 29 Oct 2025 15:22:52 +0000
Message-ID: <810d45da-7d60-460a-a250-eacf07f3d005@gmail.com>
In-Reply-To: <20251028174639.1244592-8-dw@davidwei.uk>

On 10/28/25 17:46, David Wei wrote:
> Add a refcount to struct io_zcrx_ifq to track the number of rings that
> share it. For now, this is only ever 1, i.e. the ifq is not shared.
> 
> This refcount replaces the ref that the ifq holds on ctx->refs via the
> page pool memory provider. That ref was used to keep the ifq around
> until the ring ctx is freed, i.e. until ctx->refs falls to 0. But with
> the ifq now refcounted directly by the ring, and ifq->ctx removed, this
> is no longer necessary.
> 
> Since ifqs no longer hold refs to the ring ctx, there is no need to
> split ifq cleanup into two stages: io_shutdown_zcrx_ifqs() in
> io_ring_exit_work() while waiting for ctx->refs to drop to 0, and
> io_unregister_zcrx_ifqs() afterwards. Remove io_shutdown_zcrx_ifqs().
> 
> So an ifq now behaves like a normal refcounted object; the last ref from
> a ring will free the ifq.
> 
> Signed-off-by: David Wei <dw@davidwei.uk>
> ---
>   io_uring/io_uring.c |  5 -----
>   io_uring/zcrx.c     | 24 +++++-------------------
>   io_uring/zcrx.h     |  6 +-----
>   3 files changed, 6 insertions(+), 29 deletions(-)
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 7d42748774f8..8af5efda9c11 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -3042,11 +3042,6 @@ static __cold void io_ring_exit_work(struct work_struct *work)
>   			io_cqring_overflow_kill(ctx);
>   			mutex_unlock(&ctx->uring_lock);
>   		}
> -		if (!xa_empty(&ctx->zcrx_ctxs)) {
> -			mutex_lock(&ctx->uring_lock);
> -			io_shutdown_zcrx_ifqs(ctx);
> -			mutex_unlock(&ctx->uring_lock);
> -		}
>   
>   		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
>   			io_move_task_work_from_local(ctx);
> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> index b3f3d55d2f63..6324dfa61ce0 100644
> --- a/io_uring/zcrx.c
> +++ b/io_uring/zcrx.c
> @@ -479,7 +479,6 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
>   		return NULL;
>   
>   	ifq->if_rxq = -1;
> -	ifq->ctx = ctx;
>   	spin_lock_init(&ifq->rq_lock);
>   	mutex_init(&ifq->pp_lock);
>   	return ifq;
> @@ -592,6 +591,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
>   	ifq = io_zcrx_ifq_alloc(ctx);
>   	if (!ifq)
>   		return -ENOMEM;
> +	refcount_set(&ifq->refs, 1);
>   	if (ctx->user) {
>   		get_uid(ctx->user);
>   		ifq->user = ctx->user;
> @@ -714,19 +714,6 @@ static void io_zcrx_scrub(struct io_zcrx_ifq *ifq)
>   	}
>   }
>   
> -void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
> -{
> -	struct io_zcrx_ifq *ifq;
> -	unsigned long index;
> -
> -	lockdep_assert_held(&ctx->uring_lock);
> -
> -	xa_for_each(&ctx->zcrx_ctxs, index, ifq) {
> -		io_zcrx_scrub(ifq);
> -		io_close_queue(ifq);
> -	}
> -}
> -
>   void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
>   {
>   	struct io_zcrx_ifq *ifq;
> @@ -743,7 +730,10 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
>   		}
>   		if (!ifq)
>   			break;
> -		io_zcrx_ifq_free(ifq);
> +		if (refcount_dec_and_test(&ifq->refs)) {
> +			io_zcrx_scrub(ifq);
> +			io_zcrx_ifq_free(ifq);
> +		}
>   	}
>   
>   	xa_destroy(&ctx->zcrx_ctxs);
> @@ -894,15 +884,11 @@ static int io_pp_zc_init(struct page_pool *pp)
>   	if (ret)
>   		return ret;
>   
> -	percpu_ref_get(&ifq->ctx->refs);
>   	return 0;

This should instead take a reference on the ifq here, e.g. refcount_inc().
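
Something like the below perhaps (untested sketch; the earlier part of
io_pp_zc_init() stays as in this patch):

	if (ret)
		return ret;

	/* the pp keeps the ifq alive for as long as the page pool exists */
	refcount_inc(&ifq->refs);
	return 0;
}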

>   }
>   
>   static void io_pp_zc_destroy(struct page_pool *pp)
>   {
> -	struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp);
> -
> -	percpu_ref_put(&ifq->ctx->refs);

And this wants refcount_dec_and_test() plus destroying the ifq on the
last reference. Otherwise, it seems like nothing protects the ifq from
going away under the pp.
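
I.e. roughly (untested; whether any further teardown belongs on this
path is a separate question, the point is that the put has to pair
with the get taken in io_pp_zc_init()):

static void io_pp_zc_destroy(struct page_pool *pp)
{
	struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp);

	/* drop the pp's ref; the last one frees the ifq */
	if (refcount_dec_and_test(&ifq->refs))
		io_zcrx_ifq_free(ifq);
}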

-- 
Pavel Begunkov



Thread overview: 12+ messages
2025-10-28 17:46 [PATCH v4 0/8] io_uring zcrx ifq sharing David Wei
2025-10-28 17:46 ` [PATCH v4 1/8] io_uring/memmap: remove unneeded io_ring_ctx arg David Wei
2025-10-28 17:46 ` [PATCH v4 2/8] io_uring/memmap: refactor io_free_region() to take user_struct param David Wei
2025-10-28 17:46 ` [PATCH v4 3/8] io_uring/rsrc: refactor io_{un}account_mem() to take {user,mm}_struct param David Wei
2025-10-28 17:46 ` [PATCH v4 4/8] io_uring/zcrx: add io_zcrx_ifq arg to io_zcrx_free_area() David Wei
2025-10-28 17:46 ` [PATCH v4 5/8] io_uring/zcrx: add user_struct and mm_struct to io_zcrx_ifq David Wei
2025-10-28 17:46 ` [PATCH v4 6/8] io_uring/zcrx: move io_unregister_zcrx_ifqs() down David Wei
2025-10-28 17:46 ` [PATCH v4 7/8] io_uring/zcrx: add refcount to ifq and remove ifq->ctx David Wei
2025-10-29 15:22   ` Pavel Begunkov [this message]
2025-10-29 16:16     ` Pavel Begunkov
2025-10-30 15:24       ` Pavel Begunkov
2025-10-28 17:46 ` [PATCH v4 8/8] io_uring/zcrx: share an ifq between rings David Wei
