From: Mina Almasry <[email protected]>
To: David Wei <[email protected]>
Cc: [email protected], [email protected],
	 Jens Axboe <[email protected]>,
	Pavel Begunkov <[email protected]>,
	 Jakub Kicinski <[email protected]>,
	Paolo Abeni <[email protected]>,
	 "David S. Miller" <[email protected]>,
	Eric Dumazet <[email protected]>,
	 Jesper Dangaard Brouer <[email protected]>,
	David Ahern <[email protected]>
Subject: Re: [RFC PATCH v3 19/20] net: page pool: generalise ppiov dma address get
Date: Thu, 21 Dec 2023 11:51:09 -0800
Message-ID: <CAHS8izOjeb-DMJNAgQaqv2dJaSHsLPSAeMPNWeViLhhHVouSnw@mail.gmail.com>
In-Reply-To: <[email protected]>

On Tue, Dec 19, 2023 at 1:04 PM David Wei <[email protected]> wrote:
>
> From: Pavel Begunkov <[email protected]>
>
> The io_uring pp memory provider doesn't have contiguous dma addresses;
> implement page_pool_iov_dma_addr() via callbacks.
>
> Note: it might be better to stash the dma address into struct page_pool_iov.
>

Stashing the dma address into struct page_pool_iov is the approach already
taken in v1 & RFC v5. I suspect you'd be able to take advantage of it when
you rebase.
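
For reference, a rough sketch of what the stashed-address variant could look
like -- the field name and placement below are purely illustrative, not taken
from either series:

	struct page_pool_iov {
		/* ... existing fields ... */
		dma_addr_t dma_addr;	/* stashed by the memory provider at setup */
	};

	static inline dma_addr_t
	page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
	{
		return ppiov->dma_addr;
	}

That would keep the getter a plain load instead of an indirect call through
pp->mp_ops on every lookup.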

> Signed-off-by: Pavel Begunkov <[email protected]>
> Signed-off-by: David Wei <[email protected]>
> ---
>  include/net/page_pool/helpers.h | 5 +----
>  include/net/page_pool/types.h   | 2 ++
>  io_uring/zc_rx.c                | 8 ++++++++
>  net/core/page_pool.c            | 9 +++++++++
>  4 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index aca3a52d0e22..10dba1f2aa0c 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -105,10 +105,7 @@ static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov)
>  static inline dma_addr_t
>  page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
>  {
> -       struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov);
> -
> -       return owner->base_dma_addr +
> -              ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT);
> +       return ppiov->pp->mp_ops->ppiov_dma_addr(ppiov);
>  }
>
>  static inline unsigned long
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index f54ee759e362..1b9266835ab6 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -125,6 +125,7 @@ struct page_pool_stats {
>  #endif
>
>  struct mem_provider;
> +struct page_pool_iov;
>
>  enum pp_memory_provider_type {
>         __PP_MP_NONE, /* Use system allocator directly */
> @@ -138,6 +139,7 @@ struct pp_memory_provider_ops {
>         void (*scrub)(struct page_pool *pool);
>         struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
>         bool (*release_page)(struct page_pool *pool, struct page *page);
> +       dma_addr_t (*ppiov_dma_addr)(const struct page_pool_iov *ppiov);
>  };
>
>  extern const struct pp_memory_provider_ops dmabuf_devmem_ops;
> diff --git a/io_uring/zc_rx.c b/io_uring/zc_rx.c
> index f7d99d569885..20fb89e6bad7 100644
> --- a/io_uring/zc_rx.c
> +++ b/io_uring/zc_rx.c
> @@ -600,12 +600,20 @@ static void io_pp_zc_destroy(struct page_pool *pp)
>         percpu_ref_put(&ifq->ctx->refs);
>  }
>
> +static dma_addr_t io_pp_zc_ppiov_dma_addr(const struct page_pool_iov *ppiov)
> +{
> +       struct io_zc_rx_buf *buf = io_iov_to_buf((struct page_pool_iov *)ppiov);
> +
> +       return buf->dma;
> +}
> +
>  const struct pp_memory_provider_ops io_uring_pp_zc_ops = {
>         .alloc_pages            = io_pp_zc_alloc_pages,
>         .release_page           = io_pp_zc_release_page,
>         .init                   = io_pp_zc_init,
>         .destroy                = io_pp_zc_destroy,
>         .scrub                  = io_pp_zc_scrub,
> +       .ppiov_dma_addr         = io_pp_zc_ppiov_dma_addr,
>  };
>  EXPORT_SYMBOL(io_uring_pp_zc_ops);
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ebf5ff009d9d..6586631ecc2e 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -1105,10 +1105,19 @@ static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
>         return true;
>  }
>
> +static dma_addr_t mp_dmabuf_devmem_ppiov_dma_addr(const struct page_pool_iov *ppiov)
> +{
> +       struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov);
> +
> +       return owner->base_dma_addr +
> +              ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT);
> +}
> +
>  const struct pp_memory_provider_ops dmabuf_devmem_ops = {
>         .init                   = mp_dmabuf_devmem_init,
>         .destroy                = mp_dmabuf_devmem_destroy,
>         .alloc_pages            = mp_dmabuf_devmem_alloc_pages,
>         .release_page           = mp_dmabuf_devmem_release_page,
> +       .ppiov_dma_addr         = mp_dmabuf_devmem_ppiov_dma_addr,
>  };
>  EXPORT_SYMBOL(dmabuf_devmem_ops);
> --
> 2.39.3
>


-- 
Thanks,
Mina


Thread overview: 50+ messages
2023-12-19 21:03 [RFC PATCH v3 00/20] Zero copy Rx using io_uring David Wei
2023-12-19 21:03 ` [RFC PATCH v3 01/20] net: page_pool: add ppiov mangling helper David Wei
2023-12-19 23:22   ` Mina Almasry
2023-12-19 23:59     ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 02/20] tcp: don't allow non-devmem originated ppiov David Wei
2023-12-19 23:24   ` Mina Almasry
2023-12-20  1:29     ` Pavel Begunkov
2024-01-02 16:11       ` Mina Almasry
2023-12-19 21:03 ` [RFC PATCH v3 03/20] net: page pool: rework ppiov life cycle David Wei
2023-12-19 23:35   ` Mina Almasry
2023-12-20  0:49     ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 04/20] net: enable napi_pp_put_page for ppiov David Wei
2023-12-19 21:03 ` [RFC PATCH v3 05/20] net: page_pool: add ->scrub mem provider callback David Wei
2023-12-19 21:03 ` [RFC PATCH v3 06/20] io_uring: separate header for exported net bits David Wei
2023-12-20 16:01   ` Jens Axboe
2023-12-19 21:03 ` [RFC PATCH v3 07/20] io_uring: add interface queue David Wei
2023-12-20 16:13   ` Jens Axboe
2023-12-20 16:23     ` Pavel Begunkov
2023-12-21  1:44     ` David Wei
2023-12-21 17:57   ` Willem de Bruijn
2023-12-30 16:25     ` Pavel Begunkov
2023-12-31 22:25       ` Willem de Bruijn
2023-12-19 21:03 ` [RFC PATCH v3 08/20] io_uring: add mmap support for shared ifq ringbuffers David Wei
2023-12-20 16:13   ` Jens Axboe
2023-12-19 21:03 ` [RFC PATCH v3 09/20] netdev: add XDP_SETUP_ZC_RX command David Wei
2023-12-19 21:03 ` [RFC PATCH v3 10/20] io_uring: setup ZC for an Rx queue when registering an ifq David Wei
2023-12-20 16:06   ` Jens Axboe
2023-12-20 16:24     ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 11/20] io_uring/zcrx: implement socket registration David Wei
2023-12-19 21:03 ` [RFC PATCH v3 12/20] io_uring: add ZC buf and pool David Wei
2023-12-19 21:03 ` [RFC PATCH v3 13/20] io_uring: implement pp memory provider for zc rx David Wei
2023-12-19 23:44   ` Mina Almasry
2023-12-20  0:39     ` Pavel Begunkov
2023-12-21 19:36   ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 14/20] net: page pool: add io_uring memory provider David Wei
2023-12-19 23:39   ` Mina Almasry
2023-12-20  0:04     ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 15/20] io_uring: add io_recvzc request David Wei
2023-12-20 16:27   ` Jens Axboe
2023-12-20 17:04     ` Pavel Begunkov
2023-12-20 18:09       ` Jens Axboe
2023-12-21 18:59         ` Pavel Begunkov
2023-12-21 21:32           ` Jens Axboe
2023-12-30 21:15             ` Pavel Begunkov
2023-12-19 21:03 ` [RFC PATCH v3 16/20] net: execute custom callback from napi David Wei
2023-12-19 21:03 ` [RFC PATCH v3 17/20] io_uring/zcrx: add copy fallback David Wei
2023-12-19 21:03 ` [RFC PATCH v3 18/20] veth: add support for io_uring zc rx David Wei
2023-12-19 21:03 ` [RFC PATCH v3 19/20] net: page pool: generalise ppiov dma address get David Wei
2023-12-21 19:51   ` Mina Almasry [this message]
2023-12-19 21:03 ` [RFC PATCH v3 20/20] bnxt: enable io_uring zc page pool David Wei
