From: Pavel Begunkov <asml.silence@gmail.com>
To: netdev@vger.kernel.org
Cc: "David S . Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Michael Chan <michael.chan@broadcom.com>,
Pavan Chebbi <pavan.chebbi@broadcom.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Stanislav Fomichev <sdf@fomichev.me>,
Pavel Begunkov <asml.silence@gmail.com>,
Jens Axboe <axboe@kernel.dk>, Simon Horman <horms@kernel.org>,
linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
io-uring@vger.kernel.org, dtatulea@nvidia.com
Subject: [PATCH net-next v6 6/8] eth: bnxt: allow providers to set rx buf size
Date: Thu, 27 Nov 2025 20:44:19 +0000 [thread overview]
Message-ID: <9342d7dd5e663d3d44419229d6c9971b67e3f059.1764264798.git.asml.silence@gmail.com> (raw)
In-Reply-To: <cover.1764264798.git.asml.silence@gmail.com>
Implement NDO_QUEUE_RX_BUF_SIZE and take the rx buffer size from the
memory provider's queue parameters.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 34 +++++++++++++++++++++++
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 +
2 files changed, 35 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index fc88566779a4..698ed30b1dcc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -15906,16 +15906,46 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
.get_base_stats = bnxt_get_base_stats,
};
+static ssize_t bnxt_get_rx_buf_size(struct bnxt *bp, int rxq_idx)
+{
+ struct netdev_rx_queue *rxq = __netif_get_rx_queue(bp->dev, rxq_idx);
+ size_t rx_buf_size;
+
+ rx_buf_size = rxq->mp_params.rx_buf_len;
+ if (!rx_buf_size)
+ return BNXT_RX_PAGE_SIZE;
+
+ /* Older chips need the MSS calculation, so rx_buf_len is not
+ * supported, but we don't set queue ops for them and should
+ * never get here.
+ */
+ if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS))
+ return -EINVAL;
+
+ if (!is_power_of_2(rx_buf_size))
+ return -ERANGE;
+
+ if (rx_buf_size < BNXT_RX_PAGE_SIZE ||
+ rx_buf_size > BNXT_MAX_RX_PAGE_SIZE)
+ return -ERANGE;
+
+ return rx_buf_size;
+}
+
static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
{
struct bnxt_rx_ring_info *rxr, *clone;
struct bnxt *bp = netdev_priv(dev);
struct bnxt_ring_struct *ring;
+ ssize_t rx_buf_size;
int rc;
if (!bp->rx_ring)
return -ENETDOWN;
+ rx_buf_size = bnxt_get_rx_buf_size(bp, idx);
+ if (rx_buf_size < 0)
+ return rx_buf_size;
+
rxr = &bp->rx_ring[idx];
clone = qmem;
memcpy(clone, rxr, sizeof(*rxr));
@@ -15927,6 +15957,7 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
clone->rx_sw_agg_prod = 0;
clone->rx_next_cons = 0;
clone->need_head_pool = false;
+ clone->rx_page_size = rx_buf_size;
rc = bnxt_alloc_rx_page_pool(bp, clone, rxr->page_pool->p.nid);
if (rc)
@@ -16053,6 +16084,8 @@ static void bnxt_copy_rx_ring(struct bnxt *bp,
src_ring = &src->rx_agg_ring_struct;
src_rmem = &src_ring->ring_mem;
+ dst->rx_page_size = src->rx_page_size;
+
WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
WARN_ON(dst_rmem->page_size != src_rmem->page_size);
WARN_ON(dst_rmem->flags != src_rmem->flags);
@@ -16205,6 +16238,7 @@ static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
.ndo_queue_mem_free = bnxt_queue_mem_free,
.ndo_queue_start = bnxt_queue_start,
.ndo_queue_stop = bnxt_queue_stop,
+ .supported_params = NDO_QUEUE_RX_BUF_SIZE,
};
static void bnxt_remove_one(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index bfe36ea1e7c5..b59b8e4984f4 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -759,6 +759,7 @@ struct nqe_cn {
#endif
#define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT)
+#define BNXT_MAX_RX_PAGE_SIZE (1 << 15)
#define BNXT_MAX_MTU 9500
--
2.52.0