From: Paolo Abeni <pabeni@redhat.com>
To: Pavel Begunkov <asml.silence@gmail.com>, netdev@vger.kernel.org
Cc: "David S . Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
Jonathan Corbet <corbet@lwn.net>,
Michael Chan <michael.chan@broadcom.com>,
Pavan Chebbi <pavan.chebbi@broadcom.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Jesper Dangaard Brouer <hawk@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
Joshua Washington <joshwash@google.com>,
Harshitha Ramamurthy <hramamurthy@google.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Alexander Duyck <alexanderduyck@fb.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Shuah Khan <shuah@kernel.org>,
Willem de Bruijn <willemb@google.com>,
Ankit Garg <nktgrg@google.com>,
Tim Hostetler <thostet@google.com>,
Alok Tiwari <alok.a.tiwari@oracle.com>,
Ziwei Xiao <ziweixiao@google.com>,
John Fraker <jfraker@google.com>,
Praveen Kaligineedi <pkaligineedi@google.com>,
Mohsin Bashir <mohsin.bashr@gmail.com>, Joe Damato <joe@dama.to>,
Mina Almasry <almasrymina@google.com>,
Dimitri Daskalakis <dimitri.daskalakis1@gmail.com>,
Stanislav Fomichev <sdf@fomichev.me>,
Kuniyuki Iwashima <kuniyu@google.com>,
Samiullah Khawaja <skhawaja@google.com>,
Ahmed Zaki <ahmed.zaki@intel.com>,
Alexander Lobakin <aleksander.lobakin@intel.com>,
David Wei <dw@davidwei.uk>, Yue Haibing <yuehaibing@huawei.com>,
Haiyue Wang <haiyuewa@163.com>, Jens Axboe <axboe@kernel.dk>,
Simon Horman <horms@kernel.org>,
Vishwanath Seshagiri <vishs@fb.com>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kselftest@vger.kernel.org, dtatulea@nvidia.com,
io-uring@vger.kernel.org
Subject: Re: [PATCH net-next v8 6/9] eth: bnxt: adjust the fill level of agg queues with larger buffers
Date: Tue, 13 Jan 2026 11:41:27 +0100 [thread overview]
Message-ID: <0eab9112-eedf-4425-9ce9-be0a59191d8d@redhat.com> (raw)
In-Reply-To: <4db44c27-4654-46f9-be41-93bcf06302b2@redhat.com>
On 1/13/26 11:27 AM, Paolo Abeni wrote:
> On 1/9/26 12:28 PM, Pavel Begunkov wrote:
>> From: Jakub Kicinski <kuba@kernel.org>
>>
>> The driver tries to provision more agg buffers than header buffers
>> since multiple agg segments can reuse the same header. The calculation
>> / heuristic tries to provide enough pages for 65k of data for each header
>> (or 4 frags per header if the result is too big). This calculation is
>> currently global to the adapter. If we increase the buffer sizes 8x
>> we don't want 8x the amount of memory sitting on the rings.
>> Luckily we don't have to fill the rings completely; adjust
>> the fill level dynamically in case a particular queue has buffers
>> larger than the global size.
>>
>> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
>> [pavel: rebase on top of agg_size_fac, assert agg_size_fac]
>> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
>> ---
>> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 28 +++++++++++++++++++----
>> 1 file changed, 24 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> index 8f42885a7c86..137e348d2b9c 100644
>> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> @@ -3816,16 +3816,34 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
>> }
>> }
>>
>> +static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
>> + struct bnxt_rx_ring_info *rxr)
>> +{
>> + /* User may have chosen larger than default rx_page_size,
>> + * we keep the ring sizes uniform and also want uniform amount
>> + * of bytes consumed per ring, so cap how much of the rings we fill.
>> + */
>> + int fill_level = bp->rx_agg_ring_size;
>> +
>> + if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
>> + fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
>
> According to the check in bnxt_alloc_rx_page_pool() it's theoretically
> possible for `rxr->rx_page_size / BNXT_RX_PAGE_SIZE` to be zero. If so,
> the above would crash.
>
> Side note: this looks like something AI review could/should catch. The
> fact it didn't makes me think I'm missing something...
I see the next patch rejects too-small `rx_page_size` values, so
possibly the better option is to drop the confusing check in
bnxt_alloc_rx_page_pool().
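
[Editor's note: a stand-alone sketch, not part of the original mail, of the
fill-level logic discussed above. BNXT_RX_PAGE_SIZE and the names mirror the
patch hunk, but the structs are simplified assumptions, not the driver's real
types. It illustrates why the `>` guard keeps the divisor at least 1.]

```c
#include <assert.h>

#define BNXT_RX_PAGE_SIZE 4096  /* assumed default agg buffer size */

/* Simplified stand-ins for the driver structs (assumption). */
struct bnxt { int rx_agg_ring_size; };
struct bnxt_rx_ring_info { int rx_page_size; };

static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
                                       struct bnxt_rx_ring_info *rxr)
{
    int fill_level = bp->rx_agg_ring_size;

    /* Scale down only when the per-queue page size is strictly larger
     * than the global one; on that path the divisor below is >= 1, so
     * no division by zero is possible. */
    if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
        fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;

    return fill_level;
}
```

With rx_agg_ring_size = 2048, a queue using 8x the default page size fills
only 2048 / 8 = 256 entries; a queue at (or below) the default size takes the
untouched path and fills all 2048, even though a smaller-than-default size
would make `rx_page_size / BNXT_RX_PAGE_SIZE` evaluate to 0.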
/P
Thread overview: 19+ messages
2026-01-09 11:28 [PATCH net-next v8 0/9] Add support for providers with large rx buffer Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 1/9] net: memzero mp params when closing a queue Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 2/9] net: reduce indent of struct netdev_queue_mgmt_ops members Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 3/9] net: add bare bone queue configs Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 4/9] net: pass queue rx page size from memory provider Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 5/9] eth: bnxt: store rx buffer size per queue Pavel Begunkov
2026-01-13 10:19 ` Paolo Abeni
2026-01-13 10:46 ` Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 6/9] eth: bnxt: adjust the fill level of agg queues with larger buffers Pavel Begunkov
2026-01-13 10:27 ` Paolo Abeni
2026-01-13 10:41 ` Paolo Abeni [this message]
2026-01-13 10:42 ` Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 7/9] eth: bnxt: support qcfg provided rx page size Pavel Begunkov
2026-01-14 3:36 ` Jakub Kicinski
2026-01-15 17:10 ` Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 8/9] selftests: iou-zcrx: test large chunk sizes Pavel Begunkov
2026-01-13 10:34 ` Paolo Abeni
2026-01-13 10:48 ` Pavel Begunkov
2026-01-09 11:28 ` [PATCH net-next v8 9/9] io_uring/zcrx: document area chunking parameter Pavel Begunkov