public inbox for io-uring@vger.kernel.org
From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
	andrew+netdev@lunn.ch, horms@kernel.org, dw@davidwei.uk,
	jdamato@fastly.com, asml.silence@gmail.com,
	io-uring@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>,
	shuah@kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH net-next 1/3] selftests: drv-net: iou-zcrx: wait for memory provider cleanup
Date: Fri, 27 Feb 2026 09:13:03 -0800	[thread overview]
Message-ID: <20260227171305.2848240-2-kuba@kernel.org> (raw)
In-Reply-To: <20260227171305.2848240-1-kuba@kernel.org>

io_uring defers zcrx context teardown to the iou_exit workqueue.

  # ps aux | grep iou
  ...    07:58   0:00 [kworker/u19:0-iou_exit]
  ...    07:58   0:00 [kworker/u18:2-iou_exit]

When the test's receiver process exits, bkg() returns but the memory
provider may still be attached to the rx queue. The subsequent defer()
that restores tcp-data-split then fails:

  # Exception while handling defer / cleanup (callback 3 of 3)!
  # Defer Exception| net.ynl.pyynl.lib.ynl.NlError:
      Netlink error: can't disable tcp-data-split while device has
                     memory provider enabled: Invalid argument
  not ok 1 iou-zcrx.test_zcrx.single

Add a helper that polls netdev queue-get until no rx queue reports
the io-uring memory provider attribute. Register it as a defer() so
that it runs just before tcp-data-split is restored, acting as a
barrier.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: shuah@kernel.org
CC: dw@davidwei.uk
CC: jdamato@fastly.com
CC: linux-kselftest@vger.kernel.org
---
 .../selftests/drivers/net/hw/iou-zcrx.py       | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
index c63d6d6450d2..c27c2064701d 100755
--- a/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
+++ b/tools/testing/selftests/drivers/net/hw/iou-zcrx.py
@@ -2,14 +2,27 @@
 # SPDX-License-Identifier: GPL-2.0
 
 import re
+import time
 from os import path
 from lib.py import ksft_run, ksft_exit, KsftSkipEx, ksft_variants, KsftNamedVariant
 from lib.py import NetDrvEpEnv
 from lib.py import bkg, cmd, defer, ethtool, rand_port, wait_port_listen
-from lib.py import EthtoolFamily
+from lib.py import EthtoolFamily, NetdevFamily
 
 SKIP_CODE = 42
 
+
+def mp_clear_wait(cfg):
+    """Wait for io_uring memory providers to clear from all device queues."""
+    deadline = time.time() + 5
+    while time.time() < deadline:
+        queues = cfg.netnl.queue_get({'ifindex': cfg.ifindex}, dump=True)
+        if not any('io-uring' in q for q in queues):
+            return
+        time.sleep(0.1)
+    raise TimeoutError("Timed out waiting for memory provider to clear")
+
+
 def create_rss_ctx(cfg):
     output = ethtool(f"-X {cfg.ifname} context new start {cfg.target} equal 1").stdout
     values = re.search(r'New RSS context is (\d+)', output).group(1)
@@ -46,6 +59,7 @@ SKIP_CODE = 42
                                 'tcp-data-split': 'unknown',
                                 'hds-thresh': hds_thresh,
                                 'rx': rx_rings})
+    defer(mp_clear_wait, cfg)
 
     cfg.target = channels - 1
     ethtool(f"-X {cfg.ifname} equal {cfg.target}")
@@ -73,6 +87,7 @@ SKIP_CODE = 42
                                 'tcp-data-split': 'unknown',
                                 'hds-thresh': hds_thresh,
                                 'rx': rx_rings})
+    defer(mp_clear_wait, cfg)
 
     cfg.target = channels - 1
     ethtool(f"-X {cfg.ifname} equal {cfg.target}")
@@ -159,6 +174,7 @@ SKIP_CODE = 42
         cfg.bin_remote = cfg.remote.deploy(cfg.bin_local)
 
         cfg.ethnl = EthtoolFamily()
+        cfg.netnl = NetdevFamily()
         cfg.port = rand_port()
         ksft_run(globs=globals(), cases=[test_zcrx, test_zcrx_oneshot], args=(cfg, ))
     ksft_exit()
-- 
2.53.0



Thread overview: 4+ messages
2026-02-27 17:13 [PATCH net-next 0/3] selftests: drv-net: iou-zcrx: improve stability and make the large chunk test work Jakub Kicinski
2026-02-27 17:13 ` Jakub Kicinski [this message]
2026-02-27 17:13 ` [PATCH net-next 2/3] selftests: drv-net: iou-zcrx: rework large chunks test to use common setup Jakub Kicinski
2026-02-27 17:13 ` [PATCH net-next 3/3] selftests: drv-net: iou-zcrx: allocate hugepages for large chunks test Jakub Kicinski
