* [PATCH 1/1] io_uring/zcrx: fix user_ref race between scrub and refill paths
From: Pavel Begunkov @ 2026-02-18 17:36 UTC
To: io-uring; +Cc: asml.silence, axboe, netdev, Kai Aizen
From: Kai Aizen <kai@snailsploit.com>
The io_zcrx_put_niov_uref() function uses a non-atomic
check-then-decrement pattern (atomic_read followed by separate
atomic_dec) to manipulate user_refs. This is serialized against other
callers by rq_lock, but io_zcrx_scrub() modifies the same counter with
atomic_xchg() WITHOUT holding rq_lock.
On SMP systems, the following race exists:
  CPU0 (refill, holds rq_lock)          CPU1 (scrub, no rq_lock)
  ----------------------------          ------------------------
  put_niov_uref:
    atomic_read(uref) -> 1
    // window opens
                                        atomic_xchg(uref, 0) -> 1
                                        return_niov_freelist(niov) [PUSH #1]
    // window closes
    atomic_dec(uref) -> wraps to -1
    returns true
  return_niov(niov)
    return_niov_freelist(niov) [PUSH #2: DOUBLE-FREE]
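To make the window concrete, here is a rough userspace model of the two
paths (a sketch only: it uses C11 <stdatomic.h> rather than the kernel
atomic_t API, and the function names are made up for illustration):

	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_int uref;	/* stands in for the niov user_refs counter */

	/* Refill path: the check and the decrement are two separate atomic ops. */
	static bool put_uref_racy(void)
	{
		if (atomic_load(&uref) == 0)	/* reads 1, so we continue */
			return false;
		/* scrub can atomic_exchange(&uref, 0) in this window */
		atomic_fetch_sub(&uref, 1);	/* 0 - 1: wraps to -1 */
		return true;
	}

	/* Scrub path: takes all user references at once, without rq_lock. */
	static int scrub_take_all(void)
	{
		return atomic_exchange(&uref, 0);	/* returns the old value, 1 */
	}

Both paths then believe they dropped the last user reference, and both go
on to push the same niov onto the freelist.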
The same niov is pushed to the freelist twice, causing free_count to
exceed nr_iovs. Subsequent freelist pushes then perform an out-of-bounds
write (a u32 value) past the kvmalloc'd freelist array into the adjacent
slab object.
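For a rough picture of why that write goes out of bounds (a simplified
model, not the actual io_zcrx structures):

	#include <stdint.h>

	/* Simplified freelist model: a kvmalloc'd array with room for
	 * nr_iovs 32-bit indices. */
	struct freelist_model {
		uint32_t *entries;	/* nr_iovs slots */
		uint32_t  nr_iovs;
		uint32_t  free_count;
	};

	static void freelist_push(struct freelist_model *fl, uint32_t idx)
	{
		/* No bounds check: after a double-free, free_count can exceed
		 * nr_iovs and this store lands past the end of the allocation,
		 * corrupting the adjacent slab object. */
		fl->entries[fl->free_count++] = idx;
	}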
Fix this by replacing the non-atomic read-then-dec in
io_zcrx_put_niov_uref() with an atomic_try_cmpxchg loop that atomically
tests and decrements user_refs. This makes the operation safe against
concurrent atomic_xchg from scrub without requiring scrub to acquire
rq_lock.
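In C11 terms the new helper looks roughly like the following (a sketch
only: atomic_compare_exchange_weak stands in for the kernel's
atomic_try_cmpxchg, and the helper name is illustrative):

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Decrement *uref unless it is already zero, as one atomic step. */
	static bool put_uref_fixed(atomic_int *uref)
	{
		int old = atomic_load(uref);

		do {
			if (old == 0)
				return false;	/* scrub already took the reference */
		} while (!atomic_compare_exchange_weak(uref, &old, old - 1));

		return true;
	}

If scrub zeroes the counter first, the compare-exchange fails, old is
reloaded as 0, and the helper bails out with false instead of wrapping
the counter.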
Fixes: 34a3e60821ab ("io_uring/zcrx: implement zerocopy receive pp memory provider")
Cc: stable@vger.kernel.org
Signed-off-by: Kai Aizen <kai@snailsploit.com>
[pavel: removed a warning and a comment]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
io_uring/zcrx.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 3d377523ff7e..0c9bf540b12b 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -341,10 +341,14 @@ static inline atomic_t *io_get_user_counter(struct net_iov *niov)
 static bool io_zcrx_put_niov_uref(struct net_iov *niov)
 {
 	atomic_t *uref = io_get_user_counter(niov);
+	int old;
+
+	old = atomic_read(uref);
+	do {
+		if (unlikely(old == 0))
+			return false;
+	} while (!atomic_try_cmpxchg(uref, &old, old - 1));
 
-	if (unlikely(!atomic_read(uref)))
-		return false;
-	atomic_dec(uref);
 	return true;
 }
--
2.52.0
* Re: [PATCH 1/1] io_uring/zcrx: fix user_ref race between scrub and refill paths
From: Jens Axboe @ 2026-02-18 17:44 UTC
To: io-uring, Pavel Begunkov; +Cc: netdev, Kai Aizen
On Wed, 18 Feb 2026 17:36:41 +0000, Pavel Begunkov wrote:
> The io_zcrx_put_niov_uref() function uses a non-atomic
> check-then-decrement pattern (atomic_read followed by separate
> atomic_dec) to manipulate user_refs. This is serialized against other
> callers by rq_lock, but io_zcrx_scrub() modifies the same counter with
> atomic_xchg() WITHOUT holding rq_lock.
>
> On SMP systems, the following race exists:
>
> [...]
Applied, thanks!
[1/1] io_uring/zcrx: fix user_ref race between scrub and refill paths
commit: 003049b1c4fb8aabb93febb7d1e49004f6ad653b
Best regards,
--
Jens Axboe