From: Jens Axboe <axboe@kernel.dk>
To: Pavel Begunkov <asml.silence@gmail.com>,
Yuhao Jiang <danisjiang@gmail.com>
Cc: io-uring@vger.kernel.org, linux-kernel@vger.kernel.org,
stable@vger.kernel.org
Subject: Re: [PATCH v2] io_uring/rsrc: fix RLIMIT_MEMLOCK bypass by removing cross-buffer accounting
Date: Fri, 23 Jan 2026 08:04:49 -0700
Message-ID: <654fe339-5a2b-4c38-9d2d-28cfc306b307@kernel.dk>
In-Reply-To: <fc8664bb-7769-48a2-b470-71fb81828e26@kernel.dk>
On 1/23/26 7:50 AM, Jens Axboe wrote:
> On 1/23/26 7:26 AM, Pavel Begunkov wrote:
>> On 1/22/26 21:51, Pavel Begunkov wrote:
>> ...
>>>>>> I already briefly touched on that earlier, for sure not going to be of
>>>>>> any practical concern.
>>>>>
>>>>> A modest 16 GB can give 1M entries. Assuming 50-100ns per entry for the
>>>>> xarray business, that's 50-100ms. It's all serialised, so multiply by
>>>>> the number of CPUs/threads, e.g. 10-100, and that's 0.5-10s. Account for
>>>>> sky-high spinlock contention and it jumps again, and there can be more
>>>>> memory / CPUs / NUMA nodes. I'm not saying it's worse than the
>>>>> current O(n^2); I have a test program that borderline hangs the
>>>>> system.
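The arithmetic behind that estimate can be sanity-checked with a quick sketch; the 16 KB page size, the 50-100ns per-entry cost, and the 10-100 thread counts are all taken from the paragraph above, not measured here:

```python
# Back-of-envelope from the figures above: 16 GB of 16 KB THP pages,
# 50-100 ns of xarray work per entry, serialised across up to 100 threads.
GiB = 1 << 30
entries = 16 * GiB // (16 << 10)      # 1,048,576 entries, i.e. ~1M
per_reg_lo = entries * 50e-9          # ~0.05 s for one registration pass
per_reg_hi = entries * 100e-9         # ~0.10 s
serialised_worst = per_reg_hi * 100   # ~10.5 s if 100 threads serialise
print(entries, per_reg_lo, per_reg_hi, serialised_worst)
```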
>>>>
>>>> It's definitely not worse than the existing system, which is why I don't
>>>> think it's a big deal. Nobody has ever complained about time to register
>>>> buffers. It's inherently a slow path, and quite slow at that depending
>>>> on the use case. Out of curiosity, I ran some silly testing on
>>>> registering 16GB of memory, with 1..32 threads. Each will do 16GB, so
>>>> 512GB registered in total for the 32 case. Before is the current kernel,
>>>> after is with per-user xarray accounting:
>>>>
>>>> before
>>>>
>>>> nthreads 1: 646 msec
>>>> nthreads 2: 888 msec
>>>> nthreads 4: 864 msec
>>>> nthreads 8: 1450 msec
>>>> nthreads 16: 2890 msec
>>>> nthreads 32: 4410 msec
>>>>
>>>> after
>>>>
>>>> nthreads 1: 650 msec
>>>> nthreads 2: 888 msec
>>>> nthreads 4: 892 msec
>>>> nthreads 8: 1270 msec
>>>> nthreads 16: 2430 msec
>>>> nthreads 32: 4160 msec
>>>>
>>>> This includes registering buffers, cloning all of them to another
>>>> ring, and unregistering, and nowhere is locking scalability an
>>>> issue for the xarray manipulation. The box has 32 nodes and 512 CPUs. So
>>>> no, I strongly believe this isn't an issue.
>>>>
>>>> IOW, accurate accounting is cheaper than the stuff we have now. None of
>>>> them are super cheap. Does it matter? I really don't think so, or people
>>>> would've complained already. The only complaint I got on these kinds of
>>>> things was for cloning, which did get fixed up some releases ago.
>>>
>>> You need compound pages
>>>
>>> echo always > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
>>>
>>> And use update() instead of register(), as accounting dedup for
>>> registration is broken/disabled. For the current kernel:
>>>
>>> Single threaded:
>>> 1x1G: 7.5s
>>> 2x1G: 45s
>>> 4x1G: 190s
>>>
>>> 16x should be ~3000s; not going to run it. It's uninterruptible with no
>>> cond_resched, so spawn NR_CPUS threads and the system becomes completely
>>> unresponsive (I guess it depends on the preemption mode).
>> The program is below for reference, but it's trivial. The THP setting
>> is done inside for convenience. There are ways to make the runtime
>> even worse, but that should be enough.
>
> Thanks for sending that. Ran it on the same box, on current -git and
> with user_struct xarray accounting. Modified it so that 2nd arg is
> number of threads, for easy running:
Should've tried 32x32 as well, that ends up going deep into "this sucks"
territory:
git

good luck

git + user_struct

axboe@r7625 ~> time ./ppage 32 32
register 32 GB, num threads 32

________________________________________________________
Executed in   16.34 secs      fish           external
   usr time    0.54 secs    497.00 micros    0.54 secs
   sys time  451.94 secs     55.00 micros  451.94 secs
--
Jens Axboe
Thread overview: 22+ messages
2026-01-19 7:10 [PATCH v2] io_uring/rsrc: fix RLIMIT_MEMLOCK bypass by removing cross-buffer accounting Yuhao Jiang
2026-01-19 17:03 ` Jens Axboe
2026-01-19 23:34 ` Yuhao Jiang
2026-01-19 23:40 ` Jens Axboe
2026-01-20 7:05 ` Yuhao Jiang
2026-01-20 12:04 ` Jens Axboe
2026-01-20 12:05 ` Pavel Begunkov
2026-01-20 17:03 ` Jens Axboe
2026-01-20 21:45 ` Pavel Begunkov
2026-01-21 14:58 ` Jens Axboe
2026-01-22 11:43 ` Pavel Begunkov
2026-01-22 17:47 ` Jens Axboe
2026-01-22 21:51 ` Pavel Begunkov
2026-01-23 14:26 ` Pavel Begunkov
2026-01-23 14:50 ` Jens Axboe
2026-01-23 15:04 ` Jens Axboe [this message]
2026-01-23 16:52 ` Jens Axboe
2026-01-24 11:04 ` Pavel Begunkov
2026-01-24 15:14 ` Jens Axboe
2026-01-24 15:55 ` Jens Axboe
2026-01-24 16:30 ` Pavel Begunkov
2026-01-24 18:44 ` Jens Axboe