public inbox for io-uring@vger.kernel.org
* Relationship between io-uring asynchronous idioms and mmap/LRU paging.
@ 2025-07-11 18:12 Steve
  2025-07-17 14:50 ` Jens Axboe
  0 siblings, 1 reply; 2+ messages in thread
From: Steve @ 2025-07-11 18:12 UTC (permalink / raw)
  To: io-uring

I hope my post is appropriate for this list. Relative to other recent 
posts on this list, my interests are high-level.

I want to develop efficient, scalable, low-latency, asynchronous 
services in user-space. I've dabbled with liburing in the context of an 
experimental service handling network requests and responses.  For the 
purpose of this post, assume that calculating a response to a request 
requires looking up pages in a huge read-only file.  In order to reap 
all the performance benefits of io-uring, I know I should avoid blocking 
calls in my event loop.
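
To make that concrete, here is a minimal sketch of the kind of event 
loop I have in mind. listen_fd and the handle_cqe() helper are 
placeholders of mine; handle_cqe() is assumed to re-arm the accept and 
queue further recv/send SQEs without ever blocking:

#include <liburing.h>

/* handle_cqe() is a hypothetical helper: it re-arms the accept and
 * queues recv/send SQEs for the connection, without blocking. */
void handle_cqe(struct io_uring *ring, struct io_uring_cqe *cqe);

int run_loop(int listen_fd)
{
    struct io_uring ring;
    struct io_uring_cqe *cqe;
    struct io_uring_sqe *sqe;

    if (io_uring_queue_init(256, &ring, 0) < 0)
        return -1;

    /* Arm an accept on the listening socket to get things started. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_accept(sqe, listen_fd, NULL, NULL, 0);
    io_uring_submit(&ring);

    for (;;) {
        if (io_uring_wait_cqe(&ring, &cqe) < 0)
            break;
        handle_cqe(&ring, cqe);       /* must never block */
        io_uring_cqe_seen(&ring, cqe);
        io_uring_submit(&ring);       /* flush any SQEs queued above */
    }

    io_uring_queue_exit(&ring);
    return 0;
}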

If I were to use a multithreaded (cf. asynchronous) paradigm, my 
strategy for looking up pages would have been to mmap 
<https://man7.org/linux/man-pages/man2/mmap.2.html> the huge file and 
rely upon the kernel's LRU page cache. A cache miss against the memory 
mapped file results in a page fault and a blocked thread. That could be 
OK if cache misses were rare events... but, while cache hits are 
expected to be frequent, I can't assume cache misses will be rare.
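
For reference, the mapping itself would be nothing exotic; a minimal 
sketch (read-only shared mapping, minimal error handling, file path as 
a placeholder):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_readonly(const char *path, size_t *len_out)
{
    struct stat st;
    void *p;
    int fd;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }

    /* PROT_READ + MAP_SHARED: pages are served from the shared kernel
     * page cache; first touch of an uncached page faults and blocks. */
    p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);               /* the mapping keeps the file referenced */
    if (p == MAP_FAILED)
        return NULL;

    *len_out = st.st_size;
    return p;
}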

Options I have considered:

   1. Introduce a thread pool, with task-request and task-response 
queues, using tasks to decouple reading requests from writing 
responses... the strategy would be to keep the io-uring event loop 
thread from ever touching the memory-mapped file. Intuitively, this 
seems cumbersome compared with using a 'more asynchronous' idiom that 
avoids depending upon multithreaded concurrency and thread 
synchronisation.

   2. Implement an explicit application-layer page cache. Pages could be 
retrieved, into explicitly allocated memory, asynchronously... using 
io-uring read requests (a rough sketch follows this list). I could 
suspend request/response processing on any cache miss... then resume 
processing when the io-uring completion queue signals that the page has 
been loaded.  A C++20 coroutine, for example, could allow this 
asynchronous suspension and resumption of response calculation. This 
approach seems to undermine resource-use cooperation between processes: 
a single page on disk could end up cached separately by each process 
instance (inefficient), and it would be difficult to manage appropriate 
sizes for the application-layer caches.
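
The rough sketch I have in mind for option 2 looks like the following, 
using plain io-uring reads into application-owned memory. struct 
pending_req and the surrounding cache are hypothetical; the completion 
handler would recover the request via io_uring_cqe_get_data(), install 
the page in the cache, and resume the suspended computation:

#include <liburing.h>
#include <sys/types.h>

#define PAGE_SZ 4096

struct pending_req {
    void  *page_buf;     /* destination slot in the application cache */
    off_t  file_off;     /* page-aligned offset within the big file */
    /* ... whatever is needed to resume the response calculation ... */
};

static int queue_page_read(struct io_uring *ring, int file_fd,
                           struct pending_req *req)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe)
        return -1;
    io_uring_prep_read(sqe, file_fd, req->page_buf, PAGE_SZ, req->file_off);
    io_uring_sqe_set_data(sqe, req);   /* recover this request on completion */
    return 0;
}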

In an ideal world, I would like to fuse the benefits of mmap's 
kernel-managed cache with the advantages of an io-uring asynchronous 
idiom.  I find myself wishing there were kernel-level APIs to:

   * Determine whether a page, at a virtual address, is already cached 
in RAM. [ Perhaps mincore() 
<https://man7.org/linux/man-pages/man2/mincore.2.html> could be adequate? ]
   * Submit an asynchronous io-uring request with comparable (but 
non-blocking) effect to a page fault for the virtual address whose page 
was not in core.
   * Receive notification, on the io-uring completion queue, that a 
requested page has now been cached (the closest approximation I can see 
with existing APIs is sketched after this list).
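
The closest approximation I can see with APIs that already exist is 
sketched below: mincore(2) for the residency test, plus io-uring's 
asynchronous madvise (IORING_OP_MADVISE, kernel 5.6+ if I remember 
correctly) with MADV_WILLNEED to ask for a page to be pulled in off the 
event loop. The caveat is that the madvise completion only means the 
hint was applied (readahead initiated), not that the page is resident, 
so it falls short of my third wish. The 4096-byte page size is an 
assumption; real code would query sysconf(_SC_PAGESIZE).

#include <liburing.h>
#include <sys/mman.h>
#include <stdint.h>

#define PAGE_SZ 4096UL   /* assumption; use sysconf(_SC_PAGESIZE) for real */

/* Returns 1 if the page containing addr is resident, 0 if not, -1 on error. */
static int page_resident(const void *addr)
{
    void *page = (void *)((uintptr_t)addr & ~(PAGE_SZ - 1));
    unsigned char vec;

    if (mincore(page, PAGE_SZ, &vec) < 0)
        return -1;
    return vec & 1;
}

/* Queue an asynchronous MADV_WILLNEED for the page containing addr.
 * Completion means the advice was applied, not that the page is resident. */
static int queue_willneed(struct io_uring *ring, const void *addr)
{
    void *page = (void *)((uintptr_t)addr & ~(PAGE_SZ - 1));
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe)
        return -1;
    io_uring_prep_madvise(sqe, page, PAGE_SZ, MADV_WILLNEED);
    io_uring_sqe_set_data(sqe, (void *)(uintptr_t)addr);  /* identify later */
    return 0;
}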

If such facilities were to exist, I can imagine a process, using 
io-uring asynchronous idioms, that retains the memory-management 
advantages associated with mmap... without introducing a dependence upon 
threads.  I've not found any documentation to suggest that my imagined 
io-uring features exist.  Am I overlooking something? Are there plans to 
implement asynchronous features coupling the kernel page cache with 
io-uring scheduling?  Would io-uring experts consider option 1 a 
sensible, pragmatic choice in a circumstance where kernel-level caching 
of the mapped file seems desirable... or would a different approach be 
more appropriate?

Thanks in advance for any comments.

Steve




* Re: Relationship between io-uring asynchronous idioms and mmap/LRU paging.
  2025-07-11 18:12 Relationship between io-uring asynchronous idioms and mmap/LRU paging Steve
@ 2025-07-17 14:50 ` Jens Axboe
  0 siblings, 0 replies; 2+ messages in thread
From: Jens Axboe @ 2025-07-17 14:50 UTC (permalink / raw)
  To: Steve, io-uring

On 7/11/25 12:12 PM, Steve wrote:
> I hope my post is appropriate for this list. Relative to other recent
> posts on this list, my interests are high-level.

Certainly is.

> I want to develop efficient, scalable, low-latency, asynchronous
> services in user-space. I've dabbled with liburing in the context of
> an experimental service handling network requests and responses.  For
> the purpose of this post, assume that calculating a response to a
> request requires looking up pages in a huge read-only file.  In order
> to reap all the performance benefits of io-uring, I know I should
> avoid blocking calls in my event loop.
> 
> If I were to use a multithreaded (cf. asynchronous) paradigm, my
> strategy for looking up pages would have been to mmap
> <https://man7.org/linux/man-pages/man2/mmap.2.html> the huge file and
> rely upon the kernel's LRU page cache. A cache miss against the memory
> mapped file results in a page fault and a blocked thread. That could
> be OK if cache misses were rare events... but, while cache hits are
> expected to be frequent, I can't assume cache misses will be rare.
> 
> Options I have considered:
> 
>   1. Introduce a thread pool, with task-request and task-response
>   queues, using tasks to decouple reading requests from writing
>   responses... the strategy would be to keep the io-uring event loop
>   thread from ever touching the memory-mapped file. Intuitively, this
>   seems cumbersome compared with using a 'more asynchronous' idiom
>   that avoids depending upon multithreaded concurrency and thread
>   synchronisation.
> 
>   2. Implement an explicit application-layer page cache. Pages could
>   be retrieved, into explicitly allocated memory, asynchronously...
>   using io-uring read requests. I could suspend request/response
>   processing on any cache miss... then resume processing when the
>   io-uring completion queue signals that the page has been loaded.  A
>   C++20 coroutine, for example, could allow this asynchronous
>   suspension and resumption of response calculation. This approach
>   seems to undermine resource-use cooperation between processes: a
>   single page on disk could end up cached separately by each process
>   instance (inefficient), and it would be difficult to manage
>   appropriate sizes for the application-layer caches.
> 
> In an ideal world, I would like to fuse the benefits of mmap's
> kernel-managed cache with the advantages of an io-uring asynchronous
> idiom.  I find myself wishing there were kernel-level APIs to:
> 
>   * Determine whether a page, at a virtual address, is already cached
>   in RAM. [ Perhaps mincore()
>   <https://man7.org/linux/man-pages/man2/mincore.2.html> could be
>   adequate? ]
>   * Submit an asynchronous io-uring request with comparable (but
>   non-blocking) effect to a page fault for the virtual address whose
>   page was not in core.
>   * Receive notification, on the io-uring completion queue, that a
>   requested page has now been cached.
> 
> If such facilities were to exist, I can imagine a process, using
> io-uring asynchronous idioms, that retains the memory-management
> advantages associated with mmap... without introducing a dependence
> upon threads.  I've not found any documentation to suggest that my
> imagined io-uring features exist.  Am I overlooking something? Are
> there plans to implement asynchronous features coupling the kernel
> page cache with io-uring scheduling?  Would io-uring experts consider
> option 1 a sensible, pragmatic choice in a circumstance where
> kernel-level caching of the mapped file seems desirable... or would a
> different approach be more appropriate?

Just a heads-up, then: I'm OOO for a bit, and since it looks like nobody
else has replied to this, I'll take a closer look when I'm back next
week.

-- 
Jens Axboe

