* [PATCH] io_uring/memmap: cast nr_pages to size_t before shifting
From: Jens Axboe @ 2025-08-08 12:42 UTC
To: io-uring; +Cc: Pavel Begunkov
If the allocated size exceeds UINT_MAX, then it's necessary to cast
the mr->nr_pages value to size_t before shifting, to prevent the
result from overflowing 32-bit arithmetic. In practice this isn't much
of a concern, as the required memory size will have been validated
upfront and accounted to the user, and a size greater than 4GB is
needed for the missing cast to become a problem. That greatly exceeds
normal user locked_vm settings, which are generally in the kb to mb
range. However, if root is used, then no accounting is done, and it's
possible to hit this issue.
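
For illustration, here's a minimal userspace sketch of the truncation.
It assumes 4KB pages (ie PAGE_SHIFT == 12) and mirrors the 32-bit type
of mr->nr_pages; the page count used is made up:

	#include <stdio.h>

	int main(void)
	{
		/* 32-bit page count, like mr->nr_pages; just over 4GB
		 * worth of 4KB pages */
		unsigned int nr_pages = 0x100001;

		/* the shift is evaluated in 32-bit arithmetic before the
		 * assignment widens it, so the high bits are lost:
		 * yields 0x1000 on LP64 */
		unsigned long bad = nr_pages << 12;

		/* casting first widens the operand, so the shift happens
		 * in 64-bit arithmetic: yields 0x100001000 */
		size_t good = (size_t) nr_pages << 12;

		printf("bad:  %#lx\ngood: %#zx\n", bad, good);
		return 0;
	}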
Link: https://lore.kernel.org/all/6895b298.050a0220.7f033.0059.GAE@google.com/
Cc: stable@vger.kernel.org
Reported-by: syzbot+23727438116feb13df15@syzkaller.appspotmail.com
Fixes: 087f997870a9 ("io_uring/memmap: implement mmap for regions")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 725dc0bec24c..2e99dffddfc5 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -156,7 +156,7 @@ static int io_region_allocate_pages(struct io_ring_ctx *ctx,
unsigned long mmap_offset)
{
gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN;
- unsigned long size = mr->nr_pages << PAGE_SHIFT;
+ size_t size = (size_t) mr->nr_pages << PAGE_SHIFT;
unsigned long nr_allocated;
struct page **pages;
void *p;
--
Jens Axboe