* [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
@ 2025-12-17 21:04 veygax
2025-12-17 21:22 ` Caleb Sander Mateos
2025-12-18 0:32 ` Ming Lei
0 siblings, 2 replies; 13+ messages in thread
From: veygax @ 2025-12-17 21:04 UTC (permalink / raw)
To: Jens Axboe, io-uring@vger.kernel.org
Cc: Caleb Sander Mateos, linux-kernel@vger.kernel.org, Evan Lambert
From: Evan Lambert <veyga@veygax.dev>
The function io_buffer_register_bvec() calculates the allocation size
for the io_mapped_ubuf based on blk_rq_nr_phys_segments(rq). This
function calculates the number of scatter-gather elements after megine
physically contiguous pages.
However, the subsequent loop uses rq_for_each_bvec() to populate the
array, which iterates over every individual bio_vec in the request,
regardless of physical contiguity.
If a request has multiple bio_vec entries that are physically
contiguous, blk_rq_nr_phys_segments() returns a value smaller than
the total number of bio_vecs. This leads to a slab-out-of-bounds write.
The path is reachable from userspace via the ublk driver when a server
issues a UBLK_IO_REGISTER_IO_BUF command. This requires the
UBLK_F_SUPPORT_ZERO_COPY flag which is protected by CAP_NET_ADMIN.
Fix this by calculating the total number of bio_vecs by iterating
over the request's bios and summing their bi_vcnt.
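For illustration (not part of the fix): with N physically contiguous
single-page bvecs in a single bio, blk_rq_nr_phys_segments(rq) can report
one segment while the copy loop visits every bio_vec:

	/* imu sized from blk_rq_nr_phys_segments(rq) == 1 ... */
	rq_for_each_bvec(bv, rq, rq_iter)
		imu->bvec[nr_bvecs++] = bv;	/* ... but this runs N times */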
KASAN report:
[18:01:50] BUG: KASAN: slab-out-of-bounds in io_buffer_register_bvec+0x813/0xb80
[18:01:50] Write of size 8 at addr ffff88800223b238 by task kunit_try_catch/27
[18:01:50]
[18:01:50] CPU: 0 UID: 0 PID: 27 Comm: kunit_try_catch Tainted: G N 6.19.0-rc1-g346af1a0c65a-dirty #44 PREEMPT(none)
[18:01:50] Tainted: [N]=TEST
[18:01:50] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.1 11/11/2019
[18:01:50] Call Trace:
[18:01:50] <TASK>
[18:01:50] dump_stack_lvl+0x4d/0x70
[18:01:50] print_report+0x151/0x4c0
[18:01:50] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
[18:01:50] ? io_buffer_register_bvec+0x813/0xb80
[18:01:50] kasan_report+0xec/0x120
[18:01:50] ? io_buffer_register_bvec+0x813/0xb80
[18:01:50] io_buffer_register_bvec+0x813/0xb80
[18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
[18:01:50] ? __pfx_io_buffer_register_bvec_overflow_test+0x10/0x10
[18:01:50] ? __pfx_pick_next_task_fair+0x10/0x10
[18:01:50] ? _raw_spin_lock+0x7e/0xd0
[18:01:50] ? finish_task_switch.isra.0+0x19a/0x650
[18:01:50] ? __pfx_read_tsc+0x10/0x10
[18:01:50] ? ktime_get_ts64+0x79/0x240
[18:01:50] kunit_try_run_case+0x19b/0x2c0
[18:01:50] ? __pfx_kunit_try_run_case+0x10/0x10
[18:01:50] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
[18:01:50] kthread+0x323/0x670
[18:01:50] ? __pfx_kthread+0x10/0x10
[18:01:50] ? __pfx__raw_spin_lock_irq+0x10/0x10
[18:01:50] ? __pfx_kthread+0x10/0x10
[18:01:50] ret_from_fork+0x329/0x420
[18:01:50] ? __pfx_ret_from_fork+0x10/0x10
[18:01:50] ? __switch_to+0xa0f/0xd40
[18:01:50] ? __pfx_kthread+0x10/0x10
[18:01:50] ret_from_fork_asm+0x1a/0x30
[18:01:50] </TASK>
[18:01:50]
[18:01:50] Allocated by task 27:
[18:01:50] kasan_save_stack+0x30/0x50
[18:01:50] kasan_save_track+0x14/0x30
[18:01:50] __kasan_kmalloc+0x7f/0x90
[18:01:50] io_cache_alloc_new+0x35/0xc0
[18:01:50] io_buffer_register_bvec+0x196/0xb80
[18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
[18:01:50] kunit_try_run_case+0x19b/0x2c0
[18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
[18:01:50] kthread+0x323/0x670
[18:01:50] ret_from_fork+0x329/0x420
[18:01:50] ret_from_fork_asm+0x1a/0x30
[18:01:50]
[18:01:50] The buggy address belongs to the object at ffff88800223b000
[18:01:50] which belongs to the cache kmalloc-1k of size 1024
[18:01:50] The buggy address is located 0 bytes to the right of
[18:01:50] allocated 568-byte region [ffff88800223b000, ffff88800223b238)
[18:01:50]
[18:01:50] The buggy address belongs to the physical page:
[18:01:50] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2238
[18:01:50] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[18:01:50] flags: 0x4000000000000040(head|zone=1)
[18:01:50] page_type: f5(slab)
[18:01:50] raw: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
[18:01:50] raw: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
[18:01:50] head: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
[18:01:50] head: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
[18:01:50] head: 4000000000000002 ffffea0000088e01 00000000ffffffff 00000000ffffffff
[18:01:50] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[18:01:50] page dumped because: kasan: bad access detected
[18:01:50]
[18:01:50] Memory state around the buggy address:
[18:01:50] ffff88800223b100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[18:01:50] ffff88800223b180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[18:01:50] >ffff88800223b200: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc
[18:01:50] ^
[18:01:50] ffff88800223b280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[18:01:50] ffff88800223b300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[18:01:50] ==================================================================
[18:01:50] Disabling lock debugging due to kernel taint
Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
Signed-off-by: Evan Lambert <veyga@veygax.dev>
---
io_uring/rsrc.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index a63474b331bf..7602b71543e0 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -946,6 +946,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
struct io_mapped_ubuf *imu;
struct io_rsrc_node *node;
struct bio_vec bv;
+ struct bio *bio;
unsigned int nr_bvecs = 0;
int ret = 0;
@@ -967,11 +968,10 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
goto unlock;
}
- /*
- * blk_rq_nr_phys_segments() may overestimate the number of bvecs
- * but avoids needing to iterate over the bvecs
- */
- imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));
+ __rq_for_each_bio(bio, rq)
+ nr_bvecs += bio->bi_vcnt;
+
+ imu = io_alloc_imu(ctx, nr_bvecs);
if (!imu) {
kfree(node);
ret = -ENOMEM;
@@ -988,6 +988,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
imu->is_kbuf = true;
imu->dir = 1 << rq_data_dir(rq);
+ nr_bvecs = 0;
rq_for_each_bvec(bv, rq, rq_iter)
imu->bvec[nr_bvecs++] = bv;
imu->nr_bvecs = nr_bvecs;
--
2.52.0
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-17 21:04 [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec veygax
@ 2025-12-17 21:22 ` Caleb Sander Mateos
2025-12-18 0:32 ` Ming Lei
1 sibling, 0 replies; 13+ messages in thread
From: Caleb Sander Mateos @ 2025-12-17 21:22 UTC (permalink / raw)
To: veygax; +Cc: Jens Axboe, io-uring@vger.kernel.org,
linux-kernel@vger.kernel.org
On Wed, Dec 17, 2025 at 1:04 PM veygax <veyga@veygax.dev> wrote:
>
> From: Evan Lambert <veyga@veygax.dev>
>
> The function io_buffer_register_bvec() calculates the allocation size
> for the io_mapped_ubuf based on blk_rq_nr_phys_segments(rq). This
> function calculates the number of scatter-gather elements after megine
"merging"?
> physically contiguous pages.
>
> However, the subsequent loop uses rq_for_each_bvec() to populate the
> array, which iterates over every individual bio_vec in the request,
> regardless of physical contiguity.
Hmm, I would have thought that physically contiguous bio_vecs would
have been merged by the block layer? But that's definitely beyond my
expertise.
>
> If a request has multiple bio_vec entries that are physically
> contiguous, blk_rq_nr_phys_segments() returns a value smaller than
> the total number of bio_vecs. This leads to a slab-out-of-bounds write.
>
> The path is reachable from userspace via the ublk driver when a server
> issues a UBLK_IO_REGISTER_IO_BUF command. This requires the
> UBLK_F_SUPPORT_ZERO_COPY flag which is protected by CAP_NET_ADMIN.
"CAP_SYS_ADMIN"?
>
> Fix this by calculating the total number of bio_vecs by iterating
> over the request's bios and summing their bi_vcnt.
>
> KASAN report:
>
> [18:01:50] BUG: KASAN: slab-out-of-bounds in io_buffer_register_bvec+0x813/0xb80
> [18:01:50] Write of size 8 at addr ffff88800223b238 by task kunit_try_catch/27
> [18:01:50]
> [18:01:50] CPU: 0 UID: 0 PID: 27 Comm: kunit_try_catch Tainted: G N 6.19.0-rc1-g346af1a0c65a-dirty #44 PREEMPT(none)
> [18:01:50] Tainted: [N]=TEST
> [18:01:50] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.1 11/11/2019
> [18:01:50] Call Trace:
> [18:01:50] <TASK>
> [18:01:50] dump_stack_lvl+0x4d/0x70
> [18:01:50] print_report+0x151/0x4c0
> [18:01:50] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [18:01:50] ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50] kasan_report+0xec/0x120
> [18:01:50] ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50] io_buffer_register_bvec+0x813/0xb80
> [18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50] ? __pfx_io_buffer_register_bvec_overflow_test+0x10/0x10
> [18:01:50] ? __pfx_pick_next_task_fair+0x10/0x10
> [18:01:50] ? _raw_spin_lock+0x7e/0xd0
> [18:01:50] ? finish_task_switch.isra.0+0x19a/0x650
> [18:01:50] ? __pfx_read_tsc+0x10/0x10
> [18:01:50] ? ktime_get_ts64+0x79/0x240
> [18:01:50] kunit_try_run_case+0x19b/0x2c0
This doesn't look like an actual ublk zero-copy buffer registration.
Where does the struct request come from?
> [18:01:50] ? __pfx_kunit_try_run_case+0x10/0x10
> [18:01:50] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
> [18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50] kthread+0x323/0x670
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ? __pfx__raw_spin_lock_irq+0x10/0x10
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ret_from_fork+0x329/0x420
> [18:01:50] ? __pfx_ret_from_fork+0x10/0x10
> [18:01:50] ? __switch_to+0xa0f/0xd40
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ret_from_fork_asm+0x1a/0x30
> [18:01:50] </TASK>
> [18:01:50]
> [18:01:50] Allocated by task 27:
> [18:01:50] kasan_save_stack+0x30/0x50
> [18:01:50] kasan_save_track+0x14/0x30
> [18:01:50] __kasan_kmalloc+0x7f/0x90
> [18:01:50] io_cache_alloc_new+0x35/0xc0
> [18:01:50] io_buffer_register_bvec+0x196/0xb80
> [18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50] kunit_try_run_case+0x19b/0x2c0
> [18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50] kthread+0x323/0x670
> [18:01:50] ret_from_fork+0x329/0x420
> [18:01:50] ret_from_fork_asm+0x1a/0x30
> [18:01:50]
> [18:01:50] The buggy address belongs to the object at ffff88800223b000
> [18:01:50] which belongs to the cache kmalloc-1k of size 1024
> [18:01:50] The buggy address is located 0 bytes to the right of
> [18:01:50] allocated 568-byte region [ffff88800223b000, ffff88800223b238)
> [18:01:50]
> [18:01:50] The buggy address belongs to the physical page:
> [18:01:50] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2238
> [18:01:50] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> [18:01:50] flags: 0x4000000000000040(head|zone=1)
> [18:01:50] page_type: f5(slab)
> [18:01:50] raw: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] raw: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] head: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000002 ffffea0000088e01 00000000ffffffff 00000000ffffffff
> [18:01:50] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
> [18:01:50] page dumped because: kasan: bad access detected
> [18:01:50]
> [18:01:50] Memory state around the buggy address:
> [18:01:50] ffff88800223b100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50] ffff88800223b180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50] >ffff88800223b200: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc
> [18:01:50] ^
> [18:01:50] ffff88800223b280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50] ffff88800223b300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50] ==================================================================
> [18:01:50] Disabling lock debugging due to kernel taint
>
> Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
> Signed-off-by: Evan Lambert <veyga@veygax.dev>
> ---
> io_uring/rsrc.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index a63474b331bf..7602b71543e0 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -946,6 +946,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
> struct io_mapped_ubuf *imu;
> struct io_rsrc_node *node;
> struct bio_vec bv;
> + struct bio *bio;
> unsigned int nr_bvecs = 0;
> int ret = 0;
>
> @@ -967,11 +968,10 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
> goto unlock;
> }
>
> - /*
> - * blk_rq_nr_phys_segments() may overestimate the number of bvecs
> - * but avoids needing to iterate over the bvecs
> - */
> - imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));
> + __rq_for_each_bio(bio, rq)
> + nr_bvecs += bio->bi_vcnt;
> +
> + imu = io_alloc_imu(ctx, nr_bvecs);
> if (!imu) {
> kfree(node);
> ret = -ENOMEM;
> @@ -988,6 +988,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
> imu->is_kbuf = true;
> imu->dir = 1 << rq_data_dir(rq);
>
> + nr_bvecs = 0;
> rq_for_each_bvec(bv, rq, rq_iter)
> imu->bvec[nr_bvecs++] = bv;
Could alternatively check for mergeability with the previous bvec here.
That would avoid needing to allocate extra memory for physically
contiguous bvecs.
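Roughly something like this (untested sketch; the contiguity check via
page_to_phys() is simplified and ignores the segment-size limits that
blk_rq_nr_phys_segments() accounts for):

	nr_bvecs = 0;
	rq_for_each_bvec(bv, rq, rq_iter) {
		struct bio_vec *prev = nr_bvecs ? &imu->bvec[nr_bvecs - 1] : NULL;

		/* extend the previous entry when this bvec is physically contiguous */
		if (prev && page_to_phys(prev->bv_page) + prev->bv_offset + prev->bv_len ==
			    page_to_phys(bv.bv_page) + bv.bv_offset)
			prev->bv_len += bv.bv_len;
		else
			imu->bvec[nr_bvecs++] = bv;
	}
	imu->nr_bvecs = nr_bvecs;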
Best,
Caleb
> imu->nr_bvecs = nr_bvecs;
> --
> 2.52.0
>
>
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-17 21:04 [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec veygax
2025-12-17 21:22 ` Caleb Sander Mateos
@ 2025-12-18 0:32 ` Ming Lei
2025-12-18 0:37 ` veygax
1 sibling, 1 reply; 13+ messages in thread
From: Ming Lei @ 2025-12-18 0:32 UTC (permalink / raw)
To: veygax
Cc: Jens Axboe, io-uring@vger.kernel.org, Caleb Sander Mateos,
linux-kernel@vger.kernel.org
On Wed, Dec 17, 2025 at 09:04:01PM +0000, veygax wrote:
> From: Evan Lambert <veyga@veygax.dev>
>
> The function io_buffer_register_bvec() calculates the allocation size
> for the io_mapped_ubuf based on blk_rq_nr_phys_segments(rq). This
> function calculates the number of scatter-gather elements after megine
> physically contiguous pages.
>
> However, the subsequent loop uses rq_for_each_bvec() to populate the
> array, which iterates over every individual bio_vec in the request,
> regardless of physical contiguity.
>
> If a request has multiple bio_vec entries that are physically
> contiguous, blk_rq_nr_phys_segments() returns a value smaller than
> the total number of bio_vecs. This leads to a slab-out-of-bounds write.
>
> The path is reachable from userspace via the ublk driver when a server
> issues a UBLK_IO_REGISTER_IO_BUF command. This requires the
> UBLK_F_SUPPORT_ZERO_COPY flag which is protected by CAP_NET_ADMIN.
>
> Fix this by calculating the total number of bio_vecs by iterating
> over the request's bios and summing their bi_vcnt.
>
> KASAN report:
>
> [18:01:50] BUG: KASAN: slab-out-of-bounds in io_buffer_register_bvec+0x813/0xb80
> [18:01:50] Write of size 8 at addr ffff88800223b238 by task kunit_try_catch/27
Can you share the test case so that we can understand why the page isn't merged
into the last bvec? Maybe there is a chance to improve the block layer (bio add page
related code).
> [18:01:50]
> [18:01:50] CPU: 0 UID: 0 PID: 27 Comm: kunit_try_catch Tainted: G N 6.19.0-rc1-g346af1a0c65a-dirty #44 PREEMPT(none)
> [18:01:50] Tainted: [N]=TEST
> [18:01:50] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.1 11/11/2019
> [18:01:50] Call Trace:
> [18:01:50] <TASK>
> [18:01:50] dump_stack_lvl+0x4d/0x70
> [18:01:50] print_report+0x151/0x4c0
> [18:01:50] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [18:01:50] ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50] kasan_report+0xec/0x120
> [18:01:50] ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50] io_buffer_register_bvec+0x813/0xb80
> [18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50] ? __pfx_io_buffer_register_bvec_overflow_test+0x10/0x10
> [18:01:50] ? __pfx_pick_next_task_fair+0x10/0x10
> [18:01:50] ? _raw_spin_lock+0x7e/0xd0
> [18:01:50] ? finish_task_switch.isra.0+0x19a/0x650
> [18:01:50] ? __pfx_read_tsc+0x10/0x10
> [18:01:50] ? ktime_get_ts64+0x79/0x240
> [18:01:50] kunit_try_run_case+0x19b/0x2c0
> [18:01:50] ? __pfx_kunit_try_run_case+0x10/0x10
> [18:01:50] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
> [18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50] kthread+0x323/0x670
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ? __pfx__raw_spin_lock_irq+0x10/0x10
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ret_from_fork+0x329/0x420
> [18:01:50] ? __pfx_ret_from_fork+0x10/0x10
> [18:01:50] ? __switch_to+0xa0f/0xd40
> [18:01:50] ? __pfx_kthread+0x10/0x10
> [18:01:50] ret_from_fork_asm+0x1a/0x30
> [18:01:50] </TASK>
> [18:01:50]
> [18:01:50] Allocated by task 27:
> [18:01:50] kasan_save_stack+0x30/0x50
> [18:01:50] kasan_save_track+0x14/0x30
> [18:01:50] __kasan_kmalloc+0x7f/0x90
> [18:01:50] io_cache_alloc_new+0x35/0xc0
> [18:01:50] io_buffer_register_bvec+0x196/0xb80
> [18:01:50] io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50] kunit_try_run_case+0x19b/0x2c0
> [18:01:50] kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50] kthread+0x323/0x670
> [18:01:50] ret_from_fork+0x329/0x420
> [18:01:50] ret_from_fork_asm+0x1a/0x30
> [18:01:50]
> [18:01:50] The buggy address belongs to the object at ffff88800223b000
> [18:01:50] which belongs to the cache kmalloc-1k of size 1024
> [18:01:50] The buggy address is located 0 bytes to the right of
> [18:01:50] allocated 568-byte region [ffff88800223b000, ffff88800223b238)
> [18:01:50]
> [18:01:50] The buggy address belongs to the physical page:
> [18:01:50] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2238
> [18:01:50] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> [18:01:50] flags: 0x4000000000000040(head|zone=1)
> [18:01:50] page_type: f5(slab)
> [18:01:50] raw: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] raw: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] head: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000002 ffffea0000088e01 00000000ffffffff 00000000ffffffff
> [18:01:50] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
> [18:01:50] page dumped because: kasan: bad access detected
> [18:01:50]
> [18:01:50] Memory state around the buggy address:
> [18:01:50] ffff88800223b100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50] ffff88800223b180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50] >ffff88800223b200: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc
> [18:01:50] ^
> [18:01:50] ffff88800223b280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50] ffff88800223b300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50] ==================================================================
> [18:01:50] Disabling lock debugging due to kernel taint
>
> Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
> Signed-off-by: Evan Lambert <veyga@veygax.dev>
> ---
> io_uring/rsrc.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index a63474b331bf..7602b71543e0 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -946,6 +946,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
> struct io_mapped_ubuf *imu;
> struct io_rsrc_node *node;
> struct bio_vec bv;
> + struct bio *bio;
> unsigned int nr_bvecs = 0;
> int ret = 0;
>
> @@ -967,11 +968,10 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
> goto unlock;
> }
>
> - /*
> - * blk_rq_nr_phys_segments() may overestimate the number of bvecs
> - * but avoids needing to iterate over the bvecs
> - */
> - imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));
> + __rq_for_each_bio(bio, rq)
> + nr_bvecs += bio->bi_vcnt;
This way is wrong: bio->bi_vcnt can't be trusted for this purpose; you may
have to use rq_for_each_bvec() to calculate the real nr_bvecs.
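For example (untested sketch, reusing the bv, rq_iter and nr_bvecs that
io_buffer_register_bvec() already uses):

	nr_bvecs = 0;
	rq_for_each_bvec(bv, rq, rq_iter)
		nr_bvecs++;

	imu = io_alloc_imu(ctx, nr_bvecs);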
Thanks,
Ming
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 0:32 ` Ming Lei
@ 2025-12-18 0:37 ` veygax
2025-12-18 0:56 ` Keith Busch
0 siblings, 1 reply; 13+ messages in thread
From: veygax @ 2025-12-18 0:37 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, io-uring@vger.kernel.org, Caleb Sander Mateos,
linux-kernel@vger.kernel.org
On 18/12/2025 00:32, Ming Lei wrote:
> Can you share the test case so that we can understand why the page isn't merged
> into the last bvec? Maybe there is a chance to improve the block layer (bio add page
> related code).
Sure, this is how I triggered it:
#include <kunit/test.h>
#include <linux/io_uring.h>
#include <linux/io_uring_types.h>
#include <linux/io_uring/cmd.h>
#include <linux/blk-mq.h>
#include <linux/bio.h>
#include <linux/bvec.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include "io_uring.h"
#include "rsrc.h"
static void dummy_release(void *priv)
{
}

static void io_buffer_register_bvec_overflow_test(struct kunit *test)
{
	struct io_ring_ctx *ctx;
	struct io_uring_cmd *cmd;
	struct io_kiocb *req;
	struct request *rq;
	struct bio *bio;
	struct page *page;
	int i, ret;
	/*
	 * IO_CACHED_BVECS_SEGS is 32.
	 * We want more than 32 bvecs to trigger overflow if allocation uses 32.
	 */
	int num_bvecs = 40;

	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, ctx);

	/* Initialize caches so io_alloc_imu works and knows the size */
	if (io_rsrc_cache_init(ctx))
		kunit_skip(test, "failed to init rsrc cache");

	/* Initialize buf_table so index check passes */
	ret = io_rsrc_data_alloc(&ctx->buf_table, 1);
	KUNIT_ASSERT_EQ(test, ret, 0);

	req = kunit_kzalloc(test, sizeof(*req), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, req);
	req->ctx = ctx;
	cmd = io_kiocb_to_cmd(req, struct io_uring_cmd);

	rq = kunit_kzalloc(test, sizeof(*rq), GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, rq);

	/* Allocate bio with enough slots */
	bio = bio_kmalloc(num_bvecs, GFP_KERNEL);
	KUNIT_ASSERT_NOT_NULL(test, bio);
	bio_init(bio, NULL, bio_inline_vecs(bio), num_bvecs, REQ_OP_WRITE);
	rq->bio = bio;

	page = alloc_pages(GFP_KERNEL | __GFP_COMP | __GFP_ZERO, 6);
	KUNIT_ASSERT_NOT_NULL(test, page);

	/*
	 * Add pages to bio manually.
	 * We use physically contiguous pages to trick blk_rq_nr_phys_segments
	 * into returning 1 segment.
	 * We use multiple bvec entries to trick the loop in
	 * io_buffer_register_bvec into writing out of bounds.
	 */
	for (i = 0; i < num_bvecs; i++) {
		struct bio_vec *bv = &bio->bi_io_vec[i];

		bv->bv_page = page + i;
		bv->bv_len = PAGE_SIZE;
		bv->bv_offset = 0;
		bio->bi_vcnt++;
		bio->bi_iter.bi_size += PAGE_SIZE;
	}

	/* Trigger */
	ret = io_buffer_register_bvec(cmd, rq, dummy_release, 0, 0);

	/* this should not be reachable */
	__free_pages(page, 6);
	kfree(bio);
	io_rsrc_data_free(ctx, &ctx->buf_table);
	io_rsrc_cache_free(ctx);
}

static struct kunit_case io_uring_rsrc_test_cases[] = {
	KUNIT_CASE(io_buffer_register_bvec_overflow_test),
	{}
};

static struct kunit_suite io_uring_rsrc_test_suite = {
	.name = "io_uring_rsrc_test",
	.test_cases = io_uring_rsrc_test_cases,
};
kunit_test_suite(io_uring_rsrc_test_suite);

MODULE_LICENSE("GPL");
--
- Evan Lambert / veygax
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 0:37 ` veygax
@ 2025-12-18 0:56 ` Keith Busch
2025-12-18 1:13 ` veygax
2025-12-19 5:33 ` Christoph Hellwig
0 siblings, 2 replies; 13+ messages in thread
From: Keith Busch @ 2025-12-18 0:56 UTC (permalink / raw)
To: veygax
Cc: Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On Thu, Dec 18, 2025 at 12:37:47AM +0000, veygax wrote:
> /*
> * Add pages to bio manually.
> * We use physically contiguous pages to trick blk_rq_nr_phys_segments
> * into returning 1 segment.
> * We use multiple bvec entries to trick the loop in io_buffer_register_bvec
> * into writing out of bounds.
> */
> for (i = 0; i < num_bvecs; i++) {
> struct bio_vec *bv = &bio->bi_io_vec[i];
> bv->bv_page = page + i;
> bv->bv_len = PAGE_SIZE;
> bv->bv_offset = 0;
> bio->bi_vcnt++;
> bio->bi_iter.bi_size += PAGE_SIZE;
> }
I believe you're supposed to use the bio_add_page() API rather than open
code the bvec setup.
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 0:56 ` Keith Busch
@ 2025-12-18 1:13 ` veygax
2025-12-18 2:58 ` Ming Lei
2025-12-18 23:00 ` Keith Busch
2025-12-19 5:33 ` Christoph Hellwig
1 sibling, 2 replies; 13+ messages in thread
From: veygax @ 2025-12-18 1:13 UTC (permalink / raw)
To: Keith Busch
Cc: Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On 18/12/2025 00:56, Keith Busch wrote:
> I believe you're supposed to use the bio_add_page() API rather than open
> code the bvec setup.
True, but I wanted fine control to prove my theory
--
- Evan Lambert / veygax
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 1:13 ` veygax
@ 2025-12-18 2:58 ` Ming Lei
2025-12-18 23:00 ` Keith Busch
1 sibling, 0 replies; 13+ messages in thread
From: Ming Lei @ 2025-12-18 2:58 UTC (permalink / raw)
To: veygax
Cc: Keith Busch, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On Thu, Dec 18, 2025 at 9:13 AM veygax <veyga@veygax.dev> wrote:
>
> On 18/12/2025 00:56, Keith Busch wrote:
> > I believe you're supposed to use the bio_add_page() API rather than open
> > code the bvec setup.
>
> True, but I wanted fine control to prove my theory
But there are almost no such cases among in-tree users, :-)
Thanks,
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 1:13 ` veygax
2025-12-18 2:58 ` Ming Lei
@ 2025-12-18 23:00 ` Keith Busch
2025-12-19 3:28 ` Jens Axboe
1 sibling, 1 reply; 13+ messages in thread
From: Keith Busch @ 2025-12-18 23:00 UTC (permalink / raw)
To: veygax
Cc: Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On Thu, Dec 18, 2025 at 01:13:11AM +0000, veygax wrote:
> On 18/12/2025 00:56, Keith Busch wrote:
> > I believe you're supposed to use the bio_add_page() API rather than open
> > code the bvec setup.
>
> True, but I wanted fine control to prove my theory
But doesn't that just prove misusing the interface breaks things? Is
there currently a legit way to get this error without the misuse? Or is
there existing mis-use in the kernel that should be fixed instead?
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 23:00 ` Keith Busch
@ 2025-12-19 3:28 ` Jens Axboe
0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2025-12-19 3:28 UTC (permalink / raw)
To: Keith Busch, veygax
Cc: Ming Lei, io-uring@vger.kernel.org, Caleb Sander Mateos,
linux-kernel@vger.kernel.org
On 12/18/25 4:00 PM, Keith Busch wrote:
> On Thu, Dec 18, 2025 at 01:13:11AM +0000, veygax wrote:
>> On 18/12/2025 00:56, Keith Busch wrote:
>>> I believe you're supposed to use the bio_add_page() API rather than open
>>> code the bvec setup.
>>
>> True, but I wanted fine control to prove my theory
>
> But doesn't that just prove misusing the interface breaks things? Is
> there currently a legit way to get this error without the misuse? Or is
> there existing mis-use in the kernel that should be fixed instead?
This is the big question, and also why I originally rejected the posted
poc as it's not a valid use case.
veygax, please make a real reproducer or detail how this can actually
happen with the exposed APIs.
--
Jens Axboe
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-18 0:56 ` Keith Busch
2025-12-18 1:13 ` veygax
@ 2025-12-19 5:33 ` Christoph Hellwig
2025-12-21 23:29 ` Keith Busch
2025-12-22 1:02 ` veygax
1 sibling, 2 replies; 13+ messages in thread
From: Christoph Hellwig @ 2025-12-19 5:33 UTC (permalink / raw)
To: Keith Busch
Cc: veygax, Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On Thu, Dec 18, 2025 at 08:56:43AM +0800, Keith Busch wrote:
> On Thu, Dec 18, 2025 at 12:37:47AM +0000, veygax wrote:
> > /*
> > * Add pages to bio manually.
> > * We use physically contiguous pages to trick blk_rq_nr_phys_segments
> > * into returning 1 segment.
> > * We use multiple bvec entries to trick the loop in io_buffer_register_bvec
> > * into writing out of bounds.
> > */
> > for (i = 0; i < num_bvecs; i++) {
> > struct bio_vec *bv = &bio->bi_io_vec[i];
> > bv->bv_page = page + i;
> > bv->bv_len = PAGE_SIZE;
> > bv->bv_offset = 0;
> > bio->bi_vcnt++;
> > bio->bi_iter.bi_size += PAGE_SIZE;
> > }
>
> I believe you're supposed to use the bio_add_page() API rather than open
> code the bvec setup.
The above is simply an open coded version of doing repeated
__bio_add_page calls. Which would be rather suboptimal, but perfectly
valid.
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-19 5:33 ` Christoph Hellwig
@ 2025-12-21 23:29 ` Keith Busch
2025-12-22 22:08 ` Christoph Hellwig
2025-12-22 1:02 ` veygax
1 sibling, 1 reply; 13+ messages in thread
From: Keith Busch @ 2025-12-21 23:29 UTC (permalink / raw)
To: Christoph Hellwig
Cc: veygax, Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On Thu, Dec 18, 2025 at 09:33:56PM -0800, Christoph Hellwig wrote:
> On Thu, Dec 18, 2025 at 08:56:43AM +0800, Keith Busch wrote:
> > On Thu, Dec 18, 2025 at 12:37:47AM +0000, veygax wrote:
> > > /*
> > > * Add pages to bio manually.
> > > * We use physically contiguous pages to trick blk_rq_nr_phys_segments
> > > * into returning 1 segment.
> > > * We use multiple bvec entries to trick the loop in io_buffer_register_bvec
> > > * into writing out of bounds.
> > > */
> > > for (i = 0; i < num_bvecs; i++) {
> > > struct bio_vec *bv = &bio->bi_io_vec[i];
> > > bv->bv_page = page + i;
> > > bv->bv_len = PAGE_SIZE;
> > > bv->bv_offset = 0;
> > > bio->bi_vcnt++;
> > > bio->bi_iter.bi_size += PAGE_SIZE;
> > > }
> >
> > I believe you're supposed to use the bio_add_page() API rather than open
> > code the bvec setup.
>
> The above is simply an open coded version of doing repeated
> __bio_add_page calls. Which would be rather suboptimal, but perfectly
> valid.
Yeah, there's nothing stopping someone from using it that way, but a
quick survey of __bio_add_page() users suggests they are special cases that
allocate a single-vector bio, so its existing use is a short-cut that
bio_add_page() will inevitably reach anyway. Did you intend for it to
be called directly for multiple vector uses too? It is suboptimal as you
said, so it still feels like a misuse if someone did that.
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-21 23:29 ` Keith Busch
@ 2025-12-22 22:08 ` Christoph Hellwig
0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2025-12-22 22:08 UTC (permalink / raw)
To: Keith Busch
Cc: Christoph Hellwig, veygax, Ming Lei, Jens Axboe,
io-uring@vger.kernel.org, Caleb Sander Mateos,
linux-kernel@vger.kernel.org
On Mon, Dec 22, 2025 at 07:29:31AM +0800, Keith Busch wrote:
> > The above is simply an open coded version of doing repeated
> > __bio_add_page calls. Which would be rather suboptimal, but perfectly
> > valid.
>
> Yeah, there's nothing stopping someone from using it that way, but a
> quick survey of __bio_add_page() users suggests they are special cases that
> allocate a single-vector bio, so its existing use is a short-cut that
> bio_add_page() will inevitably reach anyway. Did you intend for it to
> be called directly for multiple vector uses too? It is suboptimal as you
> said, so it still feels like a misuse if someone did that.
We can't even force users to use __bio_add_page. Take a look at
drivers/md/bcache/util.c:bch_bio_map() for a real-life example of
something that could create this bvec pattern.
* Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec
2025-12-19 5:33 ` Christoph Hellwig
2025-12-21 23:29 ` Keith Busch
@ 2025-12-22 1:02 ` veygax
1 sibling, 0 replies; 13+ messages in thread
From: veygax @ 2025-12-22 1:02 UTC (permalink / raw)
To: Christoph Hellwig, Keith Busch
Cc: Ming Lei, Jens Axboe, io-uring@vger.kernel.org,
Caleb Sander Mateos, linux-kernel@vger.kernel.org
On 19/12/2025 05:33, Christoph Hellwig wrote:
> The above is simply an open coded version of doing repeated
> __bio_add_page calls. Which would be rather suboptimal, but perfectly
> valid.
I can confirm iterating over num_bvecs in __bio_add_page produces the
same KASAN output.
	for (i = 0; i < num_bvecs; i++)
		__bio_add_page(bio, page + i, PAGE_SIZE, 0);
Also (to all CCs), please disregard this version of the patch. V2 has
been posted with the typos fixed and a more efficient approach.
https://lore.kernel.org/io-uring/20251217214753.218765-3-veyga@veygax.dev/
--
- Evan Lambert / veygax