* Re: io_uring: defer flushing cached reqs
@ 2021-02-24 17:15 Colin Ian King
2021-02-24 17:52 ` Pavel Begunkov
0 siblings, 1 reply; 2+ messages in thread
From: Colin Ian King @ 2021-02-24 17:15 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring
Hi,
Static analysis on linux-next with Coverity has detected a potential
issue with the following commit:
commit e5d1bc0a91f16959aa279aa3ee9fdc246d4bb382
Author: Pavel Begunkov <[email protected]>
Date: Wed Feb 10 00:03:23 2021 +0000
io_uring: defer flushing cached reqs
The analysis is as follows:
1679 static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
1680 {
1681 	struct io_submit_state *state = &ctx->submit_state;
1682
1. Condition 0 /* !!(8 > sizeof (state->reqs) / sizeof (state->reqs[0]) + (int)sizeof (struct io_alloc_req::[unnamed type])) */, taking false branch.
1683 	BUILD_BUG_ON(IO_REQ_ALLOC_BATCH > ARRAY_SIZE(state->reqs));
1684
2. Condition !state->free_reqs, taking true branch.
3. cond_const: Checking state->free_reqs implies that state->free_reqs is 0 on the false branch.
1685 	if (!state->free_reqs) {
1686 		gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
1687 		int ret;
1688
4. Condition io_flush_cached_reqs(ctx), taking true branch.
1689 		if (io_flush_cached_reqs(ctx))
5. Jumping to label got_req.
1690 			goto got_req;
1691
1692 		ret = kmem_cache_alloc_bulk(req_cachep, gfp, IO_REQ_ALLOC_BATCH,
1693 					    state->reqs);
1694
1695 		/*
1696 		 * Bulk alloc is all-or-nothing. If we fail to get a batch,
1697 		 * retry single alloc to be on the safe side.
1698 		 */
1699 		if (unlikely(ret <= 0)) {
1700 			state->reqs[0] = kmem_cache_alloc(req_cachep, gfp);
1701 			if (!state->reqs[0])
1702 				return NULL;
1703 			ret = 1;
1704 		}
1705 		state->free_reqs = ret;
1706 	}
1707 got_req:
6. decr: Decrementing state->free_reqs. The value of state->free_reqs is now 4294967295.
1708 	state->free_reqs--;
Out-of-bounds read (OVERRUN)
7. overrun-local: Overrunning array state->reqs of 32 8-byte elements at element index 4294967295 (byte offset 34359738367) using index state->free_reqs (which evaluates to 4294967295).
1709 	return state->reqs[state->free_reqs];
1710 }
If state->free_reqs is zero and io_flush_cached_reqs(ctx) returns true, then
we end up on line 1708 decrementing the unsigned int state->free_reqs, so
it wraps around to 4294967295, causing an out-of-bounds read on
state->reqs[].
Colin
* Re: io_uring: defer flushing cached reqs
2021-02-24 17:15 io_uring: defer flushing cached reqs Colin Ian King
@ 2021-02-24 17:52 ` Pavel Begunkov
0 siblings, 0 replies; 2+ messages in thread
From: Pavel Begunkov @ 2021-02-24 17:52 UTC (permalink / raw)
To: Colin Ian King; +Cc: Jens Axboe, io-uring
On 24/02/2021 17:15, Colin Ian King wrote:
> Hi,
>
> Static analysis on linux-next with Coverity has detected a potential
> issue with the following commit:
>
> commit e5d1bc0a91f16959aa279aa3ee9fdc246d4bb382
> Author: Pavel Begunkov <[email protected]>
> Date: Wed Feb 10 00:03:23 2021 +0000
>
> io_uring: defer flushing cached reqs
>
>
> [ annotated Coverity analysis snipped ]
>
> If state->free_reqs is zero and io_flush_cached_reqs(ctx) is true, then
It returns true iff it has incremented state->free_reqs, so this should be
a false positive. I'll clean the function up a bit for next; that might
help the tool.
> we end up on line 1708 decrementing the unsigned int state->free_reqs so
> this wraps around to 4294967295 causing an out of bounds read on
> state->reqs[].
>
> Colin
>
--
Pavel Begunkov