public inbox for [email protected]
* [PATCH 5.10 0/5] iopoll fixes
@ 2020-12-06 22:22 Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Following up Xiaoguang's patch, which is included in the series, patch
up other places where a similar bug can happen. There are holes left
around callers of io_cqring_events(), but that's for later.

The last patch is a bit different and fixes the new personality
grabbing.

Pavel Begunkov (4):
  io_uring: fix racy IOPOLL completions
  io_uring: fix racy IOPOLL flush overflow
  io_uring: fix io_cqring_events()'s noflush
  io_uring: fix mis-setting personality's creds

Xiaoguang Wang (1):
  io_uring: always let io_iopoll_complete() complete polled io.

 fs/io_uring.c | 52 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 13 deletions(-)

-- 
2.24.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-07 16:28   ` Jens Axboe
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: Xiaoguang Wang, stable, Abaci Fuzz, Joseph Qi

From: Xiaoguang Wang <[email protected]>

Abaci Fuzz reported a double-free or invalid-free BUG in io_commit_cqring():
[   95.504842] BUG: KASAN: double-free or invalid-free in io_commit_cqring+0x3ec/0x8e0
[   95.505921]
[   95.506225] CPU: 0 PID: 4037 Comm: io_wqe_worker-0 Tainted: G    B W         5.10.0-rc5+ #1
[   95.507434] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[   95.508248] Call Trace:
[   95.508683]  dump_stack+0x107/0x163
[   95.509323]  ? io_commit_cqring+0x3ec/0x8e0
[   95.509982]  print_address_description.constprop.0+0x3e/0x60
[   95.510814]  ? vprintk_func+0x98/0x140
[   95.511399]  ? io_commit_cqring+0x3ec/0x8e0
[   95.512036]  ? io_commit_cqring+0x3ec/0x8e0
[   95.512733]  kasan_report_invalid_free+0x51/0x80
[   95.513431]  ? io_commit_cqring+0x3ec/0x8e0
[   95.514047]  __kasan_slab_free+0x141/0x160
[   95.514699]  kfree+0xd1/0x390
[   95.515182]  io_commit_cqring+0x3ec/0x8e0
[   95.515799]  __io_req_complete.part.0+0x64/0x90
[   95.516483]  io_wq_submit_work+0x1fa/0x260
[   95.517117]  io_worker_handle_work+0xeac/0x1c00
[   95.517828]  io_wqe_worker+0xc94/0x11a0
[   95.518438]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.519151]  ? __kthread_parkme+0x11d/0x1d0
[   95.519806]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.520512]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.521211]  kthread+0x396/0x470
[   95.521727]  ? _raw_spin_unlock_irq+0x24/0x30
[   95.522380]  ? kthread_mod_delayed_work+0x180/0x180
[   95.523108]  ret_from_fork+0x22/0x30
[   95.523684]
[   95.523985] Allocated by task 4035:
[   95.524543]  kasan_save_stack+0x1b/0x40
[   95.525136]  __kasan_kmalloc.constprop.0+0xc2/0xd0
[   95.525882]  kmem_cache_alloc_trace+0x17b/0x310
[   95.533930]  io_queue_sqe+0x225/0xcb0
[   95.534505]  io_submit_sqes+0x1768/0x25f0
[   95.535164]  __x64_sys_io_uring_enter+0x89e/0xd10
[   95.535900]  do_syscall_64+0x33/0x40
[   95.536465]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   95.537199]
[   95.537505] Freed by task 4035:
[   95.538003]  kasan_save_stack+0x1b/0x40
[   95.538599]  kasan_set_track+0x1c/0x30
[   95.539177]  kasan_set_free_info+0x1b/0x30
[   95.539798]  __kasan_slab_free+0x112/0x160
[   95.540427]  kfree+0xd1/0x390
[   95.540910]  io_commit_cqring+0x3ec/0x8e0
[   95.541516]  io_iopoll_complete+0x914/0x1390
[   95.542150]  io_do_iopoll+0x580/0x700
[   95.542724]  io_iopoll_try_reap_events.part.0+0x108/0x200
[   95.543512]  io_ring_ctx_wait_and_kill+0x118/0x340
[   95.544206]  io_uring_release+0x43/0x50
[   95.544791]  __fput+0x28d/0x940
[   95.545291]  task_work_run+0xea/0x1b0
[   95.545873]  do_exit+0xb6a/0x2c60
[   95.546400]  do_group_exit+0x12a/0x320
[   95.546967]  __x64_sys_exit_group+0x3f/0x50
[   95.547605]  do_syscall_64+0x33/0x40
[   95.548155]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
we complete the req by calling io_req_complete(), which holds completion_lock
while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
doesn't hold completion_lock when calling io_commit_cqring(), so there may be
concurrent access to ctx->defer_list and a double free may happen.

To fix this bug, we always let io_iopoll_complete() complete polled io.

Cc: <[email protected]> # 5.5+
Reported-by: Abaci Fuzz <[email protected]>
Signed-off-by: Xiaoguang Wang <[email protected]>
Reviewed-by: Pavel Begunkov <[email protected]>
Reviewed-by: Joseph Qi <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index a2a7c65a77aa..c895a306f919 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6074,8 +6074,19 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
 	}
 
 	if (ret) {
-		req_set_fail_links(req);
-		io_req_complete(req, ret);
+		/*
+		 * io_iopoll_complete() does not hold completion_lock to complete
+		 * polled io, so here for polled io, just mark it done and still let
+		 * io_iopoll_complete() complete it.
+		 */
+		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
+			struct kiocb *kiocb = &req->rw.kiocb;
+
+			kiocb_done(kiocb, ret, NULL);
+		} else {
+			req_set_fail_links(req);
+			io_req_complete(req, ret);
+		}
 	}
 
 	return io_steal_work(req);
-- 
2.24.0



* [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-07 18:31   ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow Pavel Begunkov
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

IOPOLL allows buffer remove/provide requests, but they don't synchronise
by the rules of IOPOLL, which require holding uring_lock.

Cc: <[email protected]> # 5.7+
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c895a306f919..4fac02ea5f4c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3948,11 +3948,17 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
 	head = idr_find(&ctx->io_buffer_idr, p->bgid);
 	if (head)
 		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
-
-	io_ring_submit_lock(ctx, !force_nonblock);
 	if (ret < 0)
 		req_set_fail_links(req);
-	__io_req_complete(req, ret, 0, cs);
+
+	/* need to hold the lock to complete IOPOLL requests */
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		__io_req_complete(req, ret, 0, cs);
+		io_ring_submit_unlock(ctx, !force_nonblock);
+	} else {
+		io_ring_submit_unlock(ctx, !force_nonblock);
+		__io_req_complete(req, ret, 0, cs);
+	}
 	return 0;
 }
 
@@ -4037,10 +4043,17 @@ static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
 		}
 	}
 out:
-	io_ring_submit_unlock(ctx, !force_nonblock);
 	if (ret < 0)
 		req_set_fail_links(req);
-	__io_req_complete(req, ret, 0, cs);
+
+	/* need to hold the lock to complete IOPOLL requests */
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		__io_req_complete(req, ret, 0, cs);
+		io_ring_submit_unlock(ctx, !force_nonblock);
+	} else {
+		io_ring_submit_unlock(ctx, !force_nonblock);
+		__io_req_complete(req, ret, 0, cs);
+	}
 	return 0;
 }
 
-- 
2.24.0



* [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush Pavel Begunkov
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

It's not safe to call io_cqring_overflow_flush() for IOPOLL mode without
holding uring_lock, because it does synchronisation differently. Make
sure we have it.

As for io_ring_exit_work(), we don't even need the flush there, because
io_ring_ctx_wait_and_kill() already sets the force flag, making all
overflowed requests be dropped.

Cc: <[email protected]> # 5.5+
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4fac02ea5f4c..b1ba9a738315 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8393,8 +8393,6 @@ static void io_ring_exit_work(struct work_struct *work)
 	 * as nobody else will be looking for them.
 	 */
 	do {
-		if (ctx->rings)
-			io_cqring_overflow_flush(ctx, true, NULL, NULL);
 		io_iopoll_try_reap_events(ctx);
 	} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
 	io_ring_ctx_free(ctx);
@@ -8404,6 +8402,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
 	percpu_ref_kill(&ctx->refs);
+	if (ctx->rings)
+		io_cqring_overflow_flush(ctx, true, NULL, NULL);
 	mutex_unlock(&ctx->uring_lock);
 
 	io_kill_timeouts(ctx, NULL);
@@ -8413,8 +8413,6 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 		io_wq_cancel_all(ctx->io_wq);
 
 	/* if we failed setting up the ctx, we might not have any rings */
-	if (ctx->rings)
-		io_cqring_overflow_flush(ctx, true, NULL, NULL);
 	io_iopoll_try_reap_events(ctx);
 	idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
 
@@ -8691,7 +8689,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
 	else
 		io_cancel_defer_files(ctx, task, NULL);
 
+	io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 	io_cqring_overflow_flush(ctx, true, task, files);
+	io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 
 	while (__io_uring_cancel_task_requests(ctx, task, files)) {
 		io_run_task_work();
@@ -8993,8 +8993,10 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	 */
 	ret = 0;
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+		io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 		if (!list_empty_careful(&ctx->cq_overflow_list))
 			io_cqring_overflow_flush(ctx, false, NULL, NULL);
+		io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 		if (flags & IORING_ENTER_SQ_WAKEUP)
 			wake_up(&ctx->sq_data->wait);
 		if (flags & IORING_ENTER_SQ_WAIT)
-- 
2.24.0



* [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
                   ` (2 preceding siblings ...)
  2020-12-06 22:22 ` [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 5/5] io_uring: fix mis-setting personality's creds Pavel Begunkov
  2020-12-07 15:05 ` [PATCH 5.10 0/5] iopoll fixes Jens Axboe
  5 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

Checking !list_empty(&ctx->cq_overflow_list) around noflush in
io_cqring_events() is racy: if the check fails but a request overflows
just after it, io_cqring_overflow_flush() will still be called.

Remove the second check; it shouldn't be a problem for performance,
because there is a cq_check_overflow bit check just above.

Cc: <[email protected]> # 5.5+
Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b1ba9a738315..f707caed9f79 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2246,7 +2246,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
 		 * we wake up the task, and the next invocation will flush the
 		 * entries. We cannot safely to it from here.
 		 */
-		if (noflush && !list_empty(&ctx->cq_overflow_list))
+		if (noflush)
 			return -1U;
 
 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
-- 
2.24.0



* [PATCH 5.10 5/5] io_uring: fix mis-setting personality's creds
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
                   ` (3 preceding siblings ...)
  2020-12-06 22:22 ` [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-07 15:05 ` [PATCH 5.10 0/5] iopoll fixes Jens Axboe
  5 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring

After io_identity_cow() copies a work.identity, it wants to copy creds
to the new, just-allocated id, not the old one. Otherwise it's akin to
req->work.identity->creds = req->work.identity->creds.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f707caed9f79..201e5354b07b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1284,7 +1284,7 @@ static bool io_identity_cow(struct io_kiocb *req)
 	 */
 	io_init_identity(id);
 	if (creds)
-		req->work.identity->creds = creds;
+		id->creds = creds;
 
 	/* add one for this request */
 	refcount_inc(&id->count);
-- 
2.24.0



* Re: [PATCH 5.10 0/5] iopoll fixes
  2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
                   ` (4 preceding siblings ...)
  2020-12-06 22:22 ` [PATCH 5.10 5/5] io_uring: fix mis-setting personality's creds Pavel Begunkov
@ 2020-12-07 15:05 ` Jens Axboe
  2020-12-07 15:24   ` Pavel Begunkov
  5 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2020-12-07 15:05 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 12/6/20 3:22 PM, Pavel Begunkov wrote:
> Following up Xiaoguang's patch, which is included in the series, patch
> up other places where a similar bug can happen. There are holes left
> around callers of io_cqring_events(), but that's for later.
> 
> The last patch is a bit different and fixes the new personality
> grabbing.
> 
> Pavel Begunkov (4):
>   io_uring: fix racy IOPOLL completions
>   io_uring: fix racy IOPOLL flush overflow
>   io_uring: fix io_cqring_events()'s noflush
>   io_uring: fix mis-setting personality's creds
> 
> Xiaoguang Wang (1):
>   io_uring: always let io_iopoll_complete() complete polled io.
> 
>  fs/io_uring.c | 52 ++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 39 insertions(+), 13 deletions(-)

I'm going to apply 5/5 for 5.10, the rest for 5.11. None of these are no
in this series, and we're very late at this point. They are all marked for
stable, so not a huge concern on my front.

-- 
Jens Axboe



* Re: [PATCH 5.10 0/5] iopoll fixes
  2020-12-07 15:05 ` [PATCH 5.10 0/5] iopoll fixes Jens Axboe
@ 2020-12-07 15:24   ` Pavel Begunkov
  2020-12-07 15:28     ` Jens Axboe
  0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-07 15:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

On 07/12/2020 15:05, Jens Axboe wrote:
> On 12/6/20 3:22 PM, Pavel Begunkov wrote:
>> Following up Xiaoguang's patch, which is included in the series, patch
>> up other places where a similar bug can happen. There are holes left
>> around callers of io_cqring_events(), but that's for later.
>>
>> The last patch is a bit different and fixes the new personality
>> grabbing.
>>
>> Pavel Begunkov (4):
>>   io_uring: fix racy IOPOLL completions
>>   io_uring: fix racy IOPOLL flush overflow
>>   io_uring: fix io_cqring_events()'s noflush
>>   io_uring: fix mis-setting personality's creds
>>
>> Xiaoguang Wang (1):
>>   io_uring: always let io_iopoll_complete() complete polled io.
>>
>>  fs/io_uring.c | 52 ++++++++++++++++++++++++++++++++++++++-------------
>>  1 file changed, 39 insertions(+), 13 deletions(-)
> 
> I'm going to apply 5/5 for 5.10, the rest for 5.11. None of these are no

Didn't get what you mean by "None of these are no in this series."

> in this series, and we're very late at this point. They are all marked for
> stable, so not a huge concern on my front.

Makes sense. I hope it applies cleanly to 5.11.

-- 
Pavel Begunkov


* Re: [PATCH 5.10 0/5] iopoll fixes
  2020-12-07 15:24   ` Pavel Begunkov
@ 2020-12-07 15:28     ` Jens Axboe
  0 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2020-12-07 15:28 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 12/7/20 8:24 AM, Pavel Begunkov wrote:
> On 07/12/2020 15:05, Jens Axboe wrote:
>> On 12/6/20 3:22 PM, Pavel Begunkov wrote:
>>> Following up Xiaoguang's patch, which is included in the series, patch
>>> up other places where a similar bug can happen. There are holes left
>>> around callers of io_cqring_events(), but that's for later.
>>>
>>> The last patch is a bit different and fixes the new personality
>>> grabbing.
>>>
>>> Pavel Begunkov (4):
>>>   io_uring: fix racy IOPOLL completions
>>>   io_uring: fix racy IOPOLL flush overflow
>>>   io_uring: fix io_cqring_events()'s noflush
>>>   io_uring: fix mis-setting personality's creds
>>>
>>> Xiaoguang Wang (1):
>>>   io_uring: always let io_iopoll_complete() complete polled io.
>>>
>>>  fs/io_uring.c | 52 ++++++++++++++++++++++++++++++++++++++-------------
>>>  1 file changed, 39 insertions(+), 13 deletions(-)
>>
>> I'm going to apply 5/5 for 5.10, the rest for 5.11. None of these are no
> 
> Didn't get what you mean by "None of these are no in this series."

Yeah that turned out better in my head... Meant that none of these are
_regressions_ in this series, so not necessarily urgent for 5.10.

>> in this series, and we're very late at this point. They are all marked for
>> stable, so not a huge concern on my front.
> 
> Makes sense. I hope it applies cleanly to 5.11.

Mostly, just 3/5 needed a bit of hand holding.

-- 
Jens Axboe



* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
@ 2020-12-07 16:28   ` Jens Axboe
  2020-12-08 19:12     ` Pavel Begunkov
  0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2020-12-07 16:28 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: xiaoguang.wang, io-uring

On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>
> From: Xiaoguang Wang <[email protected]>
>
> Abaci Fuzz reported a double-free or invalid-free BUG in io_commit_cqring():
> [   95.504842] BUG: KASAN: double-free or invalid-free in io_commit_cqring+0x3ec/0x8e0
> [   95.505921]
> [   95.506225] CPU: 0 PID: 4037 Comm: io_wqe_worker-0 Tainted: G    B W         5.10.0-rc5+ #1
> [   95.507434] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
> [   95.508248] Call Trace:
> [   95.508683]  dump_stack+0x107/0x163
> [   95.509323]  ? io_commit_cqring+0x3ec/0x8e0
> [   95.509982]  print_address_description.constprop.0+0x3e/0x60
> [   95.510814]  ? vprintk_func+0x98/0x140
> [   95.511399]  ? io_commit_cqring+0x3ec/0x8e0
> [   95.512036]  ? io_commit_cqring+0x3ec/0x8e0
> [   95.512733]  kasan_report_invalid_free+0x51/0x80
> [   95.513431]  ? io_commit_cqring+0x3ec/0x8e0
> [   95.514047]  __kasan_slab_free+0x141/0x160
> [   95.514699]  kfree+0xd1/0x390
> [   95.515182]  io_commit_cqring+0x3ec/0x8e0
> [   95.515799]  __io_req_complete.part.0+0x64/0x90
> [   95.516483]  io_wq_submit_work+0x1fa/0x260
> [   95.517117]  io_worker_handle_work+0xeac/0x1c00
> [   95.517828]  io_wqe_worker+0xc94/0x11a0
> [   95.518438]  ? io_worker_handle_work+0x1c00/0x1c00
> [   95.519151]  ? __kthread_parkme+0x11d/0x1d0
> [   95.519806]  ? io_worker_handle_work+0x1c00/0x1c00
> [   95.520512]  ? io_worker_handle_work+0x1c00/0x1c00
> [   95.521211]  kthread+0x396/0x470
> [   95.521727]  ? _raw_spin_unlock_irq+0x24/0x30
> [   95.522380]  ? kthread_mod_delayed_work+0x180/0x180
> [   95.523108]  ret_from_fork+0x22/0x30
> [   95.523684]
> [   95.523985] Allocated by task 4035:
> [   95.524543]  kasan_save_stack+0x1b/0x40
> [   95.525136]  __kasan_kmalloc.constprop.0+0xc2/0xd0
> [   95.525882]  kmem_cache_alloc_trace+0x17b/0x310
> [   95.533930]  io_queue_sqe+0x225/0xcb0
> [   95.534505]  io_submit_sqes+0x1768/0x25f0
> [   95.535164]  __x64_sys_io_uring_enter+0x89e/0xd10
> [   95.535900]  do_syscall_64+0x33/0x40
> [   95.536465]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [   95.537199]
> [   95.537505] Freed by task 4035:
> [   95.538003]  kasan_save_stack+0x1b/0x40
> [   95.538599]  kasan_set_track+0x1c/0x30
> [   95.539177]  kasan_set_free_info+0x1b/0x30
> [   95.539798]  __kasan_slab_free+0x112/0x160
> [   95.540427]  kfree+0xd1/0x390
> [   95.540910]  io_commit_cqring+0x3ec/0x8e0
> [   95.541516]  io_iopoll_complete+0x914/0x1390
> [   95.542150]  io_do_iopoll+0x580/0x700
> [   95.542724]  io_iopoll_try_reap_events.part.0+0x108/0x200
> [   95.543512]  io_ring_ctx_wait_and_kill+0x118/0x340
> [   95.544206]  io_uring_release+0x43/0x50
> [   95.544791]  __fput+0x28d/0x940
> [   95.545291]  task_work_run+0xea/0x1b0
> [   95.545873]  do_exit+0xb6a/0x2c60
> [   95.546400]  do_group_exit+0x12a/0x320
> [   95.546967]  __x64_sys_exit_group+0x3f/0x50
> [   95.547605]  do_syscall_64+0x33/0x40
> [   95.548155]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
> we complete the req by calling io_req_complete(), which holds completion_lock
> while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
> doesn't hold completion_lock when calling io_commit_cqring(), so there may be
> concurrent access to ctx->defer_list and a double free may happen.
>
> To fix this bug, we always let io_iopoll_complete() complete polled io.

This patch is causing hangs with iopoll testing, if you end up getting
-EAGAIN on request submission. I've dropped it.

Reproducible with test/iopoll /dev/somedevice

where somedevice has a low queue depth and hits request starvation
during the test.

-- 
Jens Axboe



* Re: [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
@ 2020-12-07 18:31   ` Pavel Begunkov
  0 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-07 18:31 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

On 06/12/2020 22:22, Pavel Begunkov wrote:
> IOPOLL allows buffer remove/provide requests, but they don't synchronise
> by the rules of IOPOLL, which require holding uring_lock.
> 
> Cc: <[email protected]> # 5.7+
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>  fs/io_uring.c | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index c895a306f919..4fac02ea5f4c 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -3948,11 +3948,17 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
>  	head = idr_find(&ctx->io_buffer_idr, p->bgid);
>  	if (head)
>  		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
> -
> -	io_ring_submit_lock(ctx, !force_nonblock);

This should be unlock(), not a second lock(). I didn't notice it at
first, but the patch fixes it. Maybe take it for 5.10?

>  	if (ret < 0)
>  		req_set_fail_links(req);
> -	__io_req_complete(req, ret, 0, cs);
> +
> +	/* need to hold the lock to complete IOPOLL requests */
> +	if (ctx->flags & IORING_SETUP_IOPOLL) {
> +		__io_req_complete(req, ret, 0, cs);
> +		io_ring_submit_unlock(ctx, !force_nonblock);
> +	} else {
> +		io_ring_submit_unlock(ctx, !force_nonblock);
> +		__io_req_complete(req, ret, 0, cs);
> +	}
>  	return 0;
>  }


-- 
Pavel Begunkov


* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-07 16:28   ` Jens Axboe
@ 2020-12-08 19:12     ` Pavel Begunkov
  2020-12-08 19:17       ` Jens Axboe
  0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-08 19:12 UTC (permalink / raw)
  To: Jens Axboe; +Cc: xiaoguang.wang, io-uring

On 07/12/2020 16:28, Jens Axboe wrote:
> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>> From: Xiaoguang Wang <[email protected]>
>>
>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>> we complete the req by calling io_req_complete(), which holds completion_lock
>> while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
>> doesn't hold completion_lock when calling io_commit_cqring(), so there may be
>> concurrent access to ctx->defer_list and a double free may happen.
>>
>> To fix this bug, we always let io_iopoll_complete() complete polled io.
> 
> This patch is causing hangs with iopoll testing, if you end up getting
> -EAGAIN on request submission. I've dropped it.

Without debugging, I fail to understand how it happens, especially since
it shouldn't even get out of the while loop in io_wq_submit_work(). Is
there something obvious I've missed?

> 
> Reproducible with test/iopoll /dev/somedevice
> 
> where somedevice has a low queue depth and hits request starvation
> during the test.
> 

-- 
Pavel Begunkov


* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-08 19:12     ` Pavel Begunkov
@ 2020-12-08 19:17       ` Jens Axboe
  2020-12-08 19:24         ` Pavel Begunkov
  0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2020-12-08 19:17 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: xiaoguang.wang, io-uring

On 12/8/20 12:12 PM, Pavel Begunkov wrote:
> On 07/12/2020 16:28, Jens Axboe wrote:
>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>>> From: Xiaoguang Wang <[email protected]>
>>>
>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>> we complete the req by calling io_req_complete(), which holds completion_lock
>>> while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
>>> doesn't hold completion_lock when calling io_commit_cqring(), so there may be
>>> concurrent access to ctx->defer_list and a double free may happen.
>>>
>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>
>> This patch is causing hangs with iopoll testing, if you end up getting
>> -EAGAIN on request submission. I've dropped it.
> 
> Without debugging, I fail to understand how it happens, especially since
> it shouldn't even get out of the while loop in io_wq_submit_work(). Is
> there something obvious I've missed?

I didn't have time to look into it, and haven't yet, just reporting that
it very reliably fails (and under what conditions).

-- 
Jens Axboe



* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-08 19:17       ` Jens Axboe
@ 2020-12-08 19:24         ` Pavel Begunkov
  2020-12-08 21:10           ` Jens Axboe
  0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-08 19:24 UTC (permalink / raw)
  To: Jens Axboe; +Cc: xiaoguang.wang, io-uring

On 08/12/2020 19:17, Jens Axboe wrote:
> On 12/8/20 12:12 PM, Pavel Begunkov wrote:
>> On 07/12/2020 16:28, Jens Axboe wrote:
>>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>>>> From: Xiaoguang Wang <[email protected]>
>>>>
>>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>>> we complete the req by calling io_req_complete(), which holds completion_lock
>>>> while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
>>>> doesn't hold completion_lock when calling io_commit_cqring(), so there may be
>>>> concurrent access to ctx->defer_list and a double free may happen.
>>>>
>>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>>
>>> This patch is causing hangs with iopoll testing, if you end up getting
>>> -EAGAIN on request submission. I've dropped it.
>>
>> Without debugging, I fail to understand how it happens, especially since
>> it shouldn't even get out of the while loop in io_wq_submit_work(). Is
>> there something obvious I've missed?
> 
> I didn't have time to look into it, and haven't yet, just reporting that
> it very reliably fails (and under what conditions).

Yeah, I get it, asked just in case.
I'll see what's going on if Xiaoguang wouldn't handle it before.

-- 
Pavel Begunkov


* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-08 19:24         ` Pavel Begunkov
@ 2020-12-08 21:10           ` Jens Axboe
  2020-12-09 20:17             ` Pavel Begunkov
  0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2020-12-08 21:10 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: xiaoguang.wang, io-uring

On 12/8/20 12:24 PM, Pavel Begunkov wrote:
> On 08/12/2020 19:17, Jens Axboe wrote:
>> On 12/8/20 12:12 PM, Pavel Begunkov wrote:
>>> On 07/12/2020 16:28, Jens Axboe wrote:
>>>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>>>>> From: Xiaoguang Wang <[email protected]>
>>>>>
>>>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>>>> we complete the req by calling io_req_complete(), which holds completion_lock
>>>>> while calling io_commit_cqring(). But for polled io, io_iopoll_complete()
>>>>> doesn't hold completion_lock when calling io_commit_cqring(), so there may be
>>>>> concurrent access to ctx->defer_list and a double free may happen.
>>>>>
>>>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>>>
>>>> This patch is causing hangs with iopoll testing, if you end up getting
>>>> -EAGAIN on request submission. I've dropped it.
>>>
>>> Without debugging, I fail to understand how it happens, especially since
>>> it shouldn't even get out of the while loop in io_wq_submit_work(). Is
>>> there something obvious I've missed?
>>
>> I didn't have time to look into it, and haven't yet, just reporting that
>> it very reliably fails (and under what conditions).
> 
> Yeah, I get it, asked just in case.
> I'll see what's going on if Xiaoguang wouldn't handle it before.

Should be trivial to reproduce on eg nvme by doing:

echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
echo 2 > /sys/block/nvme0n1/queue/nr_requests

and then run test/iopoll on that device. I'll try and take a look
tomorrow unless someone beats me to it.

-- 
Jens Axboe



* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-08 21:10           ` Jens Axboe
@ 2020-12-09 20:17             ` Pavel Begunkov
  2020-12-10 17:38               ` Pavel Begunkov
  0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-09 20:17 UTC (permalink / raw)
  To: Jens Axboe; +Cc: xiaoguang.wang, io-uring

On 08/12/2020 21:10, Jens Axboe wrote:
> On 12/8/20 12:24 PM, Pavel Begunkov wrote:
>> On 08/12/2020 19:17, Jens Axboe wrote:
>>> On 12/8/20 12:12 PM, Pavel Begunkov wrote:
>>>> On 07/12/2020 16:28, Jens Axboe wrote:
>>>>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>>>>>> From: Xiaoguang Wang <[email protected]>
>>>>>>
>>>>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>>>>> we'll complete the req by calling io_req_complete(), which will hold completion_lock
>>>>>> to call io_commit_cqring(), but for polled io, io_iopoll_complete() won't
>>>>>> hold completion_lock to call io_commit_cqring(), so there may be concurrent
>>>>>> access to ctx->defer_list and a double free may happen.
>>>>>>
>>>>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>>>>
>>>>> This patch is causing hangs with iopoll testing, if you end up getting
>>>>> -EAGAIN on request submission. I've dropped it.
>>>>
>>>> Without debugging I fail to understand how it happens, especially since
>>>> it shouldn't even get out of the while loop in io_wq_submit_work(). Is there
>>>> something obvious I've missed?
>>>
>>> I didn't have time to look into it, and haven't yet, just reporting that
>>> it very reliably fails (and under what conditions).
>>
>> Yeah, I get it, asked just in case.
>> I'll see what's going on if Xiaoguang wouldn't handle it before.
> 
> Should be trivial to reproduce on eg nvme by doing:
> 
> echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
> echo 2 > /sys/block/nvme0n1/queue/nr_requests
> 
> and then run test/iopoll on that device. I'll try and take a look
> tomorrow unless someone beats me to it.

Tried out with iopoll-enabled null_blk. test/iopoll fails with
"test_io_uring_submit_enters failed", but if I remove the iteration limit
from the test, it completes... eventually.

Premise: io_complete_rw_iopoll() gets -EAGAIN but returns 0 to
io_wq_submit_work().
The old version happily completes the IO with that 0, but the patch delays
it to do_iopoll(), which retries, and so all that repeats. And, I believe,
that's the behaviour that io_wq_submit_work()'s -EAGAIN check was trying to
achieve...

The question left is why no one progresses. May even be something in block.
Need to trace further.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
  2020-12-09 20:17             ` Pavel Begunkov
@ 2020-12-10 17:38               ` Pavel Begunkov
  0 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-12-10 17:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: xiaoguang.wang, io-uring

On 09/12/2020 20:17, Pavel Begunkov wrote:
> On 08/12/2020 21:10, Jens Axboe wrote:
>> On 12/8/20 12:24 PM, Pavel Begunkov wrote:
>>> On 08/12/2020 19:17, Jens Axboe wrote:
>>>> On 12/8/20 12:12 PM, Pavel Begunkov wrote:
>>>>> On 07/12/2020 16:28, Jens Axboe wrote:
>>>>>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <[email protected]> wrote:
>>>>>>> From: Xiaoguang Wang <[email protected]>
>>>>>>>
>>>>>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>>>>>> we'll complete the req by calling io_req_complete(), which will hold completion_lock
>>>>>>> to call io_commit_cqring(), but for polled io, io_iopoll_complete() won't
>>>>>>> hold completion_lock to call io_commit_cqring(), so there may be concurrent
>>>>>>> access to ctx->defer_list and a double free may happen.
>>>>>>>
>>>>>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>>>>>
>>>>>> This patch is causing hangs with iopoll testing, if you end up getting
>>>>>> -EAGAIN on request submission. I've dropped it.
>>>>>
>>>>> Without debugging I fail to understand how it happens, especially since
>>>>> it shouldn't even get out of the while loop in io_wq_submit_work(). Is there
>>>>> something obvious I've missed?
>>>>
>>>> I didn't have time to look into it, and haven't yet, just reporting that
>>>> it very reliably fails (and under what conditions).
>>>
>>> Yeah, I get it, asked just in case.
>>> I'll see what's going on if Xiaoguang wouldn't handle it before.
>>
>> Should be trivial to reproduce on eg nvme by doing:
>>
>> echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
>> echo 2 > /sys/block/nvme0n1/queue/nr_requests
>>
>> and then run test/iopoll on that device. I'll try and take a look
>> tomorrow unless someone beats me to it.
> 
> Tried out with iopoll-enabled null_blk. test/iopoll fails with
> "test_io_uring_submit_enters failed", but if I remove the iteration limit
> from the test, it completes... eventually.
> 
> Premise: io_complete_rw_iopoll() gets -EAGAIN but returns 0 to
> io_wq_submit_work().
> The old version happily completes the IO with that 0, but the patch delays
> it to do_iopoll(), which retries, and so all that repeats. And, I believe,
> that's the behaviour that io_wq_submit_work()'s -EAGAIN check was trying to
> achieve...
> 
> The question left is why no one progresses. May even be something in block.
> Need to trace further.

test_io_uring_submit_enters()'s io_uring_submit never goes into the kernel,
so IMHO it's saner not to expect to get any CQE; that's also implied in a
comment above the function. I guess before we were getting them back
because of timers in blk-mq/etc.

So I guess it should have been more like the diff below, which still
doesn't match the comment though.

diff --git a/test/iopoll.c b/test/iopoll.c
index d70ae56..d6f2f3e 100644
--- a/test/iopoll.c
+++ b/test/iopoll.c
@@ -269,13 +269,13 @@ static int test_io_uring_submit_enters(const char *file)
 	/* submit manually to avoid adding IORING_ENTER_GETEVENTS */
 	ret = __sys_io_uring_enter(ring.ring_fd, __io_uring_flush_sq(&ring), 0,
 						0, NULL);
-	if (ret < 0)
+	if (ret != BUFFERS)
 		goto err;
 
 	for (i = 0; i < 500; i++) {
-		ret = io_uring_submit(&ring);
-		if (ret != 0) {
-			fprintf(stderr, "still had %d sqes to submit, this is unexpected", ret);
+		ret = io_uring_wait_cqe(&ring, &cqe);
+		if (ret < 0) {
+			fprintf(stderr, "wait cqe failed %i\n", ret);
 			goto err;
 		}

-- 
Pavel Begunkov

^ permalink raw reply related	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-12-10 17:43 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-06 22:22 [PATCH 5.10 0/5] iopoll fixes Pavel Begunkov
2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
2020-12-07 16:28   ` Jens Axboe
2020-12-08 19:12     ` Pavel Begunkov
2020-12-08 19:17       ` Jens Axboe
2020-12-08 19:24         ` Pavel Begunkov
2020-12-08 21:10           ` Jens Axboe
2020-12-09 20:17             ` Pavel Begunkov
2020-12-10 17:38               ` Pavel Begunkov
2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
2020-12-07 18:31   ` Pavel Begunkov
2020-12-06 22:22 ` [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow Pavel Begunkov
2020-12-06 22:22 ` [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush Pavel Begunkov
2020-12-06 22:22 ` [PATCH 5.10 5/5] io_uring: fix mis-seting personality's creds Pavel Begunkov
2020-12-07 15:05 ` [PATCH 5.10 0/5] iopoll fixes Jens Axboe
2020-12-07 15:24   ` Pavel Begunkov
2020-12-07 15:28     ` Jens Axboe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox