* [PATCH] io_uring: execute task_work_run() before dropping mm
From: Xiaoguang Wang @ 2020-06-06 15:12 UTC
To: io-uring; +Cc: axboe, asml.silence, joseph.qi, Xiaoguang Wang
While testing io_uring in our internal kernel (note: it is not the
upstream kernel), we saw the below panic:
[ 872.498723] x29: ffff00002d553cf0 x28: 0000000000000000
[ 872.508973] x27: ffff807ef691a0e0 x26: 0000000000000000
[ 872.519116] x25: 0000000000000000 x24: ffff0000090a7980
[ 872.529184] x23: ffff000009272060 x22: 0000000100022b11
[ 872.539144] x21: 0000000046aa5668 x20: ffff80bee8562b18
[ 872.549000] x19: ffff80bee8562080 x18: 0000000000000000
[ 872.558876] x17: 0000000000000000 x16: 0000000000000000
[ 872.568976] x15: 0000000000000000 x14: 0000000000000000
[ 872.578762] x13: 0000000000000000 x12: 0000000000000000
[ 872.588474] x11: 0000000000000000 x10: 0000000000000c40
[ 872.598324] x9 : ffff000008100c00 x8 : 000000007ffff000
[ 872.608014] x7 : ffff80bee8562080 x6 : ffff80beea862d30
[ 872.617709] x5 : 0000000000000000 x4 : ffff80beea862d48
[ 872.627399] x3 : ffff80bee8562b18 x2 : 0000000000000000
[ 872.637044] x1 : ffff0000090a7000 x0 : 0000000000208040
[ 872.646575] Call trace:
[ 872.653139] task_numa_work+0x4c/0x310
[ 872.660916] task_work_run+0xb0/0xe0
[ 872.668400] io_sq_thread+0x164/0x388
[ 872.675829] kthread+0x108/0x138
The reason is that once io_sq_thread has a valid mm, the scheduler may
call task_tick_numa(), which queues a task_numa_work() callback that
accesses mm. If mm has already been dropped by the time task_work_run()
executes that callback, the above panic happens.

To fix this bug, only call task_work_run() before dropping mm.
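For illustration, the pre-patch ordering looked roughly like this (a
simplified sketch of the old io_sq_thread() flow, not the literal code):

	/* Simplified sketch of the pre-patch io_sq_thread() flow. */
	if (!to_submit || ret == -EBUSY) {
		if (cur_mm) {
			unuse_mm(cur_mm);	/* mm is dropped here ... */
			mmput(cur_mm);
			cur_mm = NULL;
		}

		/* ... reap events, maybe prepare to wait ... */

		if (current->task_works)
			task_work_run();	/* ... but a queued
						 * task_numa_work() runs here
						 * and touches the mm that
						 * was just dropped -> panic */
	}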
Signed-off-by: Xiaoguang Wang <[email protected]>
---
fs/io_uring.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6391a00ff8b7..32381984b2a6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6134,6 +6134,13 @@ static int io_sq_thread(void *data)
* to enter the kernel to reap and flush events.
*/
if (!to_submit || ret == -EBUSY) {
+ /*
+ * The current task context may already have a valid mm,
+ * which means work items that access mm may have been
+ * queued, so we must run them before dropping mm.
+ */
+ if (current->task_works)
+ task_work_run();
/*
* Drop cur_mm before scheduling, we can't hold it for
* long periods (or over schedule()). Do this before
@@ -6152,8 +6159,6 @@ static int io_sq_thread(void *data)
if (!list_empty(&ctx->poll_list) ||
(!time_after(jiffies, timeout) && ret != -EBUSY &&
!percpu_ref_is_dying(&ctx->refs))) {
- if (current->task_works)
- task_work_run();
cond_resched();
continue;
}
@@ -6185,11 +6190,7 @@ static int io_sq_thread(void *data)
finish_wait(&ctx->sqo_wait, &wait);
break;
}
- if (current->task_works) {
- task_work_run();
- finish_wait(&ctx->sqo_wait, &wait);
- continue;
- }
+
if (signal_pending(current))
flush_signals(current);
schedule();
--
2.17.2
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Pavel Begunkov @ 2020-06-06 15:55 UTC
To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi
On 06/06/2020 18:12, Xiaoguang Wang wrote:
> While testing io_uring in our internal kernel (note: it is not the
> upstream kernel), we saw the below panic:
> [...]
> [ 872.646575] Call trace:
> [ 872.653139] task_numa_work+0x4c/0x310
> [ 872.660916] task_work_run+0xb0/0xe0
> [ 872.668400] io_sq_thread+0x164/0x388
> [ 872.675829] kthread+0x108/0x138
>
> The reason is that once io_sq_thread has a valid mm, the scheduler may
> call task_tick_numa(), which queues a task_numa_work() callback that
> accesses mm. If mm has already been dropped by the time task_work_run()
> executes that callback, the above panic happens.
>
> To fix this bug, only call task_work_run() before dropping mm.
So, the problem is that poll/async paths re-issue requests with
__io_queue_sqe(), which doesn't care about current->mm, and which
can be NULL for io_sq_thread(). Right?
>
> Signed-off-by: Xiaoguang Wang <[email protected]>
> ---
> fs/io_uring.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 6391a00ff8b7..32381984b2a6 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6134,6 +6134,13 @@ static int io_sq_thread(void *data)
> * to enter the kernel to reap and flush events.
> */
> if (!to_submit || ret == -EBUSY) {
> + /*
> + * The current task context may already have a valid mm,
> + * which means work items that access mm may have been
> + * queued, so we must run them before dropping mm.
> + */
> + if (current->task_works)
> + task_work_run();
Even though you're not dropping mm, the thread might not have it in the
first place. See how it's done in io_init_req(). How about setting mm
either lazily in io_poll_task_func()/io_async_task_func(), or before
task_work_run() in io_sq_thread()?
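A rough sketch of the second option, using a hypothetical helper
(io_sq_thread_acquire_mm() is a made-up name here, mirroring the mm
setup io_sq_thread already does for submission):

	/* Hypothetical helper: make sure the sqpoll kthread has the
	 * ring's mm before running task works that may touch it.
	 */
	static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)
	{
		if (!current->mm) {
			if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
				return -EFAULT;
			use_mm(ctx->sqo_mm);
		}
		return 0;
	}

	/* ... then in io_sq_thread(): */
	if (current->task_works) {
		io_sq_thread_acquire_mm(ctx);
		task_work_run();
	}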
> /*
> * Drop cur_mm before scheduling, we can't hold it for
> * long periods (or over schedule()). Do this before
> @@ -6152,8 +6159,6 @@ static int io_sq_thread(void *data)
> if (!list_empty(&ctx->poll_list) ||
> (!time_after(jiffies, timeout) && ret != -EBUSY &&
> !percpu_ref_is_dying(&ctx->refs))) {
> - if (current->task_works)
> - task_work_run();
> cond_resched();
> continue;
> }
> @@ -6185,11 +6190,7 @@ static int io_sq_thread(void *data)
> finish_wait(&ctx->sqo_wait, &wait);
> break;
> }
> - if (current->task_works) {
> - task_work_run();
> - finish_wait(&ctx->sqo_wait, &wait);
> - continue;
> - }
> +
> if (signal_pending(current))
> flush_signals(current);
> schedule();
>
--
Pavel Begunkov
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Pavel Begunkov @ 2020-06-06 16:39 UTC
To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi
On 06/06/2020 18:55, Pavel Begunkov wrote:
> On 06/06/2020 18:12, Xiaoguang Wang wrote:
>> While testing io_uring in our internal kernel (note: it is not the
>> upstream kernel), we saw the below panic:
>> [...]
>> [ 872.646575] Call trace:
>> [ 872.653139] task_numa_work+0x4c/0x310
>> [ 872.660916] task_work_run+0xb0/0xe0
>> [ 872.668400] io_sq_thread+0x164/0x388
>> [ 872.675829] kthread+0x108/0x138
>>
>> The reason is that once io_sq_thread has a valid mm, the scheduler may
>> call task_tick_numa(), which queues a task_numa_work() callback that
>> accesses mm. If mm has already been dropped by the time task_work_run()
>> executes that callback, the above panic happens.
>>
>> To fix this bug, only call task_work_run() before dropping mm.
>
> So, the problem is that poll/async paths re-issue requests with
> __io_queue_sqe(), which doesn't care about current->mm, and which
> can be NULL for io_sq_thread(). Right?
>
>>
>> Signed-off-by: Xiaoguang Wang <[email protected]>
>> ---
>> fs/io_uring.c | 15 ++++++++-------
>> 1 file changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 6391a00ff8b7..32381984b2a6 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -6134,6 +6134,13 @@ static int io_sq_thread(void *data)
>> * to enter the kernel to reap and flush events.
>> */
>> if (!to_submit || ret == -EBUSY) {
>> + /*
>> + * The current task context may already have a valid mm,
>> + * which means work items that access mm may have been
>> + * queued, so we must run them before dropping mm.
>> + */
>> + if (current->task_works)
>> + task_work_run();
>
> Even though you're not dropping mm, the thread might not have it in the
> first place. See how it's done in io_init_req(). How about setting mm
> either lazily in io_poll_task_func()/io_async_task_func(), or before
> task_work_run() in io_sq_thread()?
Thinking about use_mm(), it's more about setting up the environment
before execution than about request initialisation. Another way would be
to move use_mm() from io_init_req() into __io_queue_sqe(), more clearly
separating responsibilities.

BTW, it may need an extra io_sq_thread_drop_mm() either way.
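For illustration, that alternative could look roughly like this (a
sketch only; the req_set_fail_links() error path is a guess at how a
failed mm grab would be reported, not the actual implementation):

	/* Sketch: acquire mm at execution time rather than at request
	 * init, so poll/async re-issue via __io_queue_sqe() gets it too.
	 */
	static void __io_queue_sqe(struct io_kiocb *req,
				   const struct io_uring_sqe *sqe)
	{
		struct io_ring_ctx *ctx = req->ctx;

		if (io_op_defs[req->opcode].needs_mm && !current->mm) {
			if (unlikely(!mmget_not_zero(ctx->sqo_mm))) {
				/* hypothetical error path */
				req_set_fail_links(req);
				return;
			}
			use_mm(ctx->sqo_mm);
		}
		/* ... existing issue logic ... */
	}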
>
>> /*
>> * Drop cur_mm before scheduling, we can't hold it for
>> * long periods (or over schedule()). Do this before
>> @@ -6152,8 +6159,6 @@ static int io_sq_thread(void *data)
>> if (!list_empty(&ctx->poll_list) ||
>> (!time_after(jiffies, timeout) && ret != -EBUSY &&
>> !percpu_ref_is_dying(&ctx->refs))) {
>> - if (current->task_works)
>> - task_work_run();
>> cond_resched();
>> continue;
>> }
>> @@ -6185,11 +6190,7 @@ static int io_sq_thread(void *data)
>> finish_wait(&ctx->sqo_wait, &wait);
>> break;
>> }
>> - if (current->task_works) {
>> - task_work_run();
>> - finish_wait(&ctx->sqo_wait, &wait);
>> - continue;
>> - }
>> +
>> if (signal_pending(current))
>> flush_signals(current);
>> schedule();
>>
>
--
Pavel Begunkov
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Jens Axboe @ 2020-06-06 18:50 UTC
To: Xiaoguang Wang, io-uring; +Cc: asml.silence, joseph.qi
On 6/6/20 9:12 AM, Xiaoguang Wang wrote:
> While testing io_uring in our internal kernel (note: it is not the
> upstream kernel), we saw the below panic:
> [...]
> [ 872.646575] Call trace:
> [ 872.653139] task_numa_work+0x4c/0x310
> [ 872.660916] task_work_run+0xb0/0xe0
> [ 872.668400] io_sq_thread+0x164/0x388
> [ 872.675829] kthread+0x108/0x138
>
> The reason is that once io_sq_thread has a valid mm, the scheduler may
> call task_tick_numa(), which queues a task_numa_work() callback that
> accesses mm. If mm has already been dropped by the time task_work_run()
> executes that callback, the above panic happens.
>
> To fix this bug, only call task_work_run() before dropping mm.
That's a bug outside of io_uring, you'll want to backport this patch
from 5.7:
commit 18f855e574d9799a0e7489f8ae6fd8447d0dd74a
Author: Jens Axboe <[email protected]>
Date: Tue May 26 09:38:31 2020 -0600
sched/fair: Don't NUMA balance for kthreads
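For reference, the gist of that commit, as I recall it (do check the
actual commit before backporting), is to skip NUMA balancing for kernel
threads entirely in task_tick_numa(), since a kthread's mm is only
borrowed via use_mm() and can go away at any time:

-	if (!curr->mm || (curr->flags & PF_EXITING) || work->next != work)
+	if ((curr->flags & (PF_EXITING | PF_KTHREAD)) || work->next != work)
 		return;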
--
Jens Axboe
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Xiaoguang Wang @ 2020-06-07 11:41 UTC
To: Jens Axboe, io-uring; +Cc: asml.silence, joseph.qi
hi,
> On 6/6/20 9:12 AM, Xiaoguang Wang wrote:
>> While testing io_uring in our internal kernel (note: it is not the
>> upstream kernel), we saw the below panic:
>> [...]
>> [ 872.646575] Call trace:
>> [ 872.653139] task_numa_work+0x4c/0x310
>> [ 872.660916] task_work_run+0xb0/0xe0
>> [ 872.668400] io_sq_thread+0x164/0x388
>> [ 872.675829] kthread+0x108/0x138
>>
>> The reason is that once io_sq_thread has a valid mm, the scheduler may
>> call task_tick_numa(), which queues a task_numa_work() callback that
>> accesses mm. If mm has already been dropped by the time task_work_run()
>> executes that callback, the above panic happens.
>>
>> To fix this bug, only call task_work_run() before dropping mm.
>
> That's a bug outside of io_uring, you'll want to backport this patch
> from 5.7:
>
> commit 18f855e574d9799a0e7489f8ae6fd8447d0dd74a
> Author: Jens Axboe <[email protected]>
> Date: Tue May 26 09:38:31 2020 -0600
>
> sched/fair: Don't NUMA balance for kthreads

Thanks, it's a better fix than mine; I will backport it.
Regards,
Xiaoguang Wang
>
>
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Xiaoguang Wang @ 2020-06-07 12:37 UTC
To: Pavel Begunkov, io-uring; +Cc: axboe, joseph.qi
hi,
> On 06/06/2020 18:12, Xiaoguang Wang wrote:
>> While testing io_uring in our internal kernel (note: it is not the
>> upstream kernel), we saw the below panic:
>> [...]
>> [ 872.646575] Call trace:
>> [ 872.653139] task_numa_work+0x4c/0x310
>> [ 872.660916] task_work_run+0xb0/0xe0
>> [ 872.668400] io_sq_thread+0x164/0x388
>> [ 872.675829] kthread+0x108/0x138
>>
>> The reason is that once io_sq_thread has a valid mm, the scheduler may
>> call task_tick_numa(), which queues a task_numa_work() callback that
>> accesses mm. If mm has already been dropped by the time task_work_run()
>> executes that callback, the above panic happens.
>>
>> To fix this bug, only call task_work_run() before dropping mm.
>
> So, the problem is that poll/async paths re-issue requests with
> __io_queue_sqe(), which doesn't care about current->mm, and which
> can be NULL for io_sq_thread(). Right?
No, the above panic is not triggered by the poll/async paths.
See the below code path:
==> task_tick_fair()
====> task_tick_numa()
======> task_work_add(), where the work is task_numa_work(),
        which accesses mm.

In sqpoll mode, there may be sqes that need mm, so io_sq_thread holds a
valid mm while handling them, and the scheduler may then run the above
path. In io_sq_thread we drop mm before calling task_work_run(); if a
task_numa_work() has been queued, the panic occurs.
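To make the trigger concrete, the scheduler side is roughly the
following (simplified from kernel/sched/fair.c of that era, not a
verbatim copy):

	static void task_tick_numa(struct rq *rq, struct task_struct *curr)
	{
		struct callback_head *work = &curr->numa_work;

		/* curr->mm is non-NULL here because io_sq_thread has done
		 * use_mm(), so the sqpoll kthread passes this check ...
		 */
		if (!curr->mm || (curr->flags & PF_EXITING) ||
		    work->next != work)
			return;

		/* ... and task_numa_work() gets queued. If io_sq_thread
		 * then drops mm before calling task_work_run(), the work
		 * dereferences a stale mm and panics.
		 */
		task_work_add(curr, work, true);
	}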
Regards,
Xiaoguang Wang
>
>>
>> Signed-off-by: Xiaoguang Wang <[email protected]>
>> ---
>> fs/io_uring.c | 15 ++++++++-------
>> 1 file changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 6391a00ff8b7..32381984b2a6 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -6134,6 +6134,13 @@ static int io_sq_thread(void *data)
>> * to enter the kernel to reap and flush events.
>> */
>> if (!to_submit || ret == -EBUSY) {
>> + /*
>> + * The current task context may already have a valid mm,
>> + * which means work items that access mm may have been
>> + * queued, so we must run them before dropping mm.
>> + */
>> + if (current->task_works)
>> + task_work_run();
>
> Even though you're not dropping mm, the thread might not have it in the
> first place. See how it's done in io_init_req(). How about setting mm
> either lazily in io_poll_task_func()/io_async_task_func(), or before
> task_work_run() in io_sq_thread()?
>
>> /*
>> * Drop cur_mm before scheduling, we can't hold it for
>> * long periods (or over schedule()). Do this before
>> @@ -6152,8 +6159,6 @@ static int io_sq_thread(void *data)
>> if (!list_empty(&ctx->poll_list) ||
>> (!time_after(jiffies, timeout) && ret != -EBUSY &&
>> !percpu_ref_is_dying(&ctx->refs))) {
>> - if (current->task_works)
>> - task_work_run();
>> cond_resched();
>> continue;
>> }
>> @@ -6185,11 +6190,7 @@ static int io_sq_thread(void *data)
>> finish_wait(&ctx->sqo_wait, &wait);
>> break;
>> }
>> - if (current->task_works) {
>> - task_work_run();
>> - finish_wait(&ctx->sqo_wait, &wait);
>> - continue;
>> - }
>> +
>> if (signal_pending(current))
>> flush_signals(current);
>> schedule();
>>
>
* Re: [PATCH] io_uring: execute task_work_run() before dropping mm
From: Pavel Begunkov @ 2020-06-07 15:36 UTC
To: Xiaoguang Wang, io-uring; +Cc: axboe, joseph.qi
On 07/06/2020 15:37, Xiaoguang Wang wrote:
>>> The reason is that once io_sq_thread has a valid mm, the scheduler may
>>> call task_tick_numa(), which queues a task_numa_work() callback that
>>> accesses mm. If mm has already been dropped by the time task_work_run()
>>> executes that callback, the above panic happens.
>>>
>>> To fix this bug, only call task_work_run() before dropping mm.
>>
>> So, the problem is that poll/async paths re-issue requests with
>> __io_queue_sqe(), which doesn't care about current->mm, and which
>> can be NULL for io_sq_thread(). Right?
> No, the above panic is not triggered by the poll/async paths.
> See the below code path:
> ==> task_tick_fair()
> ====> task_tick_numa()
> ======> task_work_add(), where the work is task_numa_work(),
>         which accesses mm.
>
> In sqpoll mode, there may be sqes that need mm, so io_sq_thread holds a
> valid mm while handling them, and the scheduler may then run the above
> path. In io_sq_thread we drop mm before calling task_work_run(); if a
> task_numa_work() has been queued, the panic occurs.
>
Got it, thanks for explaining.
--
Pavel Begunkov