From: Pavel Begunkov <[email protected]>
To: Hillf Danton <[email protected]>
Cc: syzbot <[email protected]>,
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected]
Subject: Re: INFO: task hung in __io_uring_files_cancel
Date: Sun, 22 Nov 2020 15:57:12 +0000
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 22/11/2020 09:20, Hillf Danton wrote:
> On Sun, 22 Nov 2020 03:32:01 +0000 Pavel Begunkov wrote:
>> On 22/11/2020 03:04, Hillf Danton wrote:
>>> On Sat, 21 Nov 2020 14:35:15 -0800
>>>> syzbot found the following issue on:
>>>>
>>>> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=10401726500000
>>>> final oops:     https://syzkaller.appspot.com/x/report.txt?x=12401726500000
>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=14401726500000
>>>>
>>>> IMPORTANT: if you fix the issue, please add the following tag to the commit:
>>>> Reported-by: [email protected]
>>>> Fixes: 4d004099a668 ("lockdep: Fix lockdep recursion")
>>>>
>>>> INFO: task syz-executor.0:9557 blocked for more than 143 seconds.
>>>>       Not tainted 5.10.0-rc4-next-20201117-syzkaller #0
>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> task:syz-executor.0  state:D stack:28584 pid: 9557 ppid:  8485 flags:0x00004002
>>>> Call Trace:
>>>>  context_switch kernel/sched/core.c:4269 [inline]
>>>>  __schedule+0x890/0x2030 kernel/sched/core.c:5019
>>>>  schedule+0xcf/0x270 kernel/sched/core.c:5098
>>>>  io_uring_cancel_files fs/io_uring.c:8720 [inline]
>>>>  io_uring_cancel_task_requests fs/io_uring.c:8772 [inline]
>>>>  __io_uring_files_cancel+0xc4d/0x14b0 fs/io_uring.c:8868
>>>>  io_uring_files_cancel include/linux/io_uring.h:51 [inline]
>>>>  exit_files+0xe4/0x170 fs/file.c:456
>>>>  do_exit+0xb61/0x29f0 kernel/exit.c:818
>>>>  do_group_exit+0x125/0x310 kernel/exit.c:920
>>>>  get_signal+0x3ea/0x1f70 kernel/signal.c:2750
>>>>  arch_do_signal_or_restart+0x2a6/0x1ea0 arch/x86/kernel/signal.c:811
>>>>  handle_signal_work kernel/entry/common.c:145 [inline]
>>>>  exit_to_user_mode_loop kernel/entry/common.c:169 [inline]
>>>>  exit_to_user_mode_prepare+0x124/0x200 kernel/entry/common.c:199
>>>>  syscall_exit_to_user_mode+0x38/0x260 kernel/entry/common.c:274
>>>>  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>> RIP: 0033:0x45deb9
>>>> Code: Unable to access opcode bytes at RIP 0x45de8f.
>>>> RSP: 002b:00007fa68397ccf8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
>>>> RAX: fffffffffffffe00 RBX: 000000000118bf28 RCX: 000000000045deb9
>>>> RDX: 0000000000000000 RSI: 0000000000000080 RDI: 000000000118bf28
>>>> RBP: 000000000118bf20 R08: 0000000000000000 R09: 0000000000000000
>>>> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000118bf2c
>>>> R13: 00007fff50acc9af R14: 00007fa68397d9c0 R15: 000000000118bf2c
>>>>
>> ...
>>>
>>> Fix 311daef8013a ("io_uring: replace inflight_wait with tctx->wait")
>>> by cutting the pre-condition to the wakeup: waitqueue_active() and
>>> atomic_read() test different things and are not interchangeable, so
>>> one cannot stand in for the other.
>>
>> Your description doesn't help,
> 
> That is what I could find to describe the changes added in
> 311daef8013a, given that waitqueue_active() and atomic_read() IMO are
> basic building blocks with no ready-made link between them. I don't
> think either of them can be replaced with the other, even in
> 311daef8013a.

They are not linked, but that doesn't automatically make the change
wrong. I still don't understand how it fixes the problem, and it's
better to find the root cause, because there are similar places that
might still be similarly flawed.

>> why do you think this is the problem?
> 
> This is not a tough question, thanks to the reproducer.
> 
>> ->in_idle is always set when io_uring_cancel_files() sleeps on it,
>> and ->inflight_lock should guarantee ordering.
> 
> The syzbot report makes sense, right?
> 
>>>
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -6082,8 +6082,7 @@ static void io_req_drop_files(struct io_
>>>  
>>>  	spin_lock_irqsave(&ctx->inflight_lock, flags);
>>>  	list_del(&req->inflight_entry);
>>> -	if (atomic_read(&tctx->in_idle))
>>> -		wake_up(&tctx->wait);
>>> +	wake_up(&tctx->wait);
>>>  	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
>>>  	req->flags &= ~REQ_F_INFLIGHT;
>>>  	put_files_struct(req->work.identity->files);
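For what it's worth, in the same userspace analogy as above the quoted
diff amounts to making the signal unconditional. wake_up() on a
waitqueue with no waiters, like pthread_cond_signal() with no thread
parked on the condition variable, is a harmless no-op, so the change can
only trade a possibly skipped wakeup for a cheap extra check. The
fragment below is just the waker from the earlier sketch with the
precondition dropped, mirroring the diff; it says nothing about whether
the conditional form can actually lose a wakeup, which is what remains
unexplained.

/* Waker from the earlier sketch with the precondition removed,
 * mirroring the quoted diff. */
static void *waker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inflight_lock);
	inflight--;
	pthread_cond_signal(&idle_wait);  /* unconditional, like the patched wake_up() */
	pthread_mutex_unlock(&inflight_lock);
	return NULL;
}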

-- 
Pavel Begunkov

Thread overview: 5+ messages
2020-11-21 22:35 INFO: task hung in __io_uring_files_cancel syzbot
     [not found] ` <[email protected]>
2020-11-22  3:32   ` Pavel Begunkov
     [not found]   ` <[email protected]>
2020-11-22 15:57     ` Pavel Begunkov [this message]
     [not found]     ` <[email protected]>
2020-11-23 20:25       ` Pavel Begunkov
2020-11-24  9:39 ` syzbot
