* Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
@ 2020-11-28 23:59 Nadav Amit
From: Nadav Amit @ 2020-11-28 23:59 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: linux-fsdevel, io-uring, LKML, Alexander Viro, Jens Axboe
Hello Pavel,
I got the following lockdep splat while rebasing my work on the 5.10-rc5
kernel (based on 5.10-rc5+).
I did not actually confirm that the problem is triggered without my changes,
as my io_uring workload requires some kernel changes (not io_uring changes),
yet IMHO it seems pretty clear that this is a result of your commit
e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
io_file_data_ref_zero() inside a softirq context.
Let me know if my analysis is wrong.
Regards,
Nadav
[ 136.349353] ================================
[ 136.350212] WARNING: inconsistent lock state
[ 136.351093] 5.10.0-rc5+ #1435 Not tainted
[ 136.352003] --------------------------------
[ 136.352891] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[ 136.354057] swapper/5/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 136.355078] ffff88810417d6a8 (&file_data->lock){+.?.}-{2:2}, at: io_file_data_ref_zero+0x4d/0x220
[ 136.356717] {SOFTIRQ-ON-W} state was registered at:
[ 136.357539] lock_acquire+0x172/0x520
[ 136.358209] _raw_spin_lock+0x30/0x40
[ 136.358880] __io_uring_register+0x1c99/0x1fe0
[ 136.359656] __x64_sys_io_uring_register+0xe2/0x270
[ 136.360489] do_syscall_64+0x39/0x90
[ 136.361144] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 136.361991] irq event stamp: 835836
[ 136.362627] hardirqs last enabled at (835836): [<ffffffff82856721>] _raw_spin_unlock_irqrestore+0x41/0x50
[ 136.364112] hardirqs last disabled at (835835): [<ffffffff828564ca>] _raw_spin_lock_irqsave+0x5a/0x60
[ 136.365553] softirqs last enabled at (835824): [<ffffffff810cfa71>] _local_bh_enable+0x21/0x40
[ 136.366920] softirqs last disabled at (835825): [<ffffffff82a01022>] asm_call_irq_on_stack+0x12/0x20
[ 136.368335]
[ 136.368335] other info that might help us debug this:
[ 136.369414] Possible unsafe locking scenario:
[ 136.369414]
[ 136.370414] CPU0
[ 136.370907] ----
[ 136.371403] lock(&file_data->lock);
[ 136.372064] <Interrupt>
[ 136.372585] lock(&file_data->lock);
[ 136.373269]
[ 136.373269] *** DEADLOCK ***
[ 136.373269]
[ 136.374319] 2 locks held by swapper/5/0:
[ 136.375005] #0: ffffffff83c45380 (rcu_callback){....}-{0:0}, at: rcu_core+0x451/0xb70
[ 136.376284] #1: ffffffff83c454a0 (rcu_read_lock){....}-{1:2}, at: percpu_ref_switch_to_atomic_rcu+0x139/0x320
[ 136.377849]
[ 136.377849] stack backtrace:
[ 136.378650] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 5.10.0-rc5irina+ #1435
[ 136.379746] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
[ 136.381550] Call Trace:
[ 136.382053] <IRQ>
[ 136.382502] dump_stack+0xa4/0xd9
[ 136.383116] print_usage_bug.cold+0x217/0x220
[ 136.383871] mark_lock+0xb90/0xe80
[ 136.384506] ? print_usage_bug+0x180/0x180
[ 136.385223] ? __kasan_check_read+0x11/0x20
[ 136.385946] ? mark_lock+0x116/0xe80
[ 136.386599] ? print_usage_bug+0x180/0x180
[ 136.387324] ? __lock_acquire+0x8f5/0x2a80
[ 136.388039] ? __kasan_check_read+0x11/0x20
[ 136.388776] ? __lock_acquire+0x8f5/0x2a80
[ 136.389493] __lock_acquire+0xdc9/0x2a80
[ 136.390190] ? lockdep_hardirqs_on_prepare+0x210/0x210
[ 136.391039] ? rcu_read_lock_sched_held+0xa1/0xd0
[ 136.391835] ? rcu_read_lock_bh_held+0xb0/0xb0
[ 136.392603] lock_acquire+0x172/0x520
[ 136.393258] ? io_file_data_ref_zero+0x4d/0x220
[ 136.394025] ? lock_release+0x410/0x410
[ 136.394705] ? lock_acquire+0x172/0x520
[ 136.395386] ? percpu_ref_switch_to_atomic_rcu+0x139/0x320
[ 136.396277] ? lock_release+0x410/0x410
[ 136.396961] _raw_spin_lock+0x30/0x40
[ 136.397620] ? io_file_data_ref_zero+0x4d/0x220
[ 136.398392] io_file_data_ref_zero+0x4d/0x220
[ 136.399138] percpu_ref_switch_to_atomic_rcu+0x310/0x320
[ 136.400007] ? percpu_ref_init+0x180/0x180
[ 136.400730] rcu_core+0x49c/0xb70
[ 136.401344] ? rcu_core+0x451/0xb70
[ 136.401978] ? strict_work_handler+0x150/0x150
[ 136.402740] ? rcu_read_lock_sched_held+0xa1/0xd0
[ 136.403535] ? rcu_read_lock_bh_held+0xb0/0xb0
[ 136.404298] rcu_core_si+0xe/0x10
[ 136.404914] __do_softirq+0x104/0x59d
[ 136.405572] asm_call_irq_on_stack+0x12/0x20
[ 136.406306] </IRQ>
[ 136.406760] do_softirq_own_stack+0x6f/0x80
[ 136.407484] irq_exit_rcu+0xf3/0x100
[ 136.408134] sysvec_apic_timer_interrupt+0x4b/0xb0
[ 136.408946] asm_sysvec_apic_timer_interrupt+0x12/0x20
[ 136.409798] RIP: 0010:default_idle+0x1c/0x20
[ 136.410536] Code: eb cd 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 55 48 89 e5 e8 b2 b1 a6 fe e9 07 00 00 00 0f 00 2d 26 f1 5c 00 fb f4 <5d> c3 cc cc 0f 1f 44 00 00 55 48 89 e5 41 55 4c 8b 2d 8e c2 00 02
[ 136.413291] RSP: 0018:ffffc9000011fda8 EFLAGS: 00000206
[ 136.414150] RAX: 00000000000cc0ed RBX: 0000000000000005 RCX: dffffc0000000000
[ 136.415256] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8285578e
[ 136.416364] RBP: ffffc9000011fda8 R08: 0000000000000001 R09: 0000000000000001
[ 136.417474] R10: ffff8881e877546b R11: ffffed103d0eea8d R12: 0000000000000005
[ 136.418579] R13: ffffffff84538220 R14: 0000000000000000 R15: ffff888100808000
[ 136.419694] ? default_idle+0xe/0x20
[ 136.420340] ? default_idle+0xe/0x20
[ 136.420995] arch_cpu_idle+0x15/0x20
[ 136.421640] default_idle_call+0x95/0x2c0
[ 136.422343] do_idle+0x3bd/0x480
[ 136.422947] ? arch_cpu_idle_exit+0x40/0x40
[ 136.423679] cpu_startup_entry+0x20/0x30
[ 136.424369] start_secondary+0x1c7/0x220
[ 136.425067] ? set_cpu_sibling_map+0xdc0/0xdc0
[ 136.425829] ? set_bringup_idt_handler.constprop.0+0x84/0x90
[ 136.426746] ? start_cpu0+0xc/0xc
[ 136.427357] secondary_startup_64_no_verify+0xb0/0xbb
* Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
From: Pavel Begunkov @ 2020-11-29 0:13 UTC (permalink / raw)
To: Nadav Amit; +Cc: linux-fsdevel, io-uring, LKML, Alexander Viro, Jens Axboe
On 28/11/2020 23:59, Nadav Amit wrote:
> Hello Pavel,
>
> I got the following lockdep splat while rebasing my work on the 5.10-rc5
> kernel (based on 5.10-rc5+).
>
> I did not actually confirm that the problem is triggered without my changes,
> as my io_uring workload requires some kernel changes (not io_uring changes),
> yet IMHO it seems pretty clear that this is a result of your commit
> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
> io_file_data_ref_zero() inside a softirq context.
Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
queued for 5.11. Thanks for letting us know anyway!
https://lore.kernel.org/io-uring/[email protected]/T/#t
Jens, I think it's for the best to add it for 5.10, at least so that lockdep
doesn't complain.
>
> Let me know if my analysis is wrong.
>
> Regards,
> Nadav
>
> [lockdep splat snipped; identical to the report above]
--
Pavel Begunkov
* Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
From: Nadav Amit @ 2020-11-29 0:21 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: linux-fsdevel, io-uring, LKML, Alexander Viro, Jens Axboe
> On Nov 28, 2020, at 4:13 PM, Pavel Begunkov <[email protected]> wrote:
>
> On 28/11/2020 23:59, Nadav Amit wrote:
>> Hello Pavel,
>>
>> I got the following lockdep splat while rebasing my work on the 5.10-rc5
>> kernel (based on 5.10-rc5+).
>>
>> I did not actually confirm that the problem is triggered without my changes,
>> as my io_uring workload requires some kernel changes (not io_uring changes),
>> yet IMHO it seems pretty clear that this is a result of your commit
>> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
>> io_file_data_ref_zero() inside a softirq context.
>
> Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
> queued for 5.11. Thanks for letting us know anyway!
>
> https://lore.kernel.org/io-uring/[email protected]/T/#t
Thanks for the quick response, and sorry for the noise. I should improve my
Googling abilities and check the io_uring repository next time.
Regards,
Nadav
* Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
From: Jens Axboe @ 2020-11-30 18:16 UTC (permalink / raw)
To: Pavel Begunkov, Nadav Amit; +Cc: linux-fsdevel, io-uring, LKML, Alexander Viro
On 11/28/20 5:13 PM, Pavel Begunkov wrote:
> On 28/11/2020 23:59, Nadav Amit wrote:
>> Hello Pavel,
>>
>> I got the following lockdep splat while rebasing my work on the 5.10-rc5
>> kernel (based on 5.10-rc5+).
>>
>> I did not actually confirm that the problem is triggered without my changes,
>> as my io_uring workload requires some kernel changes (not io_uring changes),
>> yet IMHO it seems pretty clear that this is a result of your commit
>> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
>> io_file_data_ref_zero() inside a softirq context.
>
> Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
> queued for 5.11. Thanks for letting us know anyway!
>
> https://lore.kernel.org/io-uring/[email protected]/T/#t
>
>
> Jens, I think it's for the best to add it for 5.10, at least so that lockdep
> doesn't complain.
Yeah, maybe, though it's "just" a lockdep issue; it can't trigger any
deadlocks. I'd rather just keep it in 5.11 and ensure it goes to stable.
This isn't new in this series.
--
Jens Axboe
* Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
From: Xiaoguang Wang @ 2020-12-15 6:58 UTC (permalink / raw)
To: Jens Axboe, Pavel Begunkov, Nadav Amit
Cc: linux-fsdevel, io-uring, LKML, Alexander Viro
hi,
> On 11/28/20 5:13 PM, Pavel Begunkov wrote:
>> On 28/11/2020 23:59, Nadav Amit wrote:
>>> Hello Pavel,
>>>
>>> I got the following lockdep splat while rebasing my work on the 5.10-rc5
>>> kernel (based on 5.10-rc5+).
>>>
>>> I did not actually confirm that the problem is triggered without my changes,
>>> as my io_uring workload requires some kernel changes (not io_uring changes),
>>> yet IMHO it seems pretty clear that this is a result of your commit
>>> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
>>> io_file_data_ref_zero() inside a softirq context.
>>
>> Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
>> queued for 5.11. Thanks for letting us know anyway!
>>
>> https://lore.kernel.org/io-uring/[email protected]/T/#t
>>
>>
>> Jens, I think it's for the best to add it for 5.10, at least so that lockdep
>> doesn't complain.
>
> Yeah, maybe, though it's "just" a lockdep issue; it can't trigger any
> deadlocks. I'd rather just keep it in 5.11 and ensure it goes to stable.
> This isn't new in this series.
Sorry, I'm not familiar with the lockdep implementation; I wonder why you
say it can't trigger any deadlocks. Looking at the syzbot report, it seems
that a deadlock may happen.
I also wonder whether the spin_lock_bh() variants are enough; normal I/Os
are completed in interrupt context:
==> io_complete_rw
====> __io_complete_rw
======> io_complete_rw_common
========> __io_req_complete
==========> io_put_req
============> io_free_req
==============> __io_free_req
================> io_dismantle_req
==================> io_put_file
====================> percpu_ref_put(req->fixed_file_refs);
If we drop the last reference here, io_file_data_ref_zero() will be called,
and then we'll call spin_lock(&data->lock) in interrupt context.
Should we use the irq-disabling spin lock variants?
Regards,
Xiaoguang Wang
* Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
From: Jens Axboe @ 2020-12-15 16:18 UTC (permalink / raw)
To: Xiaoguang Wang, Pavel Begunkov, Nadav Amit
Cc: linux-fsdevel, io-uring, LKML, Alexander Viro
On 12/14/20 11:58 PM, Xiaoguang Wang wrote:
> hi,
>
>> On 11/28/20 5:13 PM, Pavel Begunkov wrote:
>>> On 28/11/2020 23:59, Nadav Amit wrote:
>>>> Hello Pavel,
>>>>
>>>> I got the following lockdep splat while rebasing my work on the 5.10-rc5
>>>> kernel (based on 5.10-rc5+).
>>>>
>>>> I did not actually confirm that the problem is triggered without my changes,
>>>> as my io_uring workload requires some kernel changes (not io_uring changes),
>>>> yet IMHO it seems pretty clear that this is a result of your commit
>>>> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
>>>> io_file_data_ref_zero() inside a softirq context.
>>>
>>> Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
>>> queued for 5.11. Thanks for letting us know anyway!
>>>
>>> https://lore.kernel.org/io-uring/[email protected]/T/#t
>>>
>>>
>>> Jens, I think it's for the best to add it for 5.10, at least so that lockdep
>>> doesn't complain.
>>
>> Yeah, maybe, though it's "just" a lockdep issue; it can't trigger any
>> deadlocks. I'd rather just keep it in 5.11 and ensure it goes to stable.
>> This isn't new in this series.
> Sorry, I'm not familiar with lockdep implementation, here I wonder why you say
> it can't trigger any deadlocks, looking at that the syzbot report, seems that
> the deadlock may happen.
Because the only time the lock is actually grabbed in bh context is when
it has dropped to zero and is no longer used. The classic deadlock for this
is if regular use has both contexts, so you can get:
CPU0                            CPU1
grab_lock()
                                bh context, grab_lock()
deadlock. But this simply cannot happen here, as by the time we get to
grabbing it from bh context, there can by definition be no other users
of it left (or new ones).
--
Jens Axboe