From: Pavel Begunkov <[email protected]>
To: Nadav Amit <[email protected]>
Cc: [email protected], [email protected],
	LKML <[email protected]>,
	Alexander Viro <[email protected]>,
	Jens Axboe <[email protected]>
Subject: Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
Date: Sun, 29 Nov 2020 00:13:20 +0000
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 28/11/2020 23:59, Nadav Amit wrote:
> Hello Pavel,
> 
> I got the following lockdep splat while rebasing my work onto 5.10-rc5 (the
> kernel below is based on 5.10-rc5+).
> 
> I did not actually confirm that the problem triggers without my changes,
> as my io_uring workload requires some kernel changes (not io_uring changes),
> but IMHO it seems pretty clear that this is a result of your commit
> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
> io_file_data_ref_zero() inside softirq context.

Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
the fix is queued for 5.11. Thanks for letting us know anyway!

https://lore.kernel.org/io-uring/[email protected]/T/#t


Jens, I think it would be best to queue it for 5.10, at least so that lockdep
doesn't complain.
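
For anyone hitting this before the fix lands: the pattern lockdep is flagging
boils down to taking the same spinlock both from process context with softirqs
enabled and from the percpu_ref release callback, which runs from the RCU
softirq. A minimal sketch of that shape follows; it is not the actual io_uring
code, just an illustration with made-up names:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/percpu-refcount.h>

struct file_data {
	struct percpu_ref refs;
	spinlock_t lock;
	struct list_head ref_list;
};

/*
 * Process context, e.g. an io_uring_register() path: softirqs are still
 * enabled here, so lockdep records the lock as SOFTIRQ-ON-W.
 */
static void register_path(struct file_data *data)
{
	spin_lock(&data->lock);
	/* ... manipulate data->ref_list ... */
	spin_unlock(&data->lock);
}

/*
 * percpu_ref release callback, invoked from RCU callback (softirq)
 * context: the same lock is now IN-SOFTIRQ-W, hence the
 * "inconsistent lock state" splat.
 */
static void file_data_ref_zero(struct percpu_ref *ref)
{
	struct file_data *data = container_of(ref, struct file_data, refs);

	spin_lock(&data->lock);
	/* ... recycle the ref node ... */
	spin_unlock(&data->lock);
}

If the softirq fires on a CPU that already holds the lock in process context,
it deadlocks, which is exactly the scenario in the report quoted below. The
usual remedy is to make every process-context acquisition of such a lock
bottom-half safe (spin_lock_bh()) or to use spin_lock_irqsave()/
spin_unlock_irqrestore() throughout, so the release callback can never
interrupt a holder on the same CPU.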

> 
> Let me know if my analysis is wrong.
> 
> Regards,
> Nadav
> 
> [  136.349353] ================================
> [  136.350212] WARNING: inconsistent lock state
> [  136.351093] 5.10.0-rc5+ #1435 Not tainted
> [  136.352003] --------------------------------
> [  136.352891] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
> [  136.354057] swapper/5/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
> [  136.355078] ffff88810417d6a8 (&file_data->lock){+.?.}-{2:2}, at: io_file_data_ref_zero+0x4d/0x220
> [  136.356717] {SOFTIRQ-ON-W} state was registered at:
> [  136.357539]   lock_acquire+0x172/0x520
> [  136.358209]   _raw_spin_lock+0x30/0x40
> [  136.358880]   __io_uring_register+0x1c99/0x1fe0
> [  136.359656]   __x64_sys_io_uring_register+0xe2/0x270
> [  136.360489]   do_syscall_64+0x39/0x90
> [  136.361144]   entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [  136.361991] irq event stamp: 835836
> [  136.362627] hardirqs last  enabled at (835836): [<ffffffff82856721>] _raw_spin_unlock_irqrestore+0x41/0x50
> [  136.364112] hardirqs last disabled at (835835): [<ffffffff828564ca>] _raw_spin_lock_irqsave+0x5a/0x60
> [  136.365553] softirqs last  enabled at (835824): [<ffffffff810cfa71>] _local_bh_enable+0x21/0x40
> [  136.366920] softirqs last disabled at (835825): [<ffffffff82a01022>] asm_call_irq_on_stack+0x12/0x20
> [  136.368335] 
> [  136.368335] other info that might help us debug this:
> [  136.369414]  Possible unsafe locking scenario:
> [  136.369414] 
> [  136.370414]        CPU0
> [  136.370907]        ----
> [  136.371403]   lock(&file_data->lock);
> [  136.372064]   <Interrupt>
> [  136.372585]     lock(&file_data->lock);
> [  136.373269] 
> [  136.373269]  *** DEADLOCK ***
> [  136.373269] 
> [  136.374319] 2 locks held by swapper/5/0:
> [  136.375005]  #0: ffffffff83c45380 (rcu_callback){....}-{0:0}, at: rcu_core+0x451/0xb70
> [  136.376284]  #1: ffffffff83c454a0 (rcu_read_lock){....}-{1:2}, at: percpu_ref_switch_to_atomic_rcu+0x139/0x320
> [  136.377849] 
> [  136.377849] stack backtrace:
> [  136.378650] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 5.10.0-rc5irina+ #1435
> [  136.379746] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
> [  136.381550] Call Trace:
> [  136.382053]  <IRQ>
> [  136.382502]  dump_stack+0xa4/0xd9
> [  136.383116]  print_usage_bug.cold+0x217/0x220
> [  136.383871]  mark_lock+0xb90/0xe80
> [  136.384506]  ? print_usage_bug+0x180/0x180
> [  136.385223]  ? __kasan_check_read+0x11/0x20
> [  136.385946]  ? mark_lock+0x116/0xe80
> [  136.386599]  ? print_usage_bug+0x180/0x180
> [  136.387324]  ? __lock_acquire+0x8f5/0x2a80
> [  136.388039]  ? __kasan_check_read+0x11/0x20
> [  136.388776]  ? __lock_acquire+0x8f5/0x2a80
> [  136.389493]  __lock_acquire+0xdc9/0x2a80
> [  136.390190]  ? lockdep_hardirqs_on_prepare+0x210/0x210
> [  136.391039]  ? rcu_read_lock_sched_held+0xa1/0xd0
> [  136.391835]  ? rcu_read_lock_bh_held+0xb0/0xb0
> [  136.392603]  lock_acquire+0x172/0x520
> [  136.393258]  ? io_file_data_ref_zero+0x4d/0x220
> [  136.394025]  ? lock_release+0x410/0x410
> [  136.394705]  ? lock_acquire+0x172/0x520
> [  136.395386]  ? percpu_ref_switch_to_atomic_rcu+0x139/0x320
> [  136.396277]  ? lock_release+0x410/0x410
> [  136.396961]  _raw_spin_lock+0x30/0x40
> [  136.397620]  ? io_file_data_ref_zero+0x4d/0x220
> [  136.398392]  io_file_data_ref_zero+0x4d/0x220
> [  136.399138]  percpu_ref_switch_to_atomic_rcu+0x310/0x320
> [  136.400007]  ? percpu_ref_init+0x180/0x180
> [  136.400730]  rcu_core+0x49c/0xb70
> [  136.401344]  ? rcu_core+0x451/0xb70
> [  136.401978]  ? strict_work_handler+0x150/0x150
> [  136.402740]  ? rcu_read_lock_sched_held+0xa1/0xd0
> [  136.403535]  ? rcu_read_lock_bh_held+0xb0/0xb0
> [  136.404298]  rcu_core_si+0xe/0x10
> [  136.404914]  __do_softirq+0x104/0x59d
> [  136.405572]  asm_call_irq_on_stack+0x12/0x20
> [  136.406306]  </IRQ>
> [  136.406760]  do_softirq_own_stack+0x6f/0x80
> [  136.407484]  irq_exit_rcu+0xf3/0x100
> [  136.408134]  sysvec_apic_timer_interrupt+0x4b/0xb0
> [  136.408946]  asm_sysvec_apic_timer_interrupt+0x12/0x20
> [  136.409798] RIP: 0010:default_idle+0x1c/0x20
> [  136.410536] Code: eb cd 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 55 48 89 e5 e8 b2 b1 a6 fe e9 07 00 00 00 0f 00 2d 26 f1 5c 00 fb f4 <5d> c3 cc cc 0f 1f 44 00 00 55 48 89 e5 41 55 4c 8b 2d 8e c2 00 02
> [  136.413291] RSP: 0018:ffffc9000011fda8 EFLAGS: 00000206
> [  136.414150] RAX: 00000000000cc0ed RBX: 0000000000000005 RCX: dffffc0000000000
> [  136.415256] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8285578e
> [  136.416364] RBP: ffffc9000011fda8 R08: 0000000000000001 R09: 0000000000000001
> [  136.417474] R10: ffff8881e877546b R11: ffffed103d0eea8d R12: 0000000000000005
> [  136.418579] R13: ffffffff84538220 R14: 0000000000000000 R15: ffff888100808000
> [  136.419694]  ? default_idle+0xe/0x20
> [  136.420340]  ? default_idle+0xe/0x20
> [  136.420995]  arch_cpu_idle+0x15/0x20
> [  136.421640]  default_idle_call+0x95/0x2c0
> [  136.422343]  do_idle+0x3bd/0x480
> [  136.422947]  ? arch_cpu_idle_exit+0x40/0x40
> [  136.423679]  cpu_startup_entry+0x20/0x30
> [  136.424369]  start_secondary+0x1c7/0x220
> [  136.425067]  ? set_cpu_sibling_map+0xdc0/0xdc0
> [  136.425829]  ? set_bringup_idt_handler.constprop.0+0x84/0x90
> [  136.426746]  ? start_cpu0+0xc/0xc
> [  136.427357]  secondary_startup_64_no_verify+0xb0/0xbb
> 

-- 
Pavel Begunkov

Thread overview: 6+ messages
2020-11-28 23:59 Lockdep warning on io_file_data_ref_zero() with 5.10-rc5 Nadav Amit
2020-11-29  0:13 ` Pavel Begunkov [this message]
2020-11-29  0:21   ` Nadav Amit
2020-11-30 18:16   ` Jens Axboe
2020-12-15  6:58     ` Xiaoguang Wang
2020-12-15 16:18       ` Jens Axboe
