public inbox for [email protected]
* [BUG] io-uring triggered lockdep splat
@ 2021-08-10  7:57 Thomas Gleixner
  2021-08-10  9:58 ` Pavel Begunkov
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Gleixner @ 2021-08-10  7:57 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-fsdevel, Alexander Viro, Pavel Begunkov, io-uring,
	Sebastian Andrzej Siewior

Jens,

running 'rsrc_tags' from the liburing tests on v5.14-rc5 triggers the
following lockdep splat:

[  265.866713] ======================================================
[  265.867585] WARNING: possible circular locking dependency detected
[  265.868450] 5.14.0-rc5 #69 Tainted: G            E    
[  265.869174] ------------------------------------------------------
[  265.870050] kworker/3:1/86 is trying to acquire lock:
[  265.870759] ffff88812100f0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_rsrc_put_work+0x142/0x1b0
[  265.871957] 
               but task is already holding lock:
[  265.872777] ffffc900004a3e70 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.874334] 
               which lock already depends on the new lock.

[  265.875474] 
               the existing dependency chain (in reverse order) is:
[  265.876512] 
               -> #1 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}:
[  265.877750]        __flush_work+0x372/0x4f0
[  265.878343]        io_rsrc_ref_quiesce.part.0.constprop.0+0x35/0xb0
[  265.879227]        __do_sys_io_uring_register+0x652/0x1080
[  265.880009]        do_syscall_64+0x3b/0x90
[  265.880598]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[  265.881383] 
               -> #0 (&ctx->uring_lock){+.+.}-{3:3}:
[  265.882257]        __lock_acquire+0x1130/0x1df0
[  265.882903]        lock_acquire+0xc8/0x2d0
[  265.883485]        __mutex_lock+0x88/0x780
[  265.884067]        io_rsrc_put_work+0x142/0x1b0
[  265.884713]        process_one_work+0x2a2/0x590
[  265.885357]        worker_thread+0x55/0x3c0
[  265.885958]        kthread+0x143/0x160
[  265.886493]        ret_from_fork+0x22/0x30
[  265.887079] 
               other info that might help us debug this:

[  265.888206]  Possible unsafe locking scenario:

[  265.889043]        CPU0                    CPU1
[  265.889687]        ----                    ----
[  265.890328]   lock((work_completion)(&(&ctx->rsrc_put_work)->work));
[  265.891211]                                lock(&ctx->uring_lock);
[  265.892074]                                lock((work_completion)(&(&ctx->rsrc_put_work)->work));
[  265.893310]   lock(&ctx->uring_lock);
[  265.893833] 
                *** DEADLOCK ***

[  265.894660] 2 locks held by kworker/3:1/86:
[  265.895252]  #0: ffff888100059738 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.896561]  #1: ffffc900004a3e70 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.898178] 
               stack backtrace:
[  265.898789] CPU: 3 PID: 86 Comm: kworker/3:1 Kdump: loaded Tainted: G            E     5.14.0-rc5 #69
[  265.900072] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
[  265.901195] Workqueue: events io_rsrc_put_work
[  265.901825] Call Trace:
[  265.902173]  dump_stack_lvl+0x57/0x72
[  265.902698]  check_noncircular+0xf2/0x110
[  265.903270]  ? __lock_acquire+0x380/0x1df0
[  265.903889]  __lock_acquire+0x1130/0x1df0
[  265.904462]  lock_acquire+0xc8/0x2d0
[  265.904967]  ? io_rsrc_put_work+0x142/0x1b0
[  265.905596]  ? lock_is_held_type+0xa5/0x120
[  265.906193]  __mutex_lock+0x88/0x780
[  265.906700]  ? io_rsrc_put_work+0x142/0x1b0
[  265.907286]  ? io_rsrc_put_work+0x142/0x1b0
[  265.907877]  ? lock_acquire+0xc8/0x2d0
[  265.908408]  io_rsrc_put_work+0x142/0x1b0
[  265.908976]  process_one_work+0x2a2/0x590
[  265.909544]  worker_thread+0x55/0x3c0
[  265.910061]  ? process_one_work+0x590/0x590
[  265.910655]  kthread+0x143/0x160
[  265.911114]  ? set_kthread_struct+0x40/0x40
[  265.911704]  ret_from_fork+0x22/0x30

Thanks,

        tglx


* Re: [BUG] io-uring triggered lockdep splat
  2021-08-10  7:57 [BUG] io-uring triggered lockdep splat Thomas Gleixner
@ 2021-08-10  9:58 ` Pavel Begunkov
  2021-08-10 16:51   ` Thomas Gleixner
  0 siblings, 1 reply; 3+ messages in thread
From: Pavel Begunkov @ 2021-08-10  9:58 UTC (permalink / raw)
  To: Thomas Gleixner, Jens Axboe
  Cc: linux-fsdevel, Alexander Viro, io-uring,
	Sebastian Andrzej Siewior

On 8/10/21 8:57 AM, Thomas Gleixner wrote:
> Jens,
> 
> running 'rsrc_tags' from the liburing tests on v5.14-rc5 triggers the
> following lockdep splat:

This got addressed yesterday, thanks:

https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.14&id=c018db4a57f3e31a9cb24d528e9f094eda89a499


> [ lockdep splat trimmed ]

-- 
Pavel Begunkov


* Re: [BUG] io-uring triggered lockdep splat
  2021-08-10  9:58 ` Pavel Begunkov
@ 2021-08-10 16:51   ` Thomas Gleixner
  0 siblings, 0 replies; 3+ messages in thread
From: Thomas Gleixner @ 2021-08-10 16:51 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe
  Cc: linux-fsdevel, Alexander Viro, io-uring,
	Sebastian Andrzej Siewior

On Tue, Aug 10 2021 at 10:58, Pavel Begunkov wrote:

> On 8/10/21 8:57 AM, Thomas Gleixner wrote:
>> Jens,
>> 
>> running 'rsrc_tags' from the liburing tests on v5.14-rc5 triggers the
>> following lockdep splat:
>
> Got addressed yesterday, thanks
>
> https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.14&id=c018db4a57f3e31a9cb24d528e9f094eda89a499

Thanks for the pointer!

