From: Thomas Gleixner <[email protected]>
To: Jens Axboe <[email protected]>
Cc: [email protected],
	Alexander Viro <[email protected]>,
	Pavel Begunkov <[email protected]>,
	[email protected],
	Sebastian Andrzej Siewior <[email protected]>
Subject: [BUG] io-uring triggered lockdep splat
Date: Tue, 10 Aug 2021 09:57:26 +0200
Message-ID: <87r1f1speh.ffs@tglx>

Jens,

running the 'rsrc_tags' test from the liburing test suite on v5.14-rc5
triggers the following lockdep splat:

[  265.866713] ======================================================
[  265.867585] WARNING: possible circular locking dependency detected
[  265.868450] 5.14.0-rc5 #69 Tainted: G            E    
[  265.869174] ------------------------------------------------------
[  265.870050] kworker/3:1/86 is trying to acquire lock:
[  265.870759] ffff88812100f0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_rsrc_put_work+0x142/0x1b0
[  265.871957] 
               but task is already holding lock:
[  265.872777] ffffc900004a3e70 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.874334] 
               which lock already depends on the new lock.

[  265.875474] 
               the existing dependency chain (in reverse order) is:
[  265.876512] 
               -> #1 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}:
[  265.877750]        __flush_work+0x372/0x4f0
[  265.878343]        io_rsrc_ref_quiesce.part.0.constprop.0+0x35/0xb0
[  265.879227]        __do_sys_io_uring_register+0x652/0x1080
[  265.880009]        do_syscall_64+0x3b/0x90
[  265.880598]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[  265.881383] 
               -> #0 (&ctx->uring_lock){+.+.}-{3:3}:
[  265.882257]        __lock_acquire+0x1130/0x1df0
[  265.882903]        lock_acquire+0xc8/0x2d0
[  265.883485]        __mutex_lock+0x88/0x780
[  265.884067]        io_rsrc_put_work+0x142/0x1b0
[  265.884713]        process_one_work+0x2a2/0x590
[  265.885357]        worker_thread+0x55/0x3c0
[  265.885958]        kthread+0x143/0x160
[  265.886493]        ret_from_fork+0x22/0x30
[  265.887079] 
               other info that might help us debug this:

[  265.888206]  Possible unsafe locking scenario:

[  265.889043]        CPU0                    CPU1
[  265.889687]        ----                    ----
[  265.890328]   lock((work_completion)(&(&ctx->rsrc_put_work)->work));
[  265.891211]                                lock(&ctx->uring_lock);
[  265.892074]                                lock((work_completion)(&(&ctx->rsrc_put_work)->work));
[  265.893310]   lock(&ctx->uring_lock);
[  265.893833] 
                *** DEADLOCK ***

[  265.894660] 2 locks held by kworker/3:1/86:
[  265.895252]  #0: ffff888100059738 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.896561]  #1: ffffc900004a3e70 ((work_completion)(&(&ctx->rsrc_put_work)->work)){+.+.}-{0:0}, at: process_one_work+0x218/0x590
[  265.898178] 
               stack backtrace:
[  265.898789] CPU: 3 PID: 86 Comm: kworker/3:1 Kdump: loaded Tainted: G            E     5.14.0-rc5 #69
[  265.900072] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
[  265.901195] Workqueue: events io_rsrc_put_work
[  265.901825] Call Trace:
[  265.902173]  dump_stack_lvl+0x57/0x72
[  265.902698]  check_noncircular+0xf2/0x110
[  265.903270]  ? __lock_acquire+0x380/0x1df0
[  265.903889]  __lock_acquire+0x1130/0x1df0
[  265.904462]  lock_acquire+0xc8/0x2d0
[  265.904967]  ? io_rsrc_put_work+0x142/0x1b0
[  265.905596]  ? lock_is_held_type+0xa5/0x120
[  265.906193]  __mutex_lock+0x88/0x780
[  265.906700]  ? io_rsrc_put_work+0x142/0x1b0
[  265.907286]  ? io_rsrc_put_work+0x142/0x1b0
[  265.907877]  ? lock_acquire+0xc8/0x2d0
[  265.908408]  io_rsrc_put_work+0x142/0x1b0
[  265.908976]  process_one_work+0x2a2/0x590
[  265.909544]  worker_thread+0x55/0x3c0
[  265.910061]  ? process_one_work+0x590/0x590
[  265.910655]  kthread+0x143/0x160
[  265.911114]  ? set_kthread_struct+0x40/0x40
[  265.911704]  ret_from_fork+0x22/0x30

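For reference, the inversion distills to the pattern below. This is
pseudocode reconstructed from the dependency chain above, not the
actual io_uring code paths; the lock and function names are taken
from the splat:

	/* #1: io_uring_register() syscall path -- the rsrc_put_work
	 *     item is flushed while ctx->uring_lock is held:
	 */
	mutex_lock(&ctx->uring_lock);
	io_rsrc_ref_quiesce();	/* -> __flush_work(&ctx->rsrc_put_work) */
	mutex_unlock(&ctx->uring_lock);

	/* #0: workqueue context -- the work item itself takes the
	 *     same mutex:
	 */
	io_rsrc_put_work()
		mutex_lock(&ctx->uring_lock);

If the worker picks up the work item while the register path holds
uring_lock and waits for the work to complete, neither side can make
progress.
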
Thanks,

        tglx
