If a process is killed while in IORING_OP_FUTEX_WAIT, do_exit()'s call to
exit_mm() causes the futex_private_hash to be freed, along with its
buckets' locks, while the io_uring request still exists. When (a little
later in do_exit()) the io_uring fd is fput(), the resulting
futex_unqueue() tries to use the freed memory that
req->async_data->lock_ptr points to.

I've attached a demo:

# cc uring46b.c
# ./a.out
killing child
BUG: spinlock bad magic on CPU#0, kworker/u4:1/26
Unable to handle kernel paging request at virtual address 6b6b6b6b6b6b711b
Current kworker/u4:1 pgtable: 4K pagesize, 39-bit VAs, pgdp=0x000000008202a000
[6b6b6b6b6b6b711b] pgd=0000000000000000, p4d=0000000000000000, pud=0000000000000000
Oops [#1]
Modules linked in:
CPU: 0 UID: 0 PID: 26 Comm: kworker/u4:1 Not tainted 6.15.0-11192-ga82d78bc13a8 #553 NONE
Hardware name: riscv-virtio,qemu (DT)
Workqueue: iou_exit io_ring_exit_work
epc : spin_dump+0x38/0x6e
 ra : spin_dump+0x30/0x6e
epc : ffffffff80003354 ra : ffffffff8000334c sp : ffffffc600113b60
...
status: 0000000200000120 badaddr: 6b6b6b6b6b6b711b cause: 000000000000000d
[] spin_dump+0x38/0x6e
[] do_raw_spin_lock+0x10a/0x126
[] _raw_spin_lock+0x1a/0x22
[] futex_unqueue+0x2a/0x76
[] __io_futex_cancel+0x72/0x88
[] io_cancel_remove_all+0x50/0x74
[] io_futex_remove_all+0x1a/0x22
[] io_uring_try_cancel_requests+0x2e2/0x36e
[] io_ring_exit_work+0xec/0x3f0
[] process_one_work+0x132/0x2fe
[] worker_thread+0x21e/0x2fe
[] kthread+0xe8/0x1ba
[] ret_from_fork_kernel+0xe/0x5e
[] ret_from_fork_kernel_asm+0x16/0x18
Code: 4517 018b 0513 ca05 00ef 3b60 2603 0049 2601 c491 (a703) 5b04
---[ end trace 0000000000000000 ]---
Kernel panic - not syncing: Fatal exception
---[ end Kernel panic - not syncing: Fatal exception ]---

Robert Morris
rtm@mit.edu