From: Peter Zijlstra <[email protected]>
To: Jens Axboe <[email protected]>
Cc: io-uring <[email protected]>,
LKML <[email protected]>,
Daniel Wagner <[email protected]>
Subject: Re: [PATCH] io-wq: remove GFP_ATOMIC allocation off schedule out path
Date: Thu, 5 Aug 2021 11:17:41 +0200 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On Wed, Aug 04, 2021 at 08:43:43AM -0600, Jens Axboe wrote:
> Daniel reports that the v5.14-rc4-rt4 kernel throws a BUG when running
> stress-ng:
>
> | [ 90.202543] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35
> | [ 90.202549] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 2047, name: iou-wrk-2041
> | [ 90.202555] CPU: 5 PID: 2047 Comm: iou-wrk-2041 Tainted: G W 5.14.0-rc4-rt4+ #89
> | [ 90.202559] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
> | [ 90.202561] Call Trace:
> | [ 90.202577] dump_stack_lvl+0x34/0x44
> | [ 90.202584] ___might_sleep.cold+0x87/0x94
> | [ 90.202588] rt_spin_lock+0x19/0x70
> | [ 90.202593] ___slab_alloc+0xcb/0x7d0
> | [ 90.202598] ? newidle_balance.constprop.0+0xf5/0x3b0
> | [ 90.202603] ? dequeue_entity+0xc3/0x290
> | [ 90.202605] ? io_wqe_dec_running.isra.0+0x98/0xe0
> | [ 90.202610] ? pick_next_task_fair+0xb9/0x330
> | [ 90.202612] ? __schedule+0x670/0x1410
> | [ 90.202615] ? io_wqe_dec_running.isra.0+0x98/0xe0
> | [ 90.202618] kmem_cache_alloc_trace+0x79/0x1f0
> | [ 90.202621] io_wqe_dec_running.isra.0+0x98/0xe0
> | [ 90.202625] io_wq_worker_sleeping+0x37/0x50
> | [ 90.202628] schedule+0x30/0xd0
> | [ 90.202630] schedule_timeout+0x8f/0x1a0
> | [ 90.202634] ? __bpf_trace_tick_stop+0x10/0x10
> | [ 90.202637] io_wqe_worker+0xfd/0x320
> | [ 90.202641] ? finish_task_switch.isra.0+0xd3/0x290
> | [ 90.202644] ? io_worker_handle_work+0x670/0x670
> | [ 90.202646] ? io_worker_handle_work+0x670/0x670
> | [ 90.202649] ret_from_fork+0x22/0x30
>
> which is due to the RT kernel not liking a GFP_ATOMIC allocation inside
> a raw spinlock. Besides not working on RT, doing any kind of
> allocation from inside schedule() is nasty and should be avoided
> if at all possible.
>
> This particular path happens when an io-wq worker goes to sleep, and we
> need a new worker to handle pending work. We currently allocate a small
> data item to hold the information we need to create a new worker, but we
> can instead include this data in the io_worker struct itself and just
> protect it with a single bit lock. We only really need one per worker
> anyway, as we will have run pending work between two sleep cycles.
>
> https://lore.kernel.org/lkml/[email protected]/
> Reported-by: Daniel Wagner <[email protected]>
> Signed-off-by: Jens Axboe <[email protected]>
Thanks!
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Thread overview: 4+ messages
2021-08-04 14:43 [PATCH] io-wq: remove GFP_ATOMIC allocation off schedule out path Jens Axboe
2021-08-04 15:33 ` Daniel Wagner
2021-08-04 15:39 ` Jens Axboe
2021-08-05 9:17 ` Peter Zijlstra [this message]