From: Olivier Langlois <[email protected]>
To: Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected]
Subject: Re: io_uring NAPI busy poll RCU is causing 50 context switches/second to my sqpoll thread
Date: Sat, 03 Aug 2024 12:50:57 -0400 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On Sat, 2024-08-03 at 08:36 -0600, Jens Axboe wrote:
> You can check the mappings in /sys/kernel/debug/block/<device>/
>
> in there you'll find a number of hctxN folders, each of these is a
> hardware queue. hctx0/type tells you what kind of queue it is, and
> inside the directory, you'll find which CPUs this queue is mapped to.
> Example:
>
> root@r7625 /s/k/d/b/nvme0n1# cat hctx1/type
> default
>
> "default" means it's a read/write queue, so it'll handle both reads
> and
> writes.
>
> root@r7625 /s/k/d/b/nvme0n1# ls hctx1/
> active cpu11/ dispatch sched_tags tags
> busy cpu266/ dispatch_busy sched_tags_bitmap tags_bitmap
> cpu10/ ctx_map flags state type
>
> and we can see this hardware queue is mapped to cpu 10/11/266.
>
> That ties into how these are mapped. It's pretty simple - if a task is
> running on cpu 10/11/266 when it's queueing IO, then it'll use hw queue 1.
> This maps to the interrupts you found, but note that the admin queue
> (which is not listed in these directories, as it's not an IO queue) is the
> first one there. hctx0 is nvme0q1 in your /proc/interrupts list.
>
> If IO is queued on hctx1, then it should complete on the interrupt
> vector associated with nvme0q2.
>
Jens,
I knew there were nvme experts here!
thx for your help.
# ls nvme0n1/hctx0/
active busy cpu0 cpu1 ctx_map dispatch dispatch_busy flags
sched_tags sched_tags_bitmap state tags tags_bitmap type
It means that some I/O I am unaware of is being initiated from cpu0 or
cpu1...
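For what it's worth, a small helper like this prints each hardware queue's
type and CPU mapping in one shot. It is just a sketch that walks the hctxN
directories under the debugfs layout Jens showed; the function name is made
up:

```shell
# Sketch: walk the hctxN directories of a block device's debugfs dir
# and print each hardware queue's type and the CPUs mapped to it.
# Assumes the layout shown above (hctxN/type, hctxN/cpuM/).
hctx_cpu_map() {
    base="$1"
    for hctx in "$base"/hctx*/; do
        [ -d "$hctx" ] || continue
        # the cpuN subdirectories are the CPUs this queue serves
        cpus=$(cd "$hctx" && ls -d cpu* 2>/dev/null |
               sed 's/^cpu//' | sort -n | tr '\n' ' ')
        printf '%s: type=%s cpus=%s\n' \
            "$(basename "$hctx")" "$(cat "$hctx/type")" "$cpus"
    done
}
```

Run as root with debugfs mounted, e.g.
hctx_cpu_map /sys/kernel/debug/block/nvme0n1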
It seems the number of nvme queues is configurable... I'll try to find
out how to reduce it to 1...
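As a starting point, the allocated queue count can at least be read back
from the controller with nvme-cli's get-feature (the Number of Queues
feature, FID 0x07). A made-up helper to decode the returned value,
following the NVMe spec's field layout (both 16-bit fields are zero-based):

```shell
# Decode the Number of Queues feature value (FID 0x07), e.g. the
# value reported by: nvme get-feature /dev/nvme0 -f 7
# Bits 15:0 = I/O submission queues allocated, bits 31:16 = I/O
# completion queues allocated, both zero-based per the NVMe spec.
nvme_nq_decode() {
    v=$(( $1 ))
    printf 'io submission queues: %d\n' $(( (v & 0xffff) + 1 ))
    printf 'io completion queues: %d\n' $(( (v >> 16) + 1 ))
}
```

For example, nvme_nq_decode 0x00070007 reports 8 submission and 8
completion queues. Note that reducing the count is another matter: as far
as I can tell the driver renegotiates the queue count at every controller
reset, so a one-off set-feature may not stick.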
but my real problem is not really which I/O queue is assigned to a
request. It is the IRQ affinity assigned to the queues...
I have found the function nvme_setup_irqs(), where the assignments
happen.
Considering that I have the bootparam irqaffinity=3, I do not understand
how the admin queue and hctx0 IRQs can be assigned to cpus 0 and 1. It is
as if the irqaffinity param had no effect on the MSI-X interrupt affinity
masks...
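One thing worth checking is whether those vectors are managed interrupts:
as far as I can tell, managed MSI-X vectors get their masks from the
kernel's own affinity spreading rather than from the irqaffinity= default
mask, which would match what I am seeing. A sketch to dump each nvme
vector's affinity; the /proc root is a parameter only so the function can
be exercised against a fixture, on a live system pass /proc:

```shell
# Sketch: print the smp_affinity_list of every nvme interrupt vector
# found in <proc>/interrupts.
nvme_irq_affinity() {
    proc="$1"
    # first column is "NN:", last column is the vector name (nvme0qM)
    awk '/nvme/ { irq = $1; sub(/:/, "", irq); print irq, $NF }' \
        "$proc/interrupts" |
    while read -r irq name; do
        printf 'irq %s (%s): smp_affinity_list=%s\n' \
            "$irq" "$name" "$(cat "$proc/irq/$irq/smp_affinity_list")"
    done
}
```

On a live system, /proc/irq/N/effective_affinity_list is also worth
comparing against the configured mask.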