From: Olivier Langlois <[email protected]>
To: Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected]
Subject: Re: io_uring NAPI busy poll RCU is causing 50 context switches/second to my sqpoll thread
Date: Sat, 03 Aug 2024 17:37:36 -0400 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On Sat, 2024-08-03 at 08:36 -0600, Jens Axboe wrote:
>
> You can check the mappings in /sys/kernel/debug/block/<device>/
>
> in there you'll find a number of hctxN folders, each of these is a
> hardware queue. hctx0/type tells you what kind of queue it is, and
> inside the directory, you'll find which CPUs this queue is mapped to.
> Example:
>
> root@r7625 /s/k/d/b/nvme0n1# cat hctx1/type
> default
>
> "default" means it's a read/write queue, so it'll handle both reads and
> writes.
>
> root@r7625 /s/k/d/b/nvme0n1# ls hctx1/
> active cpu11/ dispatch sched_tags tags
> busy cpu266/ dispatch_busy sched_tags_bitmap tags_bitmap
> cpu10/ ctx_map flags state type
>
> and we can see this hardware queue is mapped to cpu 10/11/266.
>
> That ties into how these are mapped. It's pretty simple - if a task is
> running on cpu 10/11/266 when it's queueing IO, then it'll use hw queue
> 1. This maps to the interrupts you found, but note that the admin queue
> (which is not listed in these directories, as it's not an IO queue) is
> the first one there. hctx0 is nvme0q1 in your /proc/interrupts list.
>
> If IO is queued on hctx1, then it should complete on the interrupt
> vector associated with nvme0q2.
>
I have entered hacking territory, but I did not find any other way to do
it:
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6cd9395ba9ec..70b7ca84ee21 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2299,7 +2299,7 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
 	 */
 	if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
 		return 1;
-	return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+	return 1 + dev->nr_write_queues + dev->nr_poll_queues;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)
It works. I no longer get IRQs on cpu1, as I wanted:

 63:          9          0          0          0  PCI-MSIX-0000:00:04.0    0-edge  nvme0q0
 64:          0          0          0       7533  PCI-MSIX-0000:00:04.0    1-edge  nvme0q1
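As an aside, per-vector totals like the ones above can be pulled out of a /proc/interrupts-style table with a little awk. A sketch (the `irq_counts` helper name is mine; it assumes the usual layout of one count column per CPU followed by the chip/edge/name columns):

```shell
# Sum the per-CPU interrupt counts for every vector whose line matches
# the given pattern, reading /proc/interrupts-formatted text on stdin.
# Prints "<vector-name> <total>" per matching line.
irq_counts() {
    awk -v pat="$1" '$0 ~ pat {
        total = 0
        for (i = 2; i <= NF; i++) {
            if ($i !~ /^[0-9]+$/) break   # past the per-CPU count columns
            total += $i
        }
        printf "%s %d\n", $NF, total
    }'
}

# Example:
#   irq_counts nvme0q < /proc/interrupts
```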
# ls /sys/kernel/debug/block/nvme0n1/hctx0/
active  busy  cpu0  cpu1  cpu2  cpu3  ctx_map  dispatch  dispatch_busy
flags  sched_tags  sched_tags_bitmap  state  tags  tags_bitmap  type
Thread overview: 12+ messages
2024-07-30 20:05 io_uring NAPI busy poll RCU is causing 50 context switches/second to my sqpoll thread Olivier Langlois
2024-07-30 20:25 ` Pavel Begunkov
2024-07-30 23:14 ` Olivier Langlois
2024-07-31 0:33 ` Pavel Begunkov
2024-07-31 1:00 ` Pavel Begunkov
2024-08-01 23:05 ` Olivier Langlois
2024-08-01 22:02 ` Olivier Langlois
2024-08-02 15:22 ` Pavel Begunkov
2024-08-03 14:15 ` Olivier Langlois
2024-08-03 14:36 ` Jens Axboe
2024-08-03 16:50 ` Olivier Langlois
2024-08-03 21:37 ` Olivier Langlois [this message]