public inbox for [email protected]
From: Christian Loehle <[email protected]>
To: Jens Axboe <[email protected]>,
	Pavel Begunkov <[email protected]>,
	David Wei <[email protected]>,
	[email protected]
Subject: Re: [PATCH v1 1/4] io_uring: only account cqring wait time as iowait if enabled for a ring
Date: Tue, 5 Mar 2024 14:59:23 +0000	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

Hi folks,

On 25/02/2024 16:39, Jens Axboe wrote:
> On 2/24/24 5:58 PM, Pavel Begunkov wrote:
>> On 2/24/24 18:51, Jens Axboe wrote:
>>> On 2/24/24 8:31 AM, Pavel Begunkov wrote:
>>>> On 2/24/24 05:07, David Wei wrote:
>>>>> Currently we unconditionally account time spent waiting for events in CQ
>>>>> ring as iowait time.
>>>>>
>>>>> Some userspace tools consider iowait time to be CPU util/load which can
>>>>> be misleading as the process is sleeping. High iowait time might be
>>>>> indicative of issues for storage IO, but for network IO e.g. socket
>>>>> recv() we do not control when the completions happen so its value
>>>>> misleads userspace tooling.
>>>>>
>>>>> This patch gates the previously unconditional iowait accounting behind a
>>>>> new IORING_REGISTER opcode. By default time is not accounted as iowait,
>>>>> unless this is explicitly enabled for a ring. Thus userspace can decide,
>>>>> depending on the type of work it expects to do, whether it wants to
>>>>> consider cqring wait time as iowait or not.
>>>>
>>>> I don't believe it's a sane approach. I think we agree that per
>>>> cpu iowait is a silly and misleading metric. I have hard time to
>>>> define what it is, and I'm sure most probably people complaining
>>>> wouldn't be able to tell as well. Now we're taking that metric
>>>> and expose even more knobs to userspace.
>>>
>>> For sure, it's a stupid metric. But at the same time, educating people
>>> on this can be like talking to a brick wall, and it'll be years of doing
>>> that before we're making a dent in it. Hence I do think that just
>>> exposing the knob and letting the storage side use it, if they want, is
>>> the path of least resistance. I'm personally not going to do a crusade
>>> on iowait to eliminate it, I don't have the time for that. I'll educate
>>
>> Exactly my point but with a different conclusion. The path of least
> 
> I think that's because I'm a realist, and you are an idealist ;-)
> 
>> resistance is to have io_uring not accounted to iowait. That's how
>> it was so nobody should complain about it, you don't have to care about
>> it at all, you don't have to educate people on iowait when it comes up
>> with in the context of that knob, and you don't have to educate folks
>> on what this knob is and wtf it's there, and we're not pretending that
>> it works when it's not.
> 
> I don't think anyone cares about iowait going away for waiting on events
> with io_uring, but some would very much care about losing the cpufreq
> connection which is why it got added in the first place. If we can
> trivially do that without iowait, then we should certainly just do that
> and call it good. THAT is the main question to answer, in form of a
> patch.

I commented on Jens' patch regarding iowait and iowait_acct, which is
probably the path of least resistance for that specific issue, but let
me expand a bit on the cpufreq iowait connection problem.
I would consider cpufreq iowait handling and cpuidle iowait handling vastly
different in that respect, and it seems to me that improving cpufreq is a
feasible effort (if it only catches 90% of scenarios at first, then so be it).
I'm thinking of something like an in_iowait_boost flag (or its opposite,
in_iowait_queue_full, with "full" meaning reasonably non-empty).
The current behaviour of boosting CPU frequency on any iowait wakeup is just
very unfortunate, and for io_uring in particular it destroys all of the
power savings it could otherwise gain from reduced CPU usage.
(Just to be clear, current iowait boosting is not only broken for io_uring,
but everywhere, particularly because of the lack of a definition of what
iowait even means and when it should be set. Boosting on any iowait seen is
therefore like taking a sledgehammer to crack a nut.)
I'm happy to propose some io_uring patch (which is probably nonsensical for
a lot of reasons) or to test whatever ideas you have.
Something like
	if pending_requests > device_queue_size: don't boost
would be an improvement from what I can tell.

And I know I'm heavily reducing io_uring to block IO here, and I'm aware of
how wrong that is, but storage device IO was the original story that got
iowait boosting introduced in the first place.
If you have any cpufreq (or rather DVFS) related issues with anything else
io_uring does, like networking, I'll give reproducing that a shot as well.
Would love to hear your thoughts!

Kind Regards,
Christian


Thread overview: 20+ messages
2024-02-24  5:07 [PATCH v1 1/4] io_uring: only account cqring wait time as iowait if enabled for a ring David Wei
2024-02-24  5:07 ` [PATCH v1 2/4] liburing: add examples/proxy to .gitignore David Wei
2024-02-24  5:07 ` [PATCH v1 3/4] liburing: add helper for IORING_REGISTER_IOWAIT David Wei
2024-02-24 15:29   ` Jens Axboe
2024-02-24 16:39     ` David Wei
2024-02-24  5:07 ` [PATCH v1 4/4] liburing: add unit test for io_uring_register_iowait() David Wei
2024-02-24 15:28 ` [PATCH v1 1/4] io_uring: only account cqring wait time as iowait if enabled for a ring Jens Axboe
2024-02-24 15:31 ` Pavel Begunkov
2024-02-24 17:20   ` David Wei
2024-02-24 18:55     ` Jens Axboe
2024-02-25  1:39       ` Pavel Begunkov
2024-02-25 16:43         ` Jens Axboe
2024-02-25 21:11           ` Jens Axboe
2024-02-25 21:33             ` Jens Axboe
2024-02-26 14:56             ` Pavel Begunkov
2024-02-26 15:22               ` Jens Axboe
2024-02-24 18:51   ` Jens Axboe
2024-02-25  0:58     ` Pavel Begunkov
2024-02-25 16:39       ` Jens Axboe
2024-03-05 14:59         ` Christian Loehle [this message]
