From: Christian Loehle <[email protected]>
To: Quentin Perret <[email protected]>,
	"Rafael J. Wysocki" <[email protected]>
Cc: [email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], [email protected]
Subject: Re: [RFC PATCH 5/8] cpufreq/schedutil: Remove iowait boost
Date: Thu, 3 Oct 2024 11:30:52 +0100
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 10/3/24 10:47, Quentin Perret wrote:
> On Monday 30 Sep 2024 at 18:34:24 (+0200), Rafael J. Wysocki wrote:
>> On Thu, Sep 5, 2024 at 11:27 AM Christian Loehle
>> <[email protected]> wrote:
>>>
>>> iowait boost in schedutil was introduced by
>>> commit 21ca6d2c52f8 ("cpufreq: schedutil: Add iowait boosting"),
>>> with it more or less following intel_pstate's approach of increasing
>>> frequency after an iowait wakeup.
>>> Behaviour that is piggy-backed onto iowait boost is problematic
>>> for a number of reasons, so remove it.
>>>
>>> For schedutil specifically these are some of the reasons:
>>> 1. Boosting is applied even in scenarios where it doesn't improve
>>> throughput.
>>
>> Well, I wouldn't argue this way because it is kind of like saying that
>> air conditioning is used even when it doesn't really help.  It is
>> sometimes hard to know in advance whether or not it will help though.
>>
>>> 2. The boost is not accounted for in EAS: a) feec() will only consider
>>>  the actual task utilization for task placement, but another CPU might
>>>  be more energy-efficient at that capacity than the boosted one.
>>>  b) When placing a non-IO task while a CPU is boosted, compute_energy()
>>>  assumes a lower OPP than what is actually applied. This leads to
>>>  wrong EAS decisions.
>>
>> That's a very good point IMV and so is the one regarding UCLAMP_MAX (8
>> in your list).
> 
> I would actually argue that this is also an implementation problem
> rather than something fundamental about boosting. EAS could be taught
> about iowait boosting and factor that into the decisions.

Definitely, and I did do exactly that.
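
For illustration, a toy standalone model of the idea (not the actual
patches; the OPP table, the numbers and the helper names are all made
up for this sketch): once the energy estimate picks the OPP from
max(task util, rq iowait boost) instead of task util alone, a boosted
CPU stops looking artificially cheap:

#include <stdio.h>

struct opp { unsigned int capacity; unsigned int power_mw; };

/* Hypothetical perf domain, capacity on the usual 0..1024 scale. */
static const struct opp pd[] = {
        { 256, 50 }, { 512, 150 }, { 768, 350 }, { 1024, 700 },
};
#define NR_OPP (sizeof(pd) / sizeof(pd[0]))

/* Lowest OPP whose capacity covers @util, schedutil-style. */
static const struct opp *find_opp(unsigned int util)
{
        for (unsigned int i = 0; i < NR_OPP; i++)
                if (pd[i].capacity >= util)
                        return &pd[i];
        return &pd[NR_OPP - 1];
}

/* Energy estimate for @util worth of work on a CPU whose rq carries
 * @boost: the OPP follows the max of the two, the busy time follows
 * the actual utilization. */
static unsigned int estimate_energy(unsigned int util, unsigned int boost)
{
        unsigned int request = util > boost ? util : boost;
        const struct opp *opp = find_opp(request);

        return opp->power_mw * util / opp->capacity;
}

int main(void)
{
        unsigned int util = 200;

        /* What a boost-unaware feec() assumes vs. what a boosted CPU
         * actually costs (issue 2b above). */
        printf("assumed, no boost: %u\n", estimate_energy(util, 0));
        printf("actual, boosted:   %u\n", estimate_energy(util, 1024));
        return 0;
}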

> 
>> If the goal is to set the adequate performance for a given utilization
>> level (either actual or prescribed), boosting doesn't really play well
>> with this and it shouldn't be used at least in these cases.
> 
> There's plenty of cases where EAS will correctly understand that
> migrating a task away will not reduce the OPP (e.g. another task on the
> rq has a uclamp_min request, or another CPU in the perf domain has a
> higher request), so iowait boosting could probably be added.
> 
> In fact if the iowait boost was made a task property, EAS could easily
> understand the effect of migrating that boost with the task (it's not
> fundamentally different from migrating a task with a high uclamp_min
> from the energy model perspective).

True.
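
A minimal sketch of that (standalone toy code; every struct member and
helper here is hypothetical): the boost lives on the task, the rq
aggregates it like a uclamp_min request, and a migration simply takes
it along, which would also take care of issues 5 and 6 from the list
in the patch description:

#include <stdio.h>

struct task { unsigned int util; unsigned int io_boost; };

struct rq { struct task *tasks[8]; int nr; };

static void enqueue(struct rq *rq, struct task *p)
{
        rq->tasks[rq->nr++] = p;
}

static void dequeue(struct rq *rq, struct task *p)
{
        for (int i = 0; i < rq->nr; i++)
                if (rq->tasks[i] == p)
                        rq->tasks[i] = rq->tasks[--rq->nr];
}

/* Frequency request: rq utilization floored by the largest per-task
 * boost, just like a uclamp_min-style in-kernel request would act. */
static unsigned int freq_request(const struct rq *rq)
{
        unsigned int util = 0, boost = 0;

        for (int i = 0; i < rq->nr; i++) {
                util += rq->tasks[i]->util;
                if (rq->tasks[i]->io_boost > boost)
                        boost = rq->tasks[i]->io_boost;
        }
        return util > boost ? util : boost;
}

int main(void)
{
        struct rq rq0 = { .nr = 0 }, rq1 = { .nr = 0 };
        struct task iow = { .util = 100, .io_boost = 512 };

        enqueue(&rq0, &iow);
        printf("rq0: %u, rq1: %u\n", freq_request(&rq0), freq_request(&rq1));

        /* Migration: the boost travels with the task, nothing lingers
         * on the old rq and nothing needs to ramp up again. */
        dequeue(&rq0, &iow);
        enqueue(&rq1, &iow);
        printf("rq0: %u, rq1: %u\n", freq_request(&rq0), freq_request(&rq1));
        return 0;
}
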
> 
>>> 3. Actual IO heavy workloads are hardly distinguished from infrequent
>>> in_iowait wakeups.
>>
>> Do infrequent in_iowait wakeups really cause the boosting to be
>> applied at full swing?
>>
>>> 4. The boost isn't accounted for in task placement.
>>
>> I'm not sure what exactly this means.  "Big" vs "little" or something else?
>>
>>> 5. The boost isn't associated with a task, it therefore lingers on the
>>> rq even after the responsible task has migrated / stopped.
>>
>> Fair enough, but this is rather a problem with the implementation of
>> boosting and not with the basic idea of it.
> 
> +1
> 
>>> 6. The boost isn't associated with a task, it therefore needs to ramp
>>> up again when migrated.
>>
>> Well, that again is somewhat implementation-related IMV, and it need
>> not be problematic in principle.  Namely, if a task migrates and it is
>> not the only one in the "new" CPU's runqueue, and the other tasks in
>> there don't use in_iowait, maybe it's better to not boost it?
>>
>> It also means that boosting is not very consistent, though, which is a
>> valid point.
>>
>>> 7. Since schedutil doesn't know which task is getting woken up,
>>> multiple unrelated in_iowait tasks lead to boosting.
>>
>> Well, that's by design: it boosts, when "there is enough IO pressure
>> in the runqueue", so to speak.
>>
>> Basically, it is a departure from the "make performance follow
>> utilization" general idea and it is based on the observation that in
>> some cases performance can be improved by taking additional
>> information into account.
>>
>> It is also about pure performance, not about energy efficiency.
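
For reference, roughly what schedutil does today, paraphrased as a
standalone toy (not the actual kernel code, though the doubling/halving
ramp matches the behaviour): the boost is rq-wide and has no notion of
which task triggered it, which is what makes 3., 5. and 7. above
possible in the first place:

#include <stdio.h>

#define BOOST_MIN 128   /* SCHED_CAPACITY_SCALE / 8 */
#define BOOST_MAX 1024  /* SCHED_CAPACITY_SCALE */

static unsigned int boost;

/* One frequency update; @iowait: was there an in_iowait wakeup on
 * this rq since the last update? Whose wakeup it was is unknown. */
static unsigned int iowait_boost_update(int iowait)
{
        if (iowait)
                boost = boost ? (2 * boost > BOOST_MAX ? BOOST_MAX : 2 * boost)
                              : BOOST_MIN;
        else
                boost = boost > BOOST_MIN ? boost / 2 : 0;
        return boost;
}

int main(void)
{
        /* Four consecutive iowait wakeups, from one task or several
         * unrelated ones, ramp the shared boost to max; afterwards it
         * only decays gradually, lingering on the rq. */
        int pattern[] = { 1, 1, 1, 1, 0, 0, 0 };

        for (unsigned int i = 0; i < sizeof(pattern) / sizeof(*pattern); i++)
                printf("update %u: boost=%u\n", i,
                       iowait_boost_update(pattern[i]));
        return 0;
}
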
>>
>>> 8. Boosting is hard to control with UCLAMP_MAX (which is only active
>>> when the task is on the rq, which for boosted tasks is usually not
>>> the case for most of the time).
> 
> Sounds like another reason to make iowait boosting per-task to me :-)
> 
> I've always thought that turning iowait boosting into some sort of
> in-kernel uclamp_min request would be a good approach for most of the
> issues mentioned above. Note that I'm not necessarily saying to use the
> actual uclamp infrastructure (though it's a valid option), I'm really just
> talking about the concept. Is that something you've considered?
> 
> I presume we could even factor out the 'logic' part of the code that
> decides how to request the boost into its own thing, and possibly have
> different policies for different use-cases, but that might be overkill.

See the cover-letter part on per-task iowait boosting, specifically:
[1] per-task io boost v1:
https://lore.kernel.org/lkml/[email protected]/
    per-task io boost v2:
https://lore.kernel.org/lkml/[email protected]/
[2] OSPM24 discussion on iowait boosting:
https://www.youtube.com/watch?v=MSQGEsSziZ4

The following are the main issues with transforming the existing
mechanism into a per-task attribute.
The almost unsolvable one is: does reducing "iowait pressure" (be it
per-task or per-rq) actually improve throughput at all? (That is
assuming, for now, that throughput is something we care about; I'm
sure you know that isn't always the case, e.g. for background tasks.)
With MCQ devices and a reasonable IO-bound workload, our iowait
boosting is often just boosting CPU frequency (which obviously costs
power) to queue in yet another request for a device that already has
essentially endless pending requests. Whether pending request N+1
arrives x usecs earlier or later at the device then makes no
difference to IO throughput.
Whether boosting would improve e.g. the IOPS of that device is
something the block layer could tell us about (with a lot of added
infrastructure, but at least in theory it would know what device we're
iowaiting on, unlike the scheduler). Whether that is actually useful
for the user experience (i.e. worth the power) is something only
userspace can decide (and then we're back at uclamp_min anyway).
(All of the above assumes that iowait even means "is waiting for block
IO and about to send another block IO", which is far from reality.)
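
To put rough numbers on the saturated case, a back-of-the-envelope
model (standalone toy code, all device parameters invented): throughput
is the minimum of the device limit and the CPU-turnaround limit, so
shortening the resubmission turnaround, which is all the boost can do,
only helps once the device queue can actually run dry:

#include <stdio.h>

/* Steady-state IOPS with per-request device service time @svc_us,
 * @qd requests kept in flight, and @t_us of CPU turnaround between a
 * completion and the next submission (what the boost shortens). */
static double iops(double svc_us, int qd, double t_us)
{
        double device_limit = 1e6 / svc_us;            /* queue never empties */
        double cpu_limit = qd * 1e6 / (t_us + svc_us); /* queue runs dry */

        return device_limit < cpu_limit ? device_limit : cpu_limit;
}

int main(void)
{
        /* Saturated MCQ device, 64 requests in flight: halving the
         * CPU-side turnaround via boosting changes nothing. */
        printf("qd=64: %.0f vs %.0f IOPS\n",
               iops(10, 64, 100), iops(10, 64, 50));

        /* Synchronous IO, one request in flight: here the turnaround
         * dominates and boosting does buy throughput. */
        printf("qd=1:  %.0f vs %.0f IOPS\n",
               iops(10, 1, 100), iops(10, 1, 50));
        return 0;
}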

Thanks Quentin for getting involved, your input is very much appreciated!

Regards,
Christian
