public inbox for [email protected]
* [RFC PATCH v1] io_uring: only account cqring wait time as iowait if enabled for a ring
@ 2024-02-23  5:40 David Wei
  2024-02-23 14:31 ` Jens Axboe
  0 siblings, 1 reply; 3+ messages in thread
From: David Wei @ 2024-02-23  5:40 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Pavel Begunkov

Currently we unconditionally account time spent waiting for events in the
CQ ring as iowait time.

Some userspace tools consider iowait time to be CPU util/load, which can
be misleading since the process is actually sleeping. High iowait time may
indicate issues for storage IO, but for network IO, e.g. socket recv(), we
do not control when completions happen.

This patch gates the previously unconditional iowait accounting behind a
new IORING_REGISTER opcode. By default, time is not accounted as iowait
unless this is explicitly enabled for a ring. Thus userspace can decide,
depending on the type of work it expects to do, whether it wants to
consider cqring wait time as iowait or not.
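
For illustration, enabling it from userspace might look something like the
sketch below (hypothetical: the opcode value is the one this RFC proposes,
and liburing has no wrapper yet, so it calls the register syscall directly):

/* Hypothetical sketch against this RFC; the opcode value may change. */
#include <sys/syscall.h>
#include <unistd.h>

#define IORING_REGISTER_IOWAIT	29	/* value proposed in this patch */

static int enable_iowait_accounting(int ring_fd)
{
	/* arg must be NULL and nr_args 0, or the kernel returns -EINVAL */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_IOWAIT, NULL, 0);
}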

I've marked the patch as RFC because it is lacking tests. I will add
them in the final patch, but for now I'd like to get some thoughts on
the approach. For example, does the API need an unregister opcode, or
should it take a bool?

Signed-off-by: David Wei <[email protected]>
---
 include/linux/io_uring_types.h |  3 +++
 include/uapi/linux/io_uring.h  |  3 +++
 io_uring/io_uring.c            |  9 +++++----
 io_uring/register.c            | 12 ++++++++++++
 4 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index bd7071aeec5d..57318fc01379 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -425,6 +425,9 @@ struct io_ring_ctx {
 	DECLARE_HASHTABLE(napi_ht, 4);
 #endif
 
+	/* iowait accounting */
+	bool				iowait_enabled;
+
 	/* protected by ->completion_lock */
 	unsigned			evfd_last_cq_tail;
 
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 7bd10201a02b..b068898c2283 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -575,6 +575,9 @@ enum {
 	IORING_REGISTER_NAPI			= 27,
 	IORING_UNREGISTER_NAPI			= 28,
 
+	/* account time spent in cqring wait as iowait */
+	IORING_REGISTER_IOWAIT			= 29,
+
 	/* this goes last */
 	IORING_REGISTER_LAST,
 
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index cf2f514b7cc0..7f8d2a03cce6 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2533,12 +2533,13 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 		return 0;
 
 	/*
-	 * Mark us as being in io_wait if we have pending requests, so cpufreq
-	 * can take into account that the task is waiting for IO - turns out
-	 * to be important for low QD IO.
+	 * Mark us as being in io_wait if we have pending requests and it is
+	 * enabled via IORING_REGISTER_IOWAIT, so cpufreq can take into account
+	 * that the task is waiting for IO - turns out to be important for low
+	 * QD IO.
 	 */
 	io_wait = current->in_iowait;
-	if (current_pending_io())
+	if (ctx->iowait_enabled && current_pending_io())
 		current->in_iowait = 1;
 	ret = 0;
 	if (iowq->timeout == KTIME_MAX)
diff --git a/io_uring/register.c b/io_uring/register.c
index 99c37775f974..7cbc08544c4c 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -387,6 +387,12 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
 	return ret;
 }
 
+static int io_register_iowait(struct io_ring_ctx *ctx)
+{
+	ctx->iowait_enabled = true;
+	return 0;
+}
+
 static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			       void __user *arg, unsigned nr_args)
 	__releases(ctx->uring_lock)
@@ -563,6 +569,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_unregister_napi(ctx, arg);
 		break;
+	case IORING_REGISTER_IOWAIT:
+		ret = -EINVAL;
+		if (arg || nr_args)
+			break;
+		ret = io_register_iowait(ctx);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
-- 
2.39.3



* Re: [RFC PATCH v1] io_uring: only account cqring wait time as iowait if enabled for a ring
  2024-02-23  5:40 [RFC PATCH v1] io_uring: only account cqring wait time as iowait if enabled for a ring David Wei
@ 2024-02-23 14:31 ` Jens Axboe
  2024-02-23 17:17   ` David Wei
  0 siblings, 1 reply; 3+ messages in thread
From: Jens Axboe @ 2024-02-23 14:31 UTC (permalink / raw)
  To: David Wei, io-uring; +Cc: Pavel Begunkov

On 2/22/24 10:40 PM, David Wei wrote:
> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> index bd7071aeec5d..57318fc01379 100644
> --- a/include/linux/io_uring_types.h
> +++ b/include/linux/io_uring_types.h
> @@ -425,6 +425,9 @@ struct io_ring_ctx {
>  	DECLARE_HASHTABLE(napi_ht, 4);
>  #endif
>  
> +	/* iowait accounting */
> +	bool				iowait_enabled;
> +

Since this is just a single bit, you should put it in the top section
where we have other single bits for hot / read-mostly data. That way the
hotter wait path doesn't need to read something many cache lines away, and
it avoids growing the struct, as there's still plenty of space there for
this.
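
As a rough illustration of why a one-bit field there is free (not the actual
io_ring_ctx layout, just the general C bitfield behaviour):

#include <stdio.h>

struct flags_cluster {
	unsigned int	flags;
	/* existing single-bit flags share one storage word... */
	unsigned int	drain_active : 1;
	unsigned int	task_complete : 1;
	/* ...so one more bit packs into the same word, size unchanged */
	unsigned int	iowait_enabled : 1;
};

int main(void)
{
	/* prints 8 on typical ABIs, with or without the new bit */
	printf("%zu\n", sizeof(struct flags_cluster));
	return 0;
}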

> diff --git a/io_uring/register.c b/io_uring/register.c
> index 99c37775f974..7cbc08544c4c 100644
> --- a/io_uring/register.c
> +++ b/io_uring/register.c
> @@ -387,6 +387,12 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
>  	return ret;
>  }
>  
> +static int io_register_iowait(struct io_ring_ctx *ctx)
> +{
> +	ctx->iowait_enabled = true;
> +	return 0;
> +}
> +
>  static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>  			       void __user *arg, unsigned nr_args)
>  	__releases(ctx->uring_lock)
> @@ -563,6 +569,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>  			break;
>  		ret = io_unregister_napi(ctx, arg);
>  		break;
> +	case IORING_REGISTER_IOWAIT:
> +		ret = -EINVAL;
> +		if (arg || nr_args)
> +			break;
> +		ret = io_register_iowait(ctx);
> +		break;


This only allows you to set it, not clear it. I think we want to make it
pass in the value, and pass back the previous. Something ala:

static int io_register_iowait(struct io_ring_ctx *ctx, int val)
{
	int was_enabled = ctx->iowait_enabled;

	if (val)
		ctx->iowait_enabled = 1;
	else
		ctx->iowait_enabled = 0;
	return was_enabled;
}

and then:

	case IORING_REGISTER_IOWAIT:
		ret = -EINVAL;
		if (arg)
			break;
		ret = io_register_iowait(ctx, nr_args);
		break;
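
Userspace could then flip it and read back the old setting in one call,
something like this sketch against that hypothetical interface (raw syscall,
since no liburing wrapper exists for it):

#include <sys/syscall.h>
#include <unistd.h>

#define IORING_REGISTER_IOWAIT	29	/* value proposed in this patch */

/* enable goes in nr_args; returns the previous setting, or -1 with errno */
static int set_iowait_accounting(int ring_fd, unsigned int enable)
{
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_IOWAIT, NULL, enable);
}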

I'd also add a:

Fixes: 8a796565cec3 ("io_uring: Use io_schedule* in cqring wait")

and mark it for stable, so we at least attempt to make it something that
can be depended on.

-- 
Jens Axboe



* Re: [RFC PATCH v1] io_uring: only account cqring wait time as iowait if enabled for a ring
  2024-02-23 14:31 ` Jens Axboe
@ 2024-02-23 17:17   ` David Wei
  0 siblings, 0 replies; 3+ messages in thread
From: David Wei @ 2024-02-23 17:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: Pavel Begunkov

On 2024-02-23 06:31, Jens Axboe wrote:
> On 2/22/24 10:40 PM, David Wei wrote:
>> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
>> index bd7071aeec5d..57318fc01379 100644
>> --- a/include/linux/io_uring_types.h
>> +++ b/include/linux/io_uring_types.h
>> @@ -425,6 +425,9 @@ struct io_ring_ctx {
>>  	DECLARE_HASHTABLE(napi_ht, 4);
>>  #endif
>>  
>> +	/* iowait accounting */
>> +	bool				iowait_enabled;
>> +
> 
> Since this is just a single bit, you should put it in the top section
> where we have other single bits for hot / read-mostly data. This avoids
> needing something many cache lines away for the hotter wait path, and it
> avoids growing the struct as there's still plenty of space there for
> this.

Got it, moved it to the cacheline aligned hot/read-only struct.
$current_year is when I learnt about C bitfields.

> 
>> diff --git a/io_uring/register.c b/io_uring/register.c
>> index 99c37775f974..7cbc08544c4c 100644
>> --- a/io_uring/register.c
>> +++ b/io_uring/register.c
>> @@ -387,6 +387,12 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
>>  	return ret;
>>  }
>>  
>> +static int io_register_iowait(struct io_ring_ctx *ctx)
>> +{
>> +	ctx->iowait_enabled = true;
>> +	return 0;
>> +}
>> +
>>  static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>>  			       void __user *arg, unsigned nr_args)
>>  	__releases(ctx->uring_lock)
>> @@ -563,6 +569,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>>  			break;
>>  		ret = io_unregister_napi(ctx, arg);
>>  		break;
>> +	case IORING_REGISTER_IOWAIT:
>> +		ret = -EINVAL;
>> +		if (arg || nr_args)
>> +			break;
>> +		ret = io_register_iowait(ctx);
>> +		break;
> 
> 
> This only allows you to set it, not clear it. I think we want to make it
> pass in the value, and pass back the previous. Something ala:
> 
> static int io_register_iowait(struct io_ring_ctx *ctx, int val)
> {
> 	int was_enabled = ctx->iowait_enabled;
> 
> 	if (val)
> 		ctx->iowait_enabled = 1;
> 	else
> 		ctx->iowait_enabled = 0;
> 	return was_enabled;
> }
> 
> and then:
> 
> 	case IORING_REGISTER_IOWAIT:
> 		ret = -EINVAL;
> 		if (arg)
> 			break;
> 		ret = io_register_iowait(ctx, nr_args);
> 		break;
> 

When I first thought about this, I wondered how to pass a value like int
val through arg, which is a void *. That's a clever use of nr_args.

> I'd also add a:
> 
> Fixes: 8a796565cec3 ("io_uring: Use io_schedule* in cqring wait")
> 
> and mark it for stable, so we at least attempt to make it something that
> can be depended on.

Sorry, what does "mark it for stable" mean?

> 

