* [PATCH 5.15] io_uring: apply max_workers limit to all future users
@ 2021-10-19 22:43 Pavel Begunkov
From: Pavel Begunkov @ 2021-10-19 22:43 UTC
To: io-uring; +Cc: Jens Axboe, Pavel Begunkov
Currently, IORING_REGISTER_IOWQ_MAX_WORKERS applies only to the task
that issued it, which is unexpected for users. If one task creates a
ring, limits workers and then passes it to another task, the limit
won't be applied to the other task.

Another pitfall is that a task must either create a ring or submit at
least one request for IORING_REGISTER_IOWQ_MAX_WORKERS to work at all,
further complicating the picture.

Change the API: save the limits and apply them to all future users.
Note, it should be done before giving away the ring or submitting new
requests, otherwise the result is not guaranteed.
Signed-off-by: Pavel Begunkov <[email protected]>
---
Changing the API is fine since it was only introduced in this cycle.
Tested by hand by observing the number of workers created; there are
no regression tests.
fs/io_uring.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e68d27829bb2..e8b71f14ac8b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -456,6 +456,8 @@ struct io_ring_ctx {
struct work_struct exit_work;
struct list_head tctx_list;
struct completion ref_comp;
+ u32 iowq_limits[2];
+ bool iowq_limits_set;
};
};
@@ -9638,7 +9640,16 @@ static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
ret = io_uring_alloc_task_context(current, ctx);
if (unlikely(ret))
return ret;
+
tctx = current->io_uring;
+ if (ctx->iowq_limits_set) {
+ unsigned int limits[2] = { ctx->iowq_limits[0],
+ ctx->iowq_limits[1], };
+
+ ret = io_wq_max_workers(tctx->io_wq, limits);
+ if (ret)
+ return ret;
+ }
}
if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
node = kmalloc(sizeof(*node), GFP_KERNEL);
@@ -10674,13 +10685,19 @@ static int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
tctx = current->io_uring;
}
- ret = -EINVAL;
- if (!tctx || !tctx->io_wq)
- goto err;
+ BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
- ret = io_wq_max_workers(tctx->io_wq, new_count);
- if (ret)
- goto err;
+ memcpy(ctx->iowq_limits, new_count, sizeof(new_count));
+ ctx->iowq_limits_set = true;
+
+ ret = -EINVAL;
+ if (tctx && tctx->io_wq) {
+ ret = io_wq_max_workers(tctx->io_wq, new_count);
+ if (ret)
+ goto err;
+ } else {
+ memset(new_count, 0, sizeof(new_count));
+ }
if (sqd) {
mutex_unlock(&sqd->lock);
--
2.33.1
* Re: [PATCH 5.15] io_uring: apply max_workers limit to all future users
From: Jens Axboe @ 2021-10-19 23:14 UTC
To: Pavel Begunkov, io-uring
On 10/19/21 4:43 PM, Pavel Begunkov wrote:
> Currently, IORING_REGISTER_IOWQ_MAX_WORKERS applies only to the task
> that issued it, which is unexpected for users. If one task creates a
> ring, limits workers and then passes it to another task, the limit
> won't be applied to the other task.
>
> Another pitfall is that a task must either create a ring or submit at
> least one request for IORING_REGISTER_IOWQ_MAX_WORKERS to work at all,
> further complicating the picture.
>
> Change the API: save the limits and apply them to all future users.
> Note, it should be done before giving away the ring or submitting new
> requests, otherwise the result is not guaranteed.
Thanks, let's do this for 5.15. I've added:
Fixes: 2e480058ddc2 ("io-wq: provide a way to limit max number of workers")
--
Jens Axboe
* Re: [PATCH 5.15] io_uring: apply max_workers limit to all future users
From: Jens Axboe @ 2021-10-20 0:23 UTC
To: io-uring, Pavel Begunkov; +Cc: Jens Axboe
On Tue, 19 Oct 2021 23:43:46 +0100, Pavel Begunkov wrote:
> Currently, IORING_REGISTER_IOWQ_MAX_WORKERS applies only to the task
> that issued it, which is unexpected for users. If one task creates a
> ring, limits workers and then passes it to another task, the limit
> won't be applied to the other task.
>
> Another pitfall is that a task must either create a ring or submit at
> least one request for IORING_REGISTER_IOWQ_MAX_WORKERS to work at all,
> further complicating the picture.
>
> [...]
Applied, thanks!
[1/1] io_uring: apply max_workers limit to all future users
(no commit info)
Best regards,
--
Jens Axboe
* Re: [PATCH 5.15] io_uring: apply max_workers limit to all future users
From: Hao Xu @ 2021-10-20 8:52 UTC
To: Pavel Begunkov, io-uring; +Cc: Jens Axboe, Joseph Qi
On 2021/10/20 6:43 AM, Pavel Begunkov wrote:
> Currently, IORING_REGISTER_IOWQ_MAX_WORKERS applies only to the task
> that issued it, which is unexpected for users. If one task creates a
> ring, limits workers and then passes it to another task, the limit
> won't be applied to the other task.
>
> Another pitfall is that a task must either create a ring or submit at
> least one request for IORING_REGISTER_IOWQ_MAX_WORKERS to work at all,
> further complicating the picture.
>
> Change the API: save the limits and apply them to all future users.
> Note, it should be done before giving away the ring or submitting new
> requests, otherwise the result is not guaranteed.
>
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>
> Changing the API is fine since it was only introduced in this cycle.
> Tested by hand by observing the number of workers created; there are
> no regression tests.
>
> fs/io_uring.c | 29 +++++++++++++++++++++++------
> 1 file changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index e68d27829bb2..e8b71f14ac8b 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -456,6 +456,8 @@ struct io_ring_ctx {
> struct work_struct exit_work;
> struct list_head tctx_list;
> struct completion ref_comp;
> + u32 iowq_limits[2];
> + bool iowq_limits_set;
> };
> };
>
> @@ -9638,7 +9640,16 @@ static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
> ret = io_uring_alloc_task_context(current, ctx);
> if (unlikely(ret))
> return ret;
> +
> tctx = current->io_uring;
> + if (ctx->iowq_limits_set) {
> + unsigned int limits[2] = { ctx->iowq_limits[0],
> + ctx->iowq_limits[1], };
> +
> + ret = io_wq_max_workers(tctx->io_wq, limits);
> + if (ret)
> + return ret;
> + }
> }
> if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
> node = kmalloc(sizeof(*node), GFP_KERNEL);
> @@ -10674,13 +10685,19 @@ static int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
> tctx = current->io_uring;
> }
>
> - ret = -EINVAL;
> - if (!tctx || !tctx->io_wq)
> - goto err;
> + BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
>
> - ret = io_wq_max_workers(tctx->io_wq, new_count);
> - if (ret)
> - goto err;
> + memcpy(ctx->iowq_limits, new_count, sizeof(new_count));
> + ctx->iowq_limits_set = true;
> +
> + ret = -EINVAL;
> + if (tctx && tctx->io_wq) {
> + ret = io_wq_max_workers(tctx->io_wq, new_count);
Hi Pavel,
ctx->iowq_limits_set limits future ctx users, but not the existing
ones; how about updating the numbers for all current ctx users here?
I know the number of workers a current user has may already exceed
the new limit, but at least that would stop it from growing any
further, and it may decrease to the limit some time later.
Regards,
Hao
> + if (ret)
> + goto err;
> + } else {
> + memset(new_count, 0, sizeof(new_count));
> + }
>
> if (sqd) {
> mutex_unlock(&sqd->lock);
>
* Re: [PATCH 5.15] io_uring: apply max_workers limit to all future users
From: Pavel Begunkov @ 2021-10-20 10:48 UTC
To: Hao Xu, io-uring; +Cc: Jens Axboe, Joseph Qi
On 10/20/21 09:52, Hao Xu wrote:
> On 2021/10/20 6:43 AM, Pavel Begunkov wrote:
> Hi Pavel,
> ctx->iowq_limits_set limits future ctx users, but not the existing
> ones; how about updating the numbers for all current ctx users here?
> I know the number of workers a current user has may already exceed
> the new limit, but at least that would stop it from growing any
> further, and it may decrease to the limit some time later.
Indeed, that was the idea! Though it's a bit trickier: it would need
a ctx->tctx_list traversal, and before putting that in I wanted to
take a look at another problem related to ->tctx_list. I hope to get
that done ASAP for 5.15.
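For reference, such a traversal could look roughly like the sketch
below. This is hypothetical, not a tested patch: the helper name is
invented, the io_tctx_node layout is assumed from 5.15, and locking
against concurrent tctx_list updates is assumed to be handled by the
caller.

```c
/* Hypothetical sketch: push the saved limits to every task already
 * attached to the ctx, mirroring what __io_uring_add_tctx_node() now
 * does for future users. Failures are ignored (best effort); callers
 * are assumed to serialize against ctx->tctx_list changes. */
static void io_iowq_limits_update_all(struct io_ring_ctx *ctx, u32 *new_count)
{
	struct io_tctx_node *node;

	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
		struct io_uring_task *tctx = node->task->io_uring;

		if (tctx && tctx->io_wq)
			io_wq_max_workers(tctx->io_wq, new_count);
	}
}
```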
--
Pavel Begunkov