* [PATCH v2 0/3] iopoll improvements
@ 2020-07-06 14:59 Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 1/3] io_uring: don't delay iopoll'ed req completion Pavel Begunkov
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-06 14:59 UTC (permalink / raw)
To: Jens Axboe, io-uring
iopoll'ing should be more responsive with these.
v2: [3/3] extra need_resched() for reaping (Jens)
Pavel Begunkov (3):
io_uring: don't delay iopoll'ed req completion
io_uring: fix stopping iopoll'ing too early
io_uring: briefly loose locks while reaping events
fs/io_uring.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
--
2.24.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v2 1/3] io_uring: don't delay iopoll'ed req completion
2020-07-06 14:59 [PATCH v2 0/3] iopoll improvements Pavel Begunkov
@ 2020-07-06 14:59 ` Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 2/3] io_uring: fix stopping iopoll'ing too early Pavel Begunkov
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-06 14:59 UTC (permalink / raw)
To: Jens Axboe, io-uring
->iopoll() may have completed the current request, but instead of reaping
it, io_do_iopoll() just continues with the next request in the list. As a
result, it can leave a just-polled and completed request in the list
until the next syscall. Even the outer loop in io_iopoll_getevents()
doesn't help the situation.
E.g. poll_list: req0 -> req1
If req0->iopoll() completed both requests, and @min<=1,
then @req0 will be left behind.
Check whether a req was completed after ->iopoll().
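The fixed flow can be modeled in user space. This is a minimal sketch with a hypothetical singly linked list and a toy iopoll() that completes every queued request (mimicking the req0-completes-req1 case above), not the kernel's real structures:

```c
#include <stdbool.h>
#include <stddef.h>

struct req {
	bool completed;
	struct req *next;
};

/* Toy ->iopoll(): polling any request happens to complete it and
 * every later request on the list, like req0 completing req1. */
static void iopoll(struct req *req)
{
	for (struct req *r = req; r; r = r->next)
		r->completed = true;
}

/* One pass over the poll list; unlinks (reaps) completed requests
 * and returns how many were reaped. */
static int do_iopoll(struct req **list)
{
	int reaped = 0;

	while (*list) {
		struct req *req = *list;

		if (!req->completed)
			iopoll(req);

		/* the fix: re-check the current request after ->iopoll(),
		 * so a just-completed req is reaped in this same pass */
		if (req->completed) {
			*list = req->next;
			reaped++;
		} else {
			list = &req->next;
		}
	}
	return reaped;
}
```

Without the re-check, the first pass would skip past req0 even though polling it completed both requests, leaving req0 stranded until the next syscall.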
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index a2459504b371..50f9260eea9b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2008,6 +2008,10 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
if (ret < 0)
break;
+ /* iopoll may have completed current req */
+ if (READ_ONCE(req->iopoll_completed))
+ list_move_tail(&req->list, &done);
+
if (ret && spin)
spin = false;
ret = 0;
--
2.24.0
* [PATCH v2 2/3] io_uring: fix stopping iopoll'ing too early
2020-07-06 14:59 [PATCH v2 0/3] iopoll improvements Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 1/3] io_uring: don't delay iopoll'ed req completion Pavel Begunkov
@ 2020-07-06 14:59 ` Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 3/3] io_uring: briefly loose locks while reaping events Pavel Begunkov
2020-07-06 15:44 ` [PATCH v2 0/3] iopoll improvements Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-06 14:59 UTC (permalink / raw)
To: Jens Axboe, io-uring
Nobody adjusts *nr_events (the number of completed requests) before
calling io_iopoll_getevents(), so the passed @min shouldn't be adjusted
either. Otherwise it can return fewer events than the initially requested
@min without hitting need_resched().
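The effect can be shown with a user-space model. Here getevents() is a stand-in (an assumption, not the real function) that bumps a cumulative counter until the threshold is met, the way io_iopoll_getevents() compares the cumulative *nr_events against @min:

```c
/* Stand-in for io_iopoll_getevents(): @nr_events is cumulative across
 * calls, and the loop stops once it reaches @min. */
static void getevents(unsigned int *nr_events, unsigned int min)
{
	while (*nr_events < min)
		(*nr_events)++;	/* model: reap one event per iteration */
}
```

With 4 events already counted and @min = 10, passing the adjusted tmin = min - *nr_events = 6 makes the cumulative check fire at 6 total events instead of 10; passing @min unchanged reaps the full amount.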
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 50f9260eea9b..020944a193d0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2037,7 +2037,7 @@ static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
ret = io_do_iopoll(ctx, nr_events, min);
if (ret < 0)
return ret;
- if (!min || *nr_events >= min)
+ if (*nr_events >= min)
return 0;
}
@@ -2080,8 +2080,6 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
*/
mutex_lock(&ctx->uring_lock);
do {
- int tmin = 0;
-
/*
* Don't enter poll loop if we already have events pending.
* If we do, we can potentially be spinning for commands that
@@ -2106,10 +2104,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
mutex_lock(&ctx->uring_lock);
}
- if (*nr_events < min)
- tmin = min - *nr_events;
-
- ret = io_iopoll_getevents(ctx, nr_events, tmin);
+ ret = io_iopoll_getevents(ctx, nr_events, min);
if (ret <= 0)
break;
ret = 0;
--
2.24.0
* [PATCH v2 3/3] io_uring: briefly loose locks while reaping events
2020-07-06 14:59 [PATCH v2 0/3] iopoll improvements Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 1/3] io_uring: don't delay iopoll'ed req completion Pavel Begunkov
2020-07-06 14:59 ` [PATCH v2 2/3] io_uring: fix stopping iopoll'ing too early Pavel Begunkov
@ 2020-07-06 14:59 ` Pavel Begunkov
2020-07-06 15:44 ` [PATCH v2 0/3] iopoll improvements Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-06 14:59 UTC (permalink / raw)
To: Jens Axboe, io-uring
It's not nice to hold @uring_lock for too long in io_iopoll_reap_events().
For instance, the lock is needed to publish requests to @poll_list, and
holding it locks out tasks doing that for no good reason. Release it
occasionally.
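The resulting pattern can be sketched in user space with pthreads. sched_yield() stands in for cond_resched(), and the need_resched() check is elided for simplicity, so this sketch drops the lock on every iteration rather than only when a reschedule is due:

```c
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;

/* Model of io_iopoll_reap_events(): reap until nothing is pending,
 * but don't keep the mutex held across the reschedule point. */
static void reap_events(int *pending, int *reaped)
{
	pthread_mutex_lock(&uring_lock);
	while (*pending > 0) {
		(*pending)--;	/* model: reap one event */
		(*reaped)++;

		/* release the mutex so tasks publishing to poll_list
		 * (and task_work) can make progress while we yield */
		pthread_mutex_unlock(&uring_lock);
		sched_yield();	/* user-space stand-in for cond_resched() */
		pthread_mutex_lock(&uring_lock);
	}
	pthread_mutex_unlock(&uring_lock);
}
```

The key point is the unlock/yield/relock sequence: sleeping (or rescheduling) while still holding the mutex would keep everyone else blocked for the whole reap.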
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 020944a193d0..54756ae94bcd 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2062,8 +2062,13 @@ static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
/*
* Ensure we allow local-to-the-cpu processing to take place,
* in this case we need to ensure that we reap all events.
+ * Also let task_work, etc. progress by releasing the mutex
*/
- cond_resched();
+ if (need_resched()) {
+ mutex_unlock(&ctx->uring_lock);
+ cond_resched();
+ mutex_lock(&ctx->uring_lock);
+ }
}
mutex_unlock(&ctx->uring_lock);
}
--
2.24.0
* Re: [PATCH v2 0/3] iopoll improvements
2020-07-06 14:59 [PATCH v2 0/3] iopoll improvements Pavel Begunkov
` (2 preceding siblings ...)
2020-07-06 14:59 ` [PATCH v2 3/3] io_uring: briefly loose locks while reaping events Pavel Begunkov
@ 2020-07-06 15:44 ` Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2020-07-06 15:44 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 7/6/20 8:59 AM, Pavel Begunkov wrote:
> iopoll'ing should be more responsive with these.
>
> v2: [3/3] extra need_resched() for reaping (Jens)
>
> Pavel Begunkov (3):
> io_uring: don't delay iopoll'ed req completion
> io_uring: fix stopping iopoll'ing too early
> io_uring: briefly loose locks while reaping events
>
> fs/io_uring.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
Thanks, nice work. Applied for 5.9.
--
Jens Axboe