* [PATCH for-next 0/4] some 6.2 cleanups
@ 2022-12-02 17:47 Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 1/4] io_uring: protect cq_timeouts with timeout_lock Pavel Begunkov
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Pavel Begunkov @ 2022-12-02 17:47 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Random cleanups, mostly around locking and timeouts. Even though 1/4 is
marked for stable it shouldn't be too important.
Pavel Begunkov (4):
io_uring: protect cq_timeouts with timeout_lock
io_uring: revise completion_lock locking
io_uring: ease timeout flush locking requirements
io_uring: rename __io_fill_cqe_req
io_uring/io_uring.c | 29 ++++++++++++++++++++---------
io_uring/io_uring.h | 13 +------------
io_uring/rw.c | 2 +-
io_uring/timeout.c | 14 +++++++++-----
4 files changed, 31 insertions(+), 27 deletions(-)
--
2.38.1
* [PATCH for-next 1/4] io_uring: protect cq_timeouts with timeout_lock
2022-12-02 17:47 [PATCH for-next 0/4] some 6.2 cleanups Pavel Begunkov
@ 2022-12-02 17:47 ` Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 2/4] io_uring: revise completion_lock locking Pavel Begunkov
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2022-12-02 17:47 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Read cq_timeouts in io_flush_timeouts() only after taking the
timeout_lock, as it's protected by it. There are many places where we
also grab ->completion_lock, but for instance io_timeout_fn() doesn't
and still modifies cq_timeouts.
Cc: [email protected]
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/timeout.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 5b4bc93fd6e0..4c6a5666541c 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -72,10 +72,12 @@ static bool io_kill_timeout(struct io_kiocb *req, int status)
__cold void io_flush_timeouts(struct io_ring_ctx *ctx)
__must_hold(&ctx->completion_lock)
{
- u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+ u32 seq;
struct io_timeout *timeout, *tmp;
spin_lock_irq(&ctx->timeout_lock);
+ seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+
list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
struct io_kiocb *req = cmd_to_io_kiocb(timeout);
u32 events_needed, events_got;
--
2.38.1
* [PATCH for-next 2/4] io_uring: revise completion_lock locking
2022-12-02 17:47 [PATCH for-next 0/4] some 6.2 cleanups Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 1/4] io_uring: protect cq_timeouts with timeout_lock Pavel Begunkov
@ 2022-12-02 17:47 ` Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 3/4] io_uring: ease timeout flush locking requirements Pavel Begunkov
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2022-12-02 17:47 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
io_kill_timeouts() doesn't post any events but queues everything to
task_work. Locking there is needed to protect the traversal of linked
requests, so we should grab completion_lock directly instead of using
the io_cq_[un]lock helpers. Same goes for __io_req_find_next_prep().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 16 ++++++++++++++--
io_uring/io_uring.h | 11 -----------
io_uring/timeout.c | 8 ++++++--
3 files changed, 20 insertions(+), 15 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c30765579a8e..57c1c0da7648 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -582,6 +582,18 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
io_eventfd_flush_signal(ctx);
}
+static inline void io_cq_lock(struct io_ring_ctx *ctx)
+ __acquires(ctx->completion_lock)
+{
+ spin_lock(&ctx->completion_lock);
+}
+
+static inline void io_cq_unlock(struct io_ring_ctx *ctx)
+ __releases(ctx->completion_lock)
+{
+ spin_unlock(&ctx->completion_lock);
+}
+
/* keep it inlined for io_submit_flush_completions() */
static inline void io_cq_unlock_post_inline(struct io_ring_ctx *ctx)
__releases(ctx->completion_lock)
@@ -1038,9 +1050,9 @@ static void __io_req_find_next_prep(struct io_kiocb *req)
{
struct io_ring_ctx *ctx = req->ctx;
- io_cq_lock(ctx);
+ spin_lock(&ctx->completion_lock);
io_disarm_next(req);
- io_cq_unlock_post(ctx);
+ spin_unlock(&ctx->completion_lock);
}
static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 2277c05f52a6..ff84c0cfa2f2 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -87,17 +87,6 @@ static inline void io_req_task_work_add(struct io_kiocb *req)
#define io_for_each_link(pos, head) \
for (pos = (head); pos; pos = pos->link)
-static inline void io_cq_lock(struct io_ring_ctx *ctx)
- __acquires(ctx->completion_lock)
-{
- spin_lock(&ctx->completion_lock);
-}
-
-static inline void io_cq_unlock(struct io_ring_ctx *ctx)
-{
- spin_unlock(&ctx->completion_lock);
-}
-
void io_cq_unlock_post(struct io_ring_ctx *ctx);
static inline struct io_uring_cqe *io_get_cqe_overflow(struct io_ring_ctx *ctx,
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 4c6a5666541c..eae005b2d1d2 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -624,7 +624,11 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
struct io_timeout *timeout, *tmp;
int canceled = 0;
- io_cq_lock(ctx);
+ /*
+ * completion_lock is needed for io_match_task(). Take it before
+ * timeout_lock to keep locking ordering.
+ */
+ spin_lock(&ctx->completion_lock);
spin_lock_irq(&ctx->timeout_lock);
list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
struct io_kiocb *req = cmd_to_io_kiocb(timeout);
@@ -634,6 +638,6 @@ __cold bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
canceled++;
}
spin_unlock_irq(&ctx->timeout_lock);
- io_cq_unlock_post(ctx);
+ spin_unlock(&ctx->completion_lock);
return canceled != 0;
}
--
2.38.1
* [PATCH for-next 3/4] io_uring: ease timeout flush locking requirements
2022-12-02 17:47 [PATCH for-next 0/4] some 6.2 cleanups Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 1/4] io_uring: protect cq_timeouts with timeout_lock Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 2/4] io_uring: revise completion_lock locking Pavel Begunkov
@ 2022-12-02 17:47 ` Pavel Begunkov
2022-12-02 17:47 ` [PATCH for-next 4/4] io_uring: rename __io_fill_cqe_req Pavel Begunkov
2022-12-02 20:03 ` [PATCH for-next 0/4] some 6.2 cleanups Jens Axboe
4 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2022-12-02 17:47 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We don't need completion_lock for timeout flushing, so don't take it.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 9 ++++-----
io_uring/timeout.c | 2 --
2 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 57c1c0da7648..4593016c6e37 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -570,12 +570,11 @@ static void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
{
- if (ctx->off_timeout_used || ctx->drain_active) {
+ if (ctx->off_timeout_used)
+ io_flush_timeouts(ctx);
+ if (ctx->drain_active) {
spin_lock(&ctx->completion_lock);
- if (ctx->off_timeout_used)
- io_flush_timeouts(ctx);
- if (ctx->drain_active)
- io_queue_deferred(ctx);
+ io_queue_deferred(ctx);
spin_unlock(&ctx->completion_lock);
}
if (ctx->has_evfd)
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index eae005b2d1d2..826a51bca3e4 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -50,7 +50,6 @@ static inline void io_put_req(struct io_kiocb *req)
}
static bool io_kill_timeout(struct io_kiocb *req, int status)
- __must_hold(&req->ctx->completion_lock)
__must_hold(&req->ctx->timeout_lock)
{
struct io_timeout_data *io = req->async_data;
@@ -70,7 +69,6 @@ static bool io_kill_timeout(struct io_kiocb *req, int status)
}
__cold void io_flush_timeouts(struct io_ring_ctx *ctx)
- __must_hold(&ctx->completion_lock)
{
u32 seq;
struct io_timeout *timeout, *tmp;
--
2.38.1
* [PATCH for-next 4/4] io_uring: rename __io_fill_cqe_req
2022-12-02 17:47 [PATCH for-next 0/4] some 6.2 cleanups Pavel Begunkov
` (2 preceding siblings ...)
2022-12-02 17:47 ` [PATCH for-next 3/4] io_uring: ease timeout flush locking requirements Pavel Begunkov
@ 2022-12-02 17:47 ` Pavel Begunkov
2022-12-02 20:03 ` [PATCH for-next 0/4] some 6.2 cleanups Jens Axboe
4 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2022-12-02 17:47 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
There is no io_fill_cqe_req(), remove the prefix from
__io_fill_cqe_req().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 4 ++--
io_uring/io_uring.h | 2 +-
io_uring/rw.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4593016c6e37..436b1ac8f6d0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -894,7 +894,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
io_cq_lock(ctx);
if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req(ctx, req);
+ io_fill_cqe_req(ctx, req);
/*
* If we're the last reference to this request, add to our locked
@@ -1405,7 +1405,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
comp_list);
if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req(ctx, req);
+ io_fill_cqe_req(ctx, req);
}
io_cq_unlock_post_inline(ctx);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index ff84c0cfa2f2..62227ec3260c 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -110,7 +110,7 @@ static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
return io_get_cqe_overflow(ctx, false);
}
-static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
+static inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
struct io_kiocb *req)
{
struct io_uring_cqe *cqe;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 1ce065709724..1ecce80508ee 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1061,7 +1061,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
continue;
req->cqe.flags = io_put_kbuf(req, 0);
- __io_fill_cqe_req(req->ctx, req);
+ io_fill_cqe_req(req->ctx, req);
}
if (unlikely(!nr_events))
--
2.38.1
* Re: [PATCH for-next 0/4] some 6.2 cleanups
2022-12-02 17:47 [PATCH for-next 0/4] some 6.2 cleanups Pavel Begunkov
` (3 preceding siblings ...)
2022-12-02 17:47 ` [PATCH for-next 4/4] io_uring: rename __io_fill_cqe_req Pavel Begunkov
@ 2022-12-02 20:03 ` Jens Axboe
2022-12-14 16:06 ` Jens Axboe
4 siblings, 1 reply; 7+ messages in thread
From: Jens Axboe @ 2022-12-02 20:03 UTC (permalink / raw)
To: io-uring, Pavel Begunkov
On Fri, 02 Dec 2022 17:47:21 +0000, Pavel Begunkov wrote:
> Random cleanups, mostly around locking and timeouts. Even though 1/4 is
> marked for stable it shouldn't be too important.
>
> Pavel Begunkov (4):
> io_uring: protect cq_timeouts with timeout_lock
> io_uring: revise completion_lock locking
> io_uring: ease timeout flush locking requirements
> io_uring: rename __io_fill_cqe_req
>
> [...]
Applied, thanks!
[1/4] io_uring: protect cq_timeouts with timeout_lock
commit: f9df7554e30aa244a860a09ba8f68d9d25f5d1fb
[2/4] io_uring: revise completion_lock locking
commit: f12da342ec9c81fd109dfbafa05f3b17ddd88b2a
[3/4] io_uring: ease timeout flush locking requirements
commit: 4124d26a5930e5e259ea5452866749dc385b5144
[4/4] io_uring: rename __io_fill_cqe_req
commit: 03d5549e3cb7b7f26147fd27f9627c1b4851807b
Best regards,
--
Jens Axboe
* Re: [PATCH for-next 0/4] some 6.2 cleanups
2022-12-02 20:03 ` [PATCH for-next 0/4] some 6.2 cleanups Jens Axboe
@ 2022-12-14 16:06 ` Jens Axboe
0 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2022-12-14 16:06 UTC (permalink / raw)
To: io-uring, Pavel Begunkov
On 12/2/22 1:03 PM, Jens Axboe wrote:
>
> On Fri, 02 Dec 2022 17:47:21 +0000, Pavel Begunkov wrote:
>> Random cleanups, mostly around locking and timeouts. Even though 1/4 is
>> marked for stable it shouldn't be too important.
>>
>> Pavel Begunkov (4):
>> io_uring: protect cq_timeouts with timeout_lock
>> io_uring: revise completion_lock locking
>> io_uring: ease timeout flush locking requirements
>> io_uring: rename __io_fill_cqe_req
>>
>> [...]
>
> Applied, thanks!
>
> [1/4] io_uring: protect cq_timeouts with timeout_lock
> commit: f9df7554e30aa244a860a09ba8f68d9d25f5d1fb
> [2/4] io_uring: revise completion_lock locking
> commit: f12da342ec9c81fd109dfbafa05f3b17ddd88b2a
> [3/4] io_uring: ease timeout flush locking requirements
> commit: 4124d26a5930e5e259ea5452866749dc385b5144
> [4/4] io_uring: rename __io_fill_cqe_req
> commit: 03d5549e3cb7b7f26147fd27f9627c1b4851807b
I apparently fat fingered applying these as part of moving things
around, and as a result, there were not in the tree for 6.2. I've
applied them again in io_uring-6.2 - please take a look, as I needed to
hand apply 1 and 4 because of the other locking changes.
--
Jens Axboe