* [PATCH for-next 1/7] io_uring: add completion locking for iopoll
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
There are pieces of code that may allow iopoll to race with filling
CQEs; temporarily add spinlocking around posting events.
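For reference, the CQE posting path in io_do_iopoll() looks roughly
like this with the lock applied (a simplified sketch based on the diff
below; the per-request completion checks inside the loop are elided):

	spin_lock(&ctx->completion_lock);
	prev = start;
	wq_list_for_each_resume(pos, prev) {
		struct io_kiocb *req = container_of(pos, struct io_kiocb,
						    comp_list);

		/* ... skip requests that haven't completed yet ... */
		req->cqe.flags = io_put_kbuf(req, 0);
		__io_fill_cqe_req(req->ctx, req);
	}
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);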
Cc: [email protected]
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/rw.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 1ce065709724..61c326831949 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1049,6 +1049,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
else if (!pos)
return 0;
+ spin_lock(&ctx->completion_lock);
prev = start;
wq_list_for_each_resume(pos, prev) {
struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
@@ -1063,11 +1064,11 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
req->cqe.flags = io_put_kbuf(req, 0);
__io_fill_cqe_req(req->ctx, req);
}
-
+ io_commit_cqring(ctx);
+ spin_unlock(&ctx->completion_lock);
if (unlikely(!nr_events))
return 0;
- io_commit_cqring(ctx);
io_cqring_ev_posted_iopoll(ctx);
pos = start ? start->next : ctx->iopoll_list.first;
wq_list_cut(&ctx->iopoll_list, prev, start);
--
2.38.1
* [PATCH for-next 2/7] io_uring: hold locks for io_req_complete_failed
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
A preparation patch: make sure we always hold uring_lock around
io_req_complete_failed(). The only place deviating from the rule
is io_cancel_defer_files(); queue a tw (task_work item) there instead.
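The rule is expressed both as a static annotation and as a runtime
check; sketched from the diff below (function body elided):

	void io_req_complete_failed(struct io_kiocb *req, s32 res)
		__must_hold(&ctx->uring_lock)
	{
		/* checked at runtime when lockdep is enabled */
		lockdep_assert_held(&req->ctx->uring_lock);
		/* ... set the result and complete the request ... */
	}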
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2087ada65284..a4e6866f24c8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -861,9 +861,12 @@ inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
}
void io_req_complete_failed(struct io_kiocb *req, s32 res)
+ __must_hold(&ctx->uring_lock)
{
const struct io_op_def *def = &io_op_defs[req->opcode];
+ lockdep_assert_held(&req->ctx->uring_lock);
+
req_set_fail(req);
io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
if (def->fail)
@@ -1614,6 +1617,7 @@ static u32 io_get_sequence(struct io_kiocb *req)
}
static __cold void io_drain_req(struct io_kiocb *req)
+ __must_hold(&ctx->uring_lock)
{
struct io_ring_ctx *ctx = req->ctx;
struct io_defer_entry *de;
@@ -2847,7 +2851,7 @@ static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
while (!list_empty(&list)) {
de = list_first_entry(&list, struct io_defer_entry, list);
list_del_init(&de->list);
- io_req_complete_failed(de->req, -ECANCELED);
+ io_req_task_queue_fail(de->req, -ECANCELED);
kfree(de);
}
return true;
--
2.38.1
* [PATCH for-next 3/7] io_uring: use io_req_task_complete() in timeout
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Use the more generic io_req_task_complete() in the timeout completion
task_work instead of io_req_complete_post().
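io_req_task_complete() dispatches on the task_work locking state,
roughly (as visible in the current tree):

	void io_req_task_complete(struct io_kiocb *req, bool *locked)
	{
		if (*locked)
			io_req_complete_defer(req); /* batched under ->uring_lock */
		else
			io_req_complete_post(req);
	}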
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/timeout.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index e8a8c2099480..a819818df7b3 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -282,11 +282,11 @@ static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
ret = io_try_cancel(req->task->io_uring, &cd, issue_flags);
}
io_req_set_res(req, ret ?: -ETIME, 0);
- io_req_complete_post(req);
+ io_req_task_complete(req, locked);
io_put_req(prev);
} else {
io_req_set_res(req, -ETIME, 0);
- io_req_complete_post(req);
+ io_req_task_complete(req, locked);
}
}
--
2.38.1
* [PATCH for-next 4/7] io_uring: remove io_req_tw_post_queue
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Remove io_req_tw_post() and io_req_tw_post_queue(); we can use
io_req_task_complete() instead.
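The conversion is mechanical for callers passing cflags == 0, e.g. in
io_disarm_next():

	-	io_req_tw_post_queue(link, -ECANCELED, 0);
	+	io_req_queue_tw_complete(link, -ECANCELED);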
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 12 ------------
io_uring/io_uring.h | 8 +++++++-
io_uring/timeout.c | 6 +++---
3 files changed, 10 insertions(+), 16 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a4e6866f24c8..81e7e51816fb 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1225,18 +1225,6 @@ int io_run_local_work(struct io_ring_ctx *ctx)
return ret;
}
-static void io_req_tw_post(struct io_kiocb *req, bool *locked)
-{
- io_req_complete_post(req);
-}
-
-void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
-{
- io_req_set_res(req, res, cflags);
- req->io_task_work.func = io_req_tw_post;
- io_req_task_work_add(req);
-}
-
static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
{
/* not needed for normal modes, but SQPOLL depends on it */
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index af3f82bd4017..002b6cc842a5 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -50,7 +50,6 @@ static inline bool io_req_ffs_set(struct io_kiocb *req)
void __io_req_task_work_add(struct io_kiocb *req, bool allow_local);
bool io_is_uring_fops(struct file *file);
bool io_alloc_async_data(struct io_kiocb *req);
-void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags);
void io_req_task_queue(struct io_kiocb *req);
void io_queue_iowq(struct io_kiocb *req, bool *dont_use);
void io_req_task_complete(struct io_kiocb *req, bool *locked);
@@ -366,4 +365,11 @@ static inline bool io_allowed_run_tw(struct io_ring_ctx *ctx)
ctx->submitter_task == current);
}
+static inline void io_req_queue_tw_complete(struct io_kiocb *req, s32 res)
+{
+ io_req_set_res(req, res, 0);
+ req->io_task_work.func = io_req_task_complete;
+ io_req_task_work_add(req);
+}
+
#endif
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index a819818df7b3..5b4bc93fd6e0 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -63,7 +63,7 @@ static bool io_kill_timeout(struct io_kiocb *req, int status)
atomic_set(&req->ctx->cq_timeouts,
atomic_read(&req->ctx->cq_timeouts) + 1);
list_del_init(&timeout->list);
- io_req_tw_post_queue(req, status, 0);
+ io_req_queue_tw_complete(req, status);
return true;
}
return false;
@@ -159,7 +159,7 @@ void io_disarm_next(struct io_kiocb *req)
req->flags &= ~REQ_F_ARM_LTIMEOUT;
if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
io_remove_next_linked(req);
- io_req_tw_post_queue(link, -ECANCELED, 0);
+ io_req_queue_tw_complete(link, -ECANCELED);
}
} else if (req->flags & REQ_F_LINK_TIMEOUT) {
struct io_ring_ctx *ctx = req->ctx;
@@ -168,7 +168,7 @@ void io_disarm_next(struct io_kiocb *req)
link = io_disarm_linked_timeout(req);
spin_unlock_irq(&ctx->timeout_lock);
if (link)
- io_req_tw_post_queue(link, -ECANCELED, 0);
+ io_req_queue_tw_complete(link, -ECANCELED);
}
if (unlikely((req->flags & REQ_F_FAIL) &&
!(req->flags & REQ_F_HARDLINK)))
--
2.38.1
* [PATCH for-next 5/7] io_uring: inline __io_req_complete_put()
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Inline __io_req_complete_put() into io_req_complete_post(); there are
no other users.
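After inlining, io_req_complete_post() reads as one linear function,
roughly (free-list caching details elided; see the diff below):

	void io_req_complete_post(struct io_kiocb *req)
	{
		struct io_ring_ctx *ctx = req->ctx;

		io_cq_lock(ctx);
		if (!(req->flags & REQ_F_CQE_SKIP))
			__io_fill_cqe_req(ctx, req);
		/* on the last reference, disarm links and cache the
		 * request on the locked free_list */
		if (req_ref_put_and_test(req)) {
			/* ... */
		}
		io_cq_unlock_post(ctx);
	}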
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 81e7e51816fb..bd9b286eb031 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -813,15 +813,19 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
return filled;
}
-static void __io_req_complete_put(struct io_kiocb *req)
+void io_req_complete_post(struct io_kiocb *req)
{
+ struct io_ring_ctx *ctx = req->ctx;
+
+ io_cq_lock(ctx);
+ if (!(req->flags & REQ_F_CQE_SKIP))
+ __io_fill_cqe_req(ctx, req);
+
/*
* If we're the last reference to this request, add to our locked
* free_list cache.
*/
if (req_ref_put_and_test(req)) {
- struct io_ring_ctx *ctx = req->ctx;
-
if (req->flags & IO_REQ_LINK_FLAGS) {
if (req->flags & IO_DISARM_MASK)
io_disarm_next(req);
@@ -842,16 +846,6 @@ static void __io_req_complete_put(struct io_kiocb *req)
wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
ctx->locked_free_nr++;
}
-}
-
-void io_req_complete_post(struct io_kiocb *req)
-{
- struct io_ring_ctx *ctx = req->ctx;
-
- io_cq_lock(ctx);
- if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req(ctx, req);
- __io_req_complete_put(req);
io_cq_unlock_post(ctx);
}
--
2.38.1
* [PATCH for-next 6/7] io_uring: iopoll protect complete_post
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
io_req_complete_post() may be used by IOPOLL-enabled rings; grab the
lock in that case. That requires passing issue_flags through so the
helper knows the current locking state.
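The resulting dispatch, roughly: complete inline when the ring is not
IOPOLL or the caller already holds ->uring_lock, otherwise take the
lock around the completion (a sketch of the diff below):

	void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
	{
		if (!(issue_flags & IO_URING_F_UNLOCKED) ||
		    !(req->ctx->flags & IORING_SETUP_IOPOLL)) {
			__io_req_complete_post(req);
		} else {
			struct io_ring_ctx *ctx = req->ctx;

			/* serialise with IOPOLL reaping */
			mutex_lock(&ctx->uring_lock);
			__io_req_complete_post(req);
			mutex_unlock(&ctx->uring_lock);
		}
	}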
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 21 +++++++++++++++------
io_uring/io_uring.h | 10 ++++++++--
io_uring/kbuf.c | 4 ++--
io_uring/poll.c | 2 +-
io_uring/uring_cmd.c | 2 +-
5 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index bd9b286eb031..54a9966bbb47 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -813,7 +813,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx,
return filled;
}
-void io_req_complete_post(struct io_kiocb *req)
+static void __io_req_complete_post(struct io_kiocb *req)
{
struct io_ring_ctx *ctx = req->ctx;
@@ -849,9 +849,18 @@ void io_req_complete_post(struct io_kiocb *req)
io_cq_unlock_post(ctx);
}
-inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
+void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
{
- io_req_complete_post(req);
+ if (!(issue_flags & IO_URING_F_UNLOCKED) ||
+ !(req->ctx->flags & IORING_SETUP_IOPOLL)) {
+ __io_req_complete_post(req);
+ } else {
+ struct io_ring_ctx *ctx = req->ctx;
+
+ mutex_lock(&ctx->uring_lock);
+ __io_req_complete_post(req);
+ mutex_unlock(&ctx->uring_lock);
+ }
}
void io_req_complete_failed(struct io_kiocb *req, s32 res)
@@ -865,7 +874,7 @@ void io_req_complete_failed(struct io_kiocb *req, s32 res)
io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
if (def->fail)
def->fail(req);
- io_req_complete_post(req);
+ io_req_complete_post(req, 0);
}
/*
@@ -1449,7 +1458,7 @@ void io_req_task_complete(struct io_kiocb *req, bool *locked)
if (*locked)
io_req_complete_defer(req);
else
- io_req_complete_post(req);
+ io_req_complete_post_tw(req, locked);
}
/*
@@ -1717,7 +1726,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
if (issue_flags & IO_URING_F_COMPLETE_DEFER)
io_req_complete_defer(req);
else
- io_req_complete_post(req);
+ io_req_complete_post(req, issue_flags);
} else if (ret != IOU_ISSUE_SKIP_COMPLETE)
return ret;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 002b6cc842a5..e908966f081a 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -30,12 +30,18 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx);
int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
int io_run_local_work(struct io_ring_ctx *ctx);
void io_req_complete_failed(struct io_kiocb *req, s32 res);
-void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
-void io_req_complete_post(struct io_kiocb *req);
+void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags);
bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+static inline void io_req_complete_post_tw(struct io_kiocb *req, bool *locked)
+{
+ unsigned flags = *locked ? 0 : IO_URING_F_UNLOCKED;
+
+ io_req_complete_post(req, flags);
+}
+
struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
struct file *io_file_get_normal(struct io_kiocb *req, int fd);
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 25cd724ade18..c33b53b7f435 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -311,7 +311,7 @@ int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
/* complete before unlock, IOPOLL may need the lock */
io_req_set_res(req, ret, 0);
- __io_req_complete(req, issue_flags);
+ io_req_complete_post(req, 0);
io_ring_submit_unlock(ctx, issue_flags);
return IOU_ISSUE_SKIP_COMPLETE;
}
@@ -460,7 +460,7 @@ int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
req_set_fail(req);
/* complete before unlock, IOPOLL may need the lock */
io_req_set_res(req, ret, 0);
- __io_req_complete(req, issue_flags);
+ io_req_complete_post(req, 0);
io_ring_submit_unlock(ctx, issue_flags);
return IOU_ISSUE_SKIP_COMPLETE;
}
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 2830b7daf952..583fc0d745a6 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -298,7 +298,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
io_poll_tw_hash_eject(req, locked);
if (ret == IOU_POLL_REMOVE_POLL_USE_RES)
- io_req_complete_post(req);
+ io_req_complete_post_tw(req, locked);
else if (ret == IOU_POLL_DONE)
io_req_task_submit(req, locked);
else
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index e50de0b6b9f8..446a189b78b0 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -56,7 +56,7 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
/* order with io_iopoll_req_issued() checking ->iopoll_complete */
smp_store_release(&req->iopoll_completed, 1);
else
- __io_req_complete(req, 0);
+ io_req_complete_post(req, 0);
}
EXPORT_SYMBOL_GPL(io_uring_cmd_done);
--
2.38.1
* [PATCH for-next 7/7] io_uring: remove iopoll spinlock
From: Pavel Begunkov @ 2022-11-23 11:33 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
This reverts commit 85f0e5da546bb215a27103ac8c698b8f60309ee0
("io_uring: add completion locking for iopoll").

io_req_complete_post() should now behave well even in the IOPOLL case,
so we can remove the completion_lock locking: IOPOLL reaping runs under
->uring_lock, which io_req_complete_post() now also takes for IOPOLL
rings.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/rw.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 61c326831949..1ce065709724 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1049,7 +1049,6 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
else if (!pos)
return 0;
- spin_lock(&ctx->completion_lock);
prev = start;
wq_list_for_each_resume(pos, prev) {
struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
@@ -1064,11 +1063,11 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
req->cqe.flags = io_put_kbuf(req, 0);
__io_fill_cqe_req(req->ctx, req);
}
- io_commit_cqring(ctx);
- spin_unlock(&ctx->completion_lock);
+
if (unlikely(!nr_events))
return 0;
+ io_commit_cqring(ctx);
io_cqring_ev_posted_iopoll(ctx);
pos = start ? start->next : ctx->iopoll_list.first;
wq_list_cut(&ctx->iopoll_list, prev, start);
--
2.38.1
* Re: [PATCH for-next 0/7] iopoll cqe posting fixes
From: Jens Axboe @ 2022-11-23 17:51 UTC
To: io-uring, Pavel Begunkov
On Wed, 23 Nov 2022 11:33:35 +0000, Pavel Begunkov wrote:
> We need to fix up a few more spots for IOPOLL. 1/7 adds locking
> and is intended to be backported; patches 2-5 prepare the code,
> 6/7 fixes the problem, and 7/7 reverts the first patch for-next.
>
> Pavel Begunkov (7):
> io_uring: add completion locking for iopoll
> io_uring: hold locks for io_req_complete_failed
> io_uring: use io_req_task_complete() in timeout
> io_uring: remove io_req_tw_post_queue
> io_uring: inline __io_req_complete_put()
> io_uring: iopoll protect complete_post
> io_uring: remove iopoll spinlock
>
> [...]
Applied, thanks!
[1/7] io_uring: add completion locking for iopoll
commit: 2ccc92f4effcfa1c51c4fcf1e34d769099d3cad4
[2/7] io_uring: hold locks for io_req_complete_failed
commit: e276ae344a770f91912a81c6a338d92efd319be2
[3/7] io_uring: use io_req_task_complete() in timeout
commit: 624fd779fd869bdcb2c0ccca0f09456eed71ed52
[4/7] io_uring: remove io_req_tw_post_queue
commit: 833b5dfffc26c81835ce38e2a5df9ac5fa142735
[5/7] io_uring: inline __io_req_complete_put()
commit: fa18fa2272c7469e470dcb7bf838ea50a25494ca
[6/7] io_uring: iopoll protect complete_post
commit: 1bec951c3809051f64a6957fe86d1b4786cc0313
[7/7] io_uring: remove iopoll spinlock
commit: 2dac1a159216b39ced8d78dba590c5d2f4249586
Best regards,
--
Jens Axboe