* [PATCH 00/14] submission path refactoring
@ 2022-04-15 21:08 Pavel Begunkov
2022-04-15 21:08 ` [PATCH 01/14] io_uring: clean poll tw PF_EXITING handling Pavel Begunkov
` (14 more replies)
0 siblings, 15 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Lots of cleanups, most of the patches improve the submission path.
Pavel Begunkov (14):
io_uring: clean poll tw PF_EXITING handling
io_uring: add a helper for putting rsrc nodes
io_uring: minor refactoring for some tw handlers
io_uring: kill io_put_req_deferred()
io_uring: inline io_free_req()
io_uring: helper for prep+queuing linked timeouts
io_uring: inline io_queue_sqe()
io_uring: rename io_queue_async_work()
io_uring: refactor io_queue_sqe()
io_uring: introduce IO_REQ_LINK_FLAGS
io_uring: refactor lazy link fail
io_uring: refactor io_submit_sqe()
io_uring: inline io_req_complete_fail_submit()
io_uring: add data_race annotations
fs/io_uring.c | 287 +++++++++++++++++++++++---------------------------
1 file changed, 134 insertions(+), 153 deletions(-)
--
2.35.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 01/14] io_uring: clean poll tw PF_EXITING handling
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 02/14] io_uring: add a helper for putting rsrc nodes Pavel Begunkov
` (13 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
When we encounter PF_EXITING in io_poll_check_events(), don't
overcomplicate the code with io_poll_mark_cancelled(); just return
-ECANCELED and let the callers deal with the rest.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 99e6c14d2a47..4bc3b20b7f85 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5899,7 +5899,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool locked)
/* req->task == current here, checking PF_EXITING is safe */
if (unlikely(req->task->flags & PF_EXITING))
- io_poll_mark_cancelled(req);
+ return -ECANCELED;
do {
v = atomic_read(&req->poll_refs);
--
2.35.2
* [PATCH 02/14] io_uring: add a helper for putting rsrc nodes
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
2022-04-15 21:08 ` [PATCH 01/14] io_uring: clean poll tw PF_EXITING handling Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-18 0:05 ` Jens Axboe
2022-04-15 21:08 ` [PATCH 03/14] io_uring: minor refactoring for some tw handlers Pavel Begunkov
` (12 subsequent siblings)
14 siblings, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Add a simple helper encapsulating the dropping of rsrc node references;
it's cleaner and will help if we later change rsrc refcounting or play
with percpu_ref_put() [no]inlining.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4bc3b20b7f85..b24d65480c08 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1327,6 +1327,11 @@ static inline void io_req_set_refcount(struct io_kiocb *req)
#define IO_RSRC_REF_BATCH 100
+static void io_rsrc_put_node(struct io_rsrc_node *node, int nr)
+{
+ percpu_ref_put_many(&node->refs, nr);
+}
+
static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
struct io_ring_ctx *ctx)
__must_hold(&ctx->uring_lock)
@@ -1337,21 +1342,21 @@ static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
if (node == ctx->rsrc_node)
ctx->rsrc_cached_refs++;
else
- percpu_ref_put(&node->refs);
+ io_rsrc_put_node(node, 1);
}
}
static inline void io_req_put_rsrc(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
if (req->rsrc_node)
- percpu_ref_put(&req->rsrc_node->refs);
+ io_rsrc_put_node(req->rsrc_node, 1);
}
static __cold void io_rsrc_refs_drop(struct io_ring_ctx *ctx)
__must_hold(&ctx->uring_lock)
{
if (ctx->rsrc_cached_refs) {
- percpu_ref_put_many(&ctx->rsrc_node->refs, ctx->rsrc_cached_refs);
+ io_rsrc_put_node(ctx->rsrc_node, ctx->rsrc_cached_refs);
ctx->rsrc_cached_refs = 0;
}
}
--
2.35.2
* [PATCH 03/14] io_uring: minor refactoring for some tw handlers
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
2022-04-15 21:08 ` [PATCH 01/14] io_uring: clean poll tw PF_EXITING handling Pavel Begunkov
2022-04-15 21:08 ` [PATCH 02/14] io_uring: add a helper for putting rsrc nodes Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 04/14] io_uring: kill io_put_req_deferred() Pavel Begunkov
` (11 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Get rid of some useless local variables.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b24d65480c08..986a2d640702 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1734,7 +1734,6 @@ static inline void io_req_add_compl_list(struct io_kiocb *req)
static void io_queue_async_work(struct io_kiocb *req, bool *dont_use)
{
- struct io_ring_ctx *ctx = req->ctx;
struct io_kiocb *link = io_prep_linked_timeout(req);
struct io_uring_task *tctx = req->task->io_uring;
@@ -1754,8 +1753,9 @@ static void io_queue_async_work(struct io_kiocb *req, bool *dont_use)
if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
req->work.flags |= IO_WQ_WORK_CANCEL;
- trace_io_uring_queue_async_work(ctx, req, req->cqe.user_data, req->opcode, req->flags,
- &req->work, io_wq_is_hashed(&req->work));
+ trace_io_uring_queue_async_work(req->ctx, req, req->cqe.user_data,
+ req->opcode, req->flags, &req->work,
+ io_wq_is_hashed(&req->work));
io_wq_enqueue(tctx->io_wq, &req->work);
if (link)
io_queue_linked_timeout(link);
@@ -2647,18 +2647,14 @@ static void io_req_task_work_add(struct io_kiocb *req, bool priority)
static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
{
- struct io_ring_ctx *ctx = req->ctx;
-
/* not needed for normal modes, but SQPOLL depends on it */
- io_tw_lock(ctx, locked);
+ io_tw_lock(req->ctx, locked);
io_req_complete_failed(req, req->cqe.res);
}
static void io_req_task_submit(struct io_kiocb *req, bool *locked)
{
- struct io_ring_ctx *ctx = req->ctx;
-
- io_tw_lock(ctx, locked);
+ io_tw_lock(req->ctx, locked);
/* req->task == current here, checking PF_EXITING is safe */
if (likely(!(req->task->flags & PF_EXITING)))
__io_queue_sqe(req);
--
2.35.2
* [PATCH 04/14] io_uring: kill io_put_req_deferred()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (2 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 03/14] io_uring: minor refactoring for some tw handlers Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 05/14] io_uring: inline io_free_req() Pavel Begunkov
` (10 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We have several spots where a call to io_fill_cqe_req() is immediately
followed by io_put_req_deferred(). Replace them with
__io_req_complete_post() and get rid of io_put_req_deferred() and
io_fill_cqe_req().
Before:
> size ./fs/io_uring.o
text data bss dec hex filename
86942 13734 8 100684 1894c ./fs/io_uring.o
After:
> size ./fs/io_uring.o
text data bss dec hex filename
86438 13654 8 100100 18704 ./fs/io_uring.o
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 42 ++++++++----------------------------------
1 file changed, 8 insertions(+), 34 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 986a2d640702..92d7c7a0d234 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1188,10 +1188,8 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
bool cancel_all);
static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
-static void io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags);
-
+static void __io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags);
static void io_put_req(struct io_kiocb *req);
-static void io_put_req_deferred(struct io_kiocb *req);
static void io_dismantle_req(struct io_kiocb *req);
static void io_queue_linked_timeout(struct io_kiocb *req);
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
@@ -1773,8 +1771,7 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
atomic_set(&req->ctx->cq_timeouts,
atomic_read(&req->ctx->cq_timeouts) + 1);
list_del_init(&req->timeout.list);
- io_fill_cqe_req(req, status, 0);
- io_put_req_deferred(req);
+ __io_req_complete_post(req, status, 0);
}
}
@@ -2137,12 +2134,6 @@ static inline bool __io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
return __io_fill_cqe(req->ctx, req->cqe.user_data, res, cflags);
}
-static noinline void io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
-{
- if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req(req, res, cflags);
-}
-
static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
s32 res, u32 cflags)
{
@@ -2376,9 +2367,7 @@ static bool io_kill_linked_timeout(struct io_kiocb *req)
link->timeout.head = NULL;
if (hrtimer_try_to_cancel(&io->timer) != -1) {
list_del(&link->timeout.list);
- /* leave REQ_F_CQE_SKIP to io_fill_cqe_req */
- io_fill_cqe_req(link, -ECANCELED, 0);
- io_put_req_deferred(link);
+ __io_req_complete_post(link, -ECANCELED, 0);
return true;
}
}
@@ -2404,11 +2393,11 @@ static void io_fail_links(struct io_kiocb *req)
trace_io_uring_fail_link(req->ctx, req, req->cqe.user_data,
req->opcode, link);
- if (!ignore_cqes) {
+ if (ignore_cqes)
+ link->flags |= REQ_F_CQE_SKIP;
+ else
link->flags &= ~REQ_F_CQE_SKIP;
- io_fill_cqe_req(link, res, 0);
- }
- io_put_req_deferred(link);
+ __io_req_complete_post(link, res, 0);
link = nxt;
}
}
@@ -2424,9 +2413,7 @@ static bool io_disarm_next(struct io_kiocb *req)
req->flags &= ~REQ_F_ARM_LTIMEOUT;
if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
io_remove_next_linked(req);
- /* leave REQ_F_CQE_SKIP to io_fill_cqe_req */
- io_fill_cqe_req(link, -ECANCELED, 0);
- io_put_req_deferred(link);
+ __io_req_complete_post(link, -ECANCELED, 0);
posted = true;
}
} else if (req->flags & REQ_F_LINK_TIMEOUT) {
@@ -2695,11 +2682,6 @@ static void io_free_req(struct io_kiocb *req)
__io_free_req(req);
}
-static void io_free_req_work(struct io_kiocb *req, bool *locked)
-{
- io_free_req(req);
-}
-
static void io_free_batch_list(struct io_ring_ctx *ctx,
struct io_wq_work_node *node)
__must_hold(&ctx->uring_lock)
@@ -2799,14 +2781,6 @@ static inline void io_put_req(struct io_kiocb *req)
io_free_req(req);
}
-static inline void io_put_req_deferred(struct io_kiocb *req)
-{
- if (req_ref_put_and_test(req)) {
- req->io_task_work.func = io_free_req_work;
- io_req_task_work_add(req, false);
- }
-}
-
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
{
/* See comment at the top of this file */
--
2.35.2
* [PATCH 05/14] io_uring: inline io_free_req()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (3 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 04/14] io_uring: kill io_put_req_deferred() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 06/14] io_uring: helper for prep+queuing linked timeouts Pavel Begunkov
` (9 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Inline io_free_req() into its only user and remove the underscore prefix
from __io_free_req().
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 92d7c7a0d234..d872c9b5885d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1189,7 +1189,6 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
static void __io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags);
-static void io_put_req(struct io_kiocb *req);
static void io_dismantle_req(struct io_kiocb *req);
static void io_queue_linked_timeout(struct io_kiocb *req);
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
@@ -2332,7 +2331,7 @@ static inline void io_dismantle_req(struct io_kiocb *req)
io_put_file(req->file);
}
-static __cold void __io_free_req(struct io_kiocb *req)
+static __cold void io_free_req(struct io_kiocb *req)
{
struct io_ring_ctx *ctx = req->ctx;
@@ -2676,12 +2675,6 @@ static void io_queue_next(struct io_kiocb *req)
io_req_task_queue(nxt);
}
-static void io_free_req(struct io_kiocb *req)
-{
- io_queue_next(req);
- __io_free_req(req);
-}
-
static void io_free_batch_list(struct io_ring_ctx *ctx,
struct io_wq_work_node *node)
__must_hold(&ctx->uring_lock)
@@ -2770,15 +2763,17 @@ static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
if (req_ref_put_and_test(req)) {
if (unlikely(req->flags & (REQ_F_LINK|REQ_F_HARDLINK)))
nxt = io_req_find_next(req);
- __io_free_req(req);
+ io_free_req(req);
}
return nxt;
}
static inline void io_put_req(struct io_kiocb *req)
{
- if (req_ref_put_and_test(req))
+ if (req_ref_put_and_test(req)) {
+ io_queue_next(req);
io_free_req(req);
+ }
}
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
--
2.35.2
* [PATCH 06/14] io_uring: helper for prep+queuing linked timeouts
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (4 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 05/14] io_uring: inline io_free_req() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 07/14] io_uring: inline io_queue_sqe() Pavel Begunkov
` (8 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We try to aggressively inline the submission path, so it's a good idea
not to pollute it with colder code. One such case is linked timeout
preparation + queueing, which can be extracted into a function.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index d872c9b5885d..df588e4d3bee 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1679,6 +1679,17 @@ static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
return __io_prep_linked_timeout(req);
}
+static noinline void __io_arm_ltimeout(struct io_kiocb *req)
+{
+ io_queue_linked_timeout(__io_prep_linked_timeout(req));
+}
+
+static inline void io_arm_ltimeout(struct io_kiocb *req)
+{
+ if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
+ __io_arm_ltimeout(req);
+}
+
static void io_prep_async_work(struct io_kiocb *req)
{
const struct io_op_def *def = &io_op_defs[req->opcode];
@@ -7283,7 +7294,6 @@ static void io_wq_submit_work(struct io_wq_work *work)
const struct io_op_def *def = &io_op_defs[req->opcode];
unsigned int issue_flags = IO_URING_F_UNLOCKED;
bool needs_poll = false;
- struct io_kiocb *timeout;
int ret = 0, err = -ECANCELED;
/* one will be dropped by ->io_free_work() after returning to io-wq */
@@ -7292,10 +7302,7 @@ static void io_wq_submit_work(struct io_wq_work *work)
else
req_ref_get(req);
- timeout = io_prep_linked_timeout(req);
- if (timeout)
- io_queue_linked_timeout(timeout);
-
+ io_arm_ltimeout(req);
/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
if (work->flags & IO_WQ_WORK_CANCEL) {
@@ -7508,7 +7515,6 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
static inline void __io_queue_sqe(struct io_kiocb *req)
__must_hold(&req->ctx->uring_lock)
{
- struct io_kiocb *linked_timeout;
int ret;
ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
@@ -7522,9 +7528,7 @@ static inline void __io_queue_sqe(struct io_kiocb *req)
* doesn't support non-blocking read/write attempts
*/
if (likely(!ret)) {
- linked_timeout = io_prep_linked_timeout(req);
- if (linked_timeout)
- io_queue_linked_timeout(linked_timeout);
+ io_arm_ltimeout(req);
} else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
io_queue_sqe_arm_apoll(req);
} else {
--
2.35.2
* [PATCH 07/14] io_uring: inline io_queue_sqe()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (5 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 06/14] io_uring: helper for prep+queuing linked timeouts Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 08/14] io_uring: rename io_queue_async_work() Pavel Begunkov
` (7 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Inline io_queue_sqe() as there is only one caller left, and rename
__io_queue_sqe().
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index df588e4d3bee..959e244cb01d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1200,7 +1200,7 @@ static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
static inline struct file *io_file_get_normal(struct io_kiocb *req, int fd);
static void io_drop_inflight_file(struct io_kiocb *req);
static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags);
-static void __io_queue_sqe(struct io_kiocb *req);
+static void io_queue_sqe(struct io_kiocb *req);
static void io_rsrc_put_work(struct work_struct *work);
static void io_req_task_queue(struct io_kiocb *req);
@@ -2654,7 +2654,7 @@ static void io_req_task_submit(struct io_kiocb *req, bool *locked)
io_tw_lock(req->ctx, locked);
/* req->task == current here, checking PF_EXITING is safe */
if (likely(!(req->task->flags & PF_EXITING)))
- __io_queue_sqe(req);
+ io_queue_sqe(req);
else
io_req_complete_failed(req, -EFAULT);
}
@@ -7512,7 +7512,7 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
io_queue_linked_timeout(linked_timeout);
}
-static inline void __io_queue_sqe(struct io_kiocb *req)
+static inline void io_queue_sqe(struct io_kiocb *req)
__must_hold(&req->ctx->uring_lock)
{
int ret;
@@ -7553,15 +7553,6 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
}
}
-static inline void io_queue_sqe(struct io_kiocb *req)
- __must_hold(&req->ctx->uring_lock)
-{
- if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))))
- __io_queue_sqe(req);
- else
- io_queue_sqe_fallback(req);
-}
-
/*
* Check SQE restrictions (opcode and flags).
*
@@ -7762,7 +7753,11 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
return 0;
}
- io_queue_sqe(req);
+ if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))))
+ io_queue_sqe(req);
+ else
+ io_queue_sqe_fallback(req);
+
return 0;
}
--
2.35.2
* [PATCH 08/14] io_uring: rename io_queue_async_work()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (6 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 07/14] io_uring: inline io_queue_sqe() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 09/14] io_uring: refactor io_queue_sqe() Pavel Begunkov
` (6 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Rename io_queue_async_work(). The name is pretty old and no longer
reflects what the function does.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 959e244cb01d..71d79442d52a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1740,7 +1740,7 @@ static inline void io_req_add_compl_list(struct io_kiocb *req)
wq_list_add_tail(&req->comp_list, &state->compl_reqs);
}
-static void io_queue_async_work(struct io_kiocb *req, bool *dont_use)
+static void io_queue_iowq(struct io_kiocb *req, bool *dont_use)
{
struct io_kiocb *link = io_prep_linked_timeout(req);
struct io_uring_task *tctx = req->task->io_uring;
@@ -2674,7 +2674,7 @@ static void io_req_task_queue(struct io_kiocb *req)
static void io_req_task_queue_reissue(struct io_kiocb *req)
{
- req->io_task_work.func = io_queue_async_work;
+ req->io_task_work.func = io_queue_iowq;
io_req_task_work_add(req, false);
}
@@ -7502,7 +7502,7 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
* Queued up for async execution, worker will release
* submit reference when the iocb is actually submitted.
*/
- io_queue_async_work(req, NULL);
+ io_queue_iowq(req, NULL);
break;
case IO_APOLL_OK:
break;
@@ -7549,7 +7549,7 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
if (unlikely(ret))
io_req_complete_failed(req, ret);
else
- io_queue_async_work(req, NULL);
+ io_queue_iowq(req, NULL);
}
}
--
2.35.2
* [PATCH 09/14] io_uring: refactor io_queue_sqe()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (7 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 08/14] io_uring: rename io_queue_async_work() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 10/14] io_uring: introduce IO_REQ_LINK_FLAGS Pavel Begunkov
` (5 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
io_queue_sqe() is part of the submission path and we try hard to keep
it inlined, so shed some extra bytes from it by moving the error
checking part into io_queue_sqe_arm_apoll() and renaming it accordingly.
Note: io_queue_sqe_arm_apoll() is not inlined, so the patch doesn't
change the number of function calls for the apoll path.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 71d79442d52a..ef7bee562fa2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7488,10 +7488,17 @@ static void io_queue_linked_timeout(struct io_kiocb *req)
io_put_req(req);
}
-static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
+static void io_queue_async(struct io_kiocb *req, int ret)
__must_hold(&req->ctx->uring_lock)
{
- struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
+ struct io_kiocb *linked_timeout;
+
+ if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
+ io_req_complete_failed(req, ret);
+ return;
+ }
+
+ linked_timeout = io_prep_linked_timeout(req);
switch (io_arm_poll_handler(req, 0)) {
case IO_APOLL_READY:
@@ -7527,13 +7534,10 @@ static inline void io_queue_sqe(struct io_kiocb *req)
* We async punt it if the file wasn't marked NOWAIT, or if the file
* doesn't support non-blocking read/write attempts
*/
- if (likely(!ret)) {
+ if (likely(!ret))
io_arm_ltimeout(req);
- } else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
- io_queue_sqe_arm_apoll(req);
- } else {
- io_req_complete_failed(req, ret);
- }
+ else
+ io_queue_async(req, ret);
}
static void io_queue_sqe_fallback(struct io_kiocb *req)
--
2.35.2
* [PATCH 10/14] io_uring: introduce IO_REQ_LINK_FLAGS
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (8 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 09/14] io_uring: refactor io_queue_sqe() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 11/14] io_uring: refactor lazy link fail Pavel Begunkov
` (4 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Add a macro for all link request flags to avoid duplication.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ef7bee562fa2..3b9fcadb3895 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1180,6 +1180,7 @@ static const struct io_op_def io_op_defs[] = {
/* requests with any of those set should undergo io_disarm_next() */
#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
+#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
static bool io_disarm_next(struct io_kiocb *req);
static void io_uring_del_tctx_node(unsigned long index);
@@ -2164,7 +2165,7 @@ static void __io_req_complete_post(struct io_kiocb *req, s32 res,
* free_list cache.
*/
if (req_ref_put_and_test(req)) {
- if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
+ if (req->flags & IO_REQ_LINK_FLAGS) {
if (req->flags & IO_DISARM_MASK)
io_disarm_next(req);
if (req->link) {
@@ -2712,7 +2713,7 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
&ctx->apoll_cache);
req->flags &= ~REQ_F_POLLED;
}
- if (req->flags & (REQ_F_LINK|REQ_F_HARDLINK))
+ if (req->flags & IO_REQ_LINK_FLAGS)
io_queue_next(req);
if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
io_clean_op(req);
@@ -2772,7 +2773,7 @@ static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
struct io_kiocb *nxt = NULL;
if (req_ref_put_and_test(req)) {
- if (unlikely(req->flags & (REQ_F_LINK|REQ_F_HARDLINK)))
+ if (unlikely(req->flags & IO_REQ_LINK_FLAGS))
nxt = io_req_find_next(req);
io_free_req(req);
}
@@ -7708,7 +7709,7 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
*/
if (!(link->head->flags & REQ_F_FAIL))
req_fail_link_node(link->head, -ECANCELED);
- } else if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
+ } else if (!(req->flags & IO_REQ_LINK_FLAGS)) {
/*
* the current req is a normal req, we should return
* error and thus break the submittion loop.
@@ -7746,12 +7747,12 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
link->last->link = req;
link->last = req;
- if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK))
+ if (req->flags & IO_REQ_LINK_FLAGS)
return 0;
/* last request of a link, enqueue the link */
link->head = NULL;
req = head;
- } else if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
+ } else if (req->flags & IO_REQ_LINK_FLAGS) {
link->head = req;
link->last = req;
return 0;
--
2.35.2
* [PATCH 11/14] io_uring: refactor lazy link fail
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (9 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 10/14] io_uring: introduce IO_REQ_LINK_FLAGS Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 12/14] io_uring: refactor io_submit_sqe() Pavel Begunkov
` (3 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Remove the lazy link fail logic from io_submit_sqe() and hide it in a
helper. This simplifies the code and will be needed by the next patches.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 87 ++++++++++++++++++++++++++++-----------------------
1 file changed, 47 insertions(+), 40 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3b9fcadb3895..9356e6ee8a97 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7685,7 +7685,44 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
return io_req_prep(req, sqe);
}
-static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+ struct io_kiocb *req, int ret)
+{
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_submit_link *link = &ctx->submit_state.link;
+ struct io_kiocb *head = link->head;
+
+ trace_io_uring_req_failed(sqe, ctx, req, ret);
+
+ /*
+ * Avoid breaking links in the middle as it renders links with SQPOLL
+ * unusable. Instead of failing eagerly, continue assembling the link if
+ * applicable and mark the head with REQ_F_FAIL. The link flushing code
+ * should find the flag and handle the rest.
+ */
+ req_fail_link_node(req, ret);
+ if (head && !(head->flags & REQ_F_FAIL))
+ req_fail_link_node(head, -ECANCELED);
+
+ if (!(req->flags & IO_REQ_LINK_FLAGS)) {
+ if (head) {
+ link->last->link = req;
+ link->head = NULL;
+ req = head;
+ }
+ io_queue_sqe_fallback(req);
+ return ret;
+ }
+
+ if (head)
+ link->last->link = req;
+ else
+ link->head = req;
+ link->last = req;
+ return 0;
+}
+
+static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
const struct io_uring_sqe *sqe)
__must_hold(&ctx->uring_lock)
{
@@ -7693,32 +7730,8 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
int ret;
ret = io_init_req(ctx, req, sqe);
- if (unlikely(ret)) {
- trace_io_uring_req_failed(sqe, ctx, req, ret);
-
- /* fail even hard links since we don't submit */
- if (link->head) {
- /*
- * we can judge a link req is failed or cancelled by if
- * REQ_F_FAIL is set, but the head is an exception since
- * it may be set REQ_F_FAIL because of other req's failure
- * so let's leverage req->cqe.res to distinguish if a head
- * is set REQ_F_FAIL because of its failure or other req's
- * failure so that we can set the correct ret code for it.
- * init result here to avoid affecting the normal path.
- */
- if (!(link->head->flags & REQ_F_FAIL))
- req_fail_link_node(link->head, -ECANCELED);
- } else if (!(req->flags & IO_REQ_LINK_FLAGS)) {
- /*
- * the current req is a normal req, we should return
- * error and thus break the submittion loop.
- */
- io_req_complete_failed(req, ret);
- return ret;
- }
- req_fail_link_node(req, ret);
- }
+ if (unlikely(ret))
+ return io_submit_fail_init(sqe, req, ret);
/* don't need @sqe from now on */
trace_io_uring_submit_sqe(ctx, req, req->cqe.user_data, req->opcode,
@@ -7733,25 +7746,19 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
* conditions are true (normal request), then just queue it.
*/
if (link->head) {
- struct io_kiocb *head = link->head;
-
- if (!(req->flags & REQ_F_FAIL)) {
- ret = io_req_prep_async(req);
- if (unlikely(ret)) {
- req_fail_link_node(req, ret);
- if (!(head->flags & REQ_F_FAIL))
- req_fail_link_node(head, -ECANCELED);
- }
- }
- trace_io_uring_link(ctx, req, head);
+ ret = io_req_prep_async(req);
+ if (unlikely(ret))
+ return io_submit_fail_init(sqe, req, ret);
+
+ trace_io_uring_link(ctx, req, link->head);
link->last->link = req;
link->last = req;
if (req->flags & IO_REQ_LINK_FLAGS)
return 0;
- /* last request of a link, enqueue the link */
+ /* last request of the link, flush it */
+ req = link->head;
link->head = NULL;
- req = head;
} else if (req->flags & IO_REQ_LINK_FLAGS) {
link->head = req;
link->last = req;
--
2.35.2
* [PATCH 12/14] io_uring: refactor io_submit_sqe()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (10 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 11/14] io_uring: refactor lazy link fail Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 13/14] io_uring: inline io_req_complete_fail_submit() Pavel Begunkov
` (2 subsequent siblings)
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Remove one extra if from the non-linked path of io_submit_sqe().
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9356e6ee8a97..0806ac554bcf 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7745,7 +7745,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
* submitted sync once the chain is complete. If none of those
* conditions are true (normal request), then just queue it.
*/
- if (link->head) {
+ if (unlikely(link->head)) {
ret = io_req_prep_async(req);
if (unlikely(ret))
return io_submit_fail_init(sqe, req, ret);
@@ -7759,17 +7759,22 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
/* last request of the link, flush it */
req = link->head;
link->head = NULL;
- } else if (req->flags & IO_REQ_LINK_FLAGS) {
- link->head = req;
- link->last = req;
+ if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
+ goto fallback;
+
+ } else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
+ REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
+ if (req->flags & IO_REQ_LINK_FLAGS) {
+ link->head = req;
+ link->last = req;
+ } else {
+fallback:
+ io_queue_sqe_fallback(req);
+ }
return 0;
}
- if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))))
- io_queue_sqe(req);
- else
- io_queue_sqe_fallback(req);
-
+ io_queue_sqe(req);
return 0;
}
--
2.35.2
* [PATCH 13/14] io_uring: inline io_req_complete_fail_submit()
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (11 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 12/14] io_uring: refactor io_submit_sqe() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-15 21:08 ` [PATCH 14/14] io_uring: add data_race annotations Pavel Begunkov
2022-04-18 1:24 ` (subset) [PATCH 00/14] submission path refactoring Jens Axboe
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Inline io_req_complete_fail_submit(): there is only one caller, and the
name doesn't tell us much.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0806ac554bcf..a828ac740fb6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2227,17 +2227,6 @@ static void io_req_complete_failed(struct io_kiocb *req, s32 res)
io_req_complete_post(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
}
-static void io_req_complete_fail_submit(struct io_kiocb *req)
-{
- /*
- * We don't submit, fail them all, for that replace hardlinks with
- * normal links. Extra REQ_F_LINK is tolerated.
- */
- req->flags &= ~REQ_F_HARDLINK;
- req->flags |= REQ_F_LINK;
- io_req_complete_failed(req, req->cqe.res);
-}
-
/*
* Don't initialise the fields below on every allocation, but do that in
* advance and keep them valid across allocations.
@@ -7544,8 +7533,14 @@ static inline void io_queue_sqe(struct io_kiocb *req)
static void io_queue_sqe_fallback(struct io_kiocb *req)
__must_hold(&req->ctx->uring_lock)
{
- if (req->flags & REQ_F_FAIL) {
- io_req_complete_fail_submit(req);
+ if (unlikely(req->flags & REQ_F_FAIL)) {
+ /*
+ * We don't submit, fail them all, for that replace hardlinks
+ * with normal links. Extra REQ_F_LINK is tolerated.
+ */
+ req->flags &= ~REQ_F_HARDLINK;
+ req->flags |= REQ_F_LINK;
+ io_req_complete_failed(req, req->cqe.res);
} else if (unlikely(req->ctx->drain_active)) {
io_drain_req(req);
} else {
--
2.35.2
* [PATCH 14/14] io_uring: add data_race annotations
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (12 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 13/14] io_uring: inline io_req_complete_fail_submit() Pavel Begunkov
@ 2022-04-15 21:08 ` Pavel Begunkov
2022-04-18 1:24 ` (subset) [PATCH 00/14] submission path refactoring Jens Axboe
14 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-15 21:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We have several intentionally racy reads; mark them with data_race() to
document that fact.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index a828ac740fb6..0fc6135d43d7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2272,7 +2272,7 @@ static __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
* locked cache, grab the lock and move them over to our submission
* side cache.
*/
- if (READ_ONCE(ctx->locked_free_nr) > IO_COMPL_BATCH) {
+ if (data_race(ctx->locked_free_nr) > IO_COMPL_BATCH) {
io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
if (!io_req_cache_empty(ctx))
return true;
@@ -2566,8 +2566,8 @@ static void tctx_task_work(struct callback_head *cb)
handle_tw_list(node2, &ctx, &uring_locked);
cond_resched();
- if (!tctx->task_list.first &&
- !tctx->prior_task_list.first && uring_locked)
+ if (data_race(!tctx->task_list.first) &&
+ data_race(!tctx->prior_task_list.first) && uring_locked)
io_submit_flush_completions(ctx);
}
--
2.35.2
* Re: [PATCH 02/14] io_uring: add a hepler for putting rsrc nodes
2022-04-15 21:08 ` [PATCH 02/14] io_uring: add a hepler for putting rsrc nodes Pavel Begunkov
@ 2022-04-18 0:05 ` Jens Axboe
2022-04-18 1:22 ` Jens Axboe
0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2022-04-18 0:05 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 4/15/22 3:08 PM, Pavel Begunkov wrote:
> @@ -1337,21 +1342,21 @@ static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
> if (node == ctx->rsrc_node)
> ctx->rsrc_cached_refs++;
> else
> - percpu_ref_put(&node->refs);
> + io_rsrc_put_node(node, 1);
> }
> }
>
> static inline void io_req_put_rsrc(struct io_kiocb *req, struct io_ring_ctx *ctx)
> {
> if (req->rsrc_node)
> - percpu_ref_put(&req->rsrc_node->refs);
> + io_rsrc_put_node(req->rsrc_node, 1);
> }
What's this against? I have req->fixed_rsrc_refs here.
Also, typo in subject s/hepler/helper.
--
Jens Axboe
* Re: [PATCH 02/14] io_uring: add a hepler for putting rsrc nodes
2022-04-18 0:05 ` Jens Axboe
@ 2022-04-18 1:22 ` Jens Axboe
2022-04-18 9:08 ` Pavel Begunkov
0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2022-04-18 1:22 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 4/17/22 6:05 PM, Jens Axboe wrote:
> On 4/15/22 3:08 PM, Pavel Begunkov wrote:
>> @@ -1337,21 +1342,21 @@ static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
>> if (node == ctx->rsrc_node)
>> ctx->rsrc_cached_refs++;
>> else
>> - percpu_ref_put(&node->refs);
>> + io_rsrc_put_node(node, 1);
>> }
>> }
>>
>> static inline void io_req_put_rsrc(struct io_kiocb *req, struct io_ring_ctx *ctx)
>> {
>> if (req->rsrc_node)
>> - percpu_ref_put(&req->rsrc_node->refs);
>> + io_rsrc_put_node(req->rsrc_node, 1);
>> }
>
> What's this against? I have req->fixed_rsrc_refs here.
>
> Also, typo in subject s/hepler/helper.
As far as I can tell, this patch doesn't belong in this series and not
sure what happened here?
But for that series, let's drop 'ctx' from io_req_put_rsrc() as well as
it's unused.
--
Jens Axboe
* Re: (subset) [PATCH 00/14] submission path refactoring
2022-04-15 21:08 [PATCH 00/14] submission path refactoring Pavel Begunkov
` (13 preceding siblings ...)
2022-04-15 21:08 ` [PATCH 14/14] io_uring: add data_race annotations Pavel Begunkov
@ 2022-04-18 1:24 ` Jens Axboe
14 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2022-04-18 1:24 UTC (permalink / raw)
To: io-uring, asml.silence
On Fri, 15 Apr 2022 22:08:19 +0100, Pavel Begunkov wrote:
> Lots of cleanups, most of the patches improve the submission path.
>
> Pavel Begunkov (14):
> io_uring: clean poll tw PF_EXITING handling
> io_uring: add a hepler for putting rsrc nodes
> io_uring: minor refactoring for some tw handlers
> io_uring: kill io_put_req_deferred()
> io_uring: inline io_free_req()
> io_uring: helper for prep+queuing linked timeouts
> io_uring: inline io_queue_sqe()
> io_uring: rename io_queue_async_work()
> io_uring: refactor io_queue_sqe()
> io_uring: introduce IO_REQ_LINK_FLAGS
> io_uring: refactor lazy link fail
> io_uring: refactor io_submit_sqe()
> io_uring: inline io_req_complete_fail_submit()
> io_uring: add data_race annotations
>
> [...]
Applied, thanks!
[01/14] io_uring: clean poll tw PF_EXITING handling
commit: c68356048b630cb40f9a50aa7dd25a301bc5da9e
[03/14] io_uring: minor refactoring for some tw handlers
commit: b03080f869e11b96ca080dac354c0bf6b361a30c
[04/14] io_uring: kill io_put_req_deferred()
commit: 78bfbdd1a4977df1dded20f9783a6ec174e67ef8
[05/14] io_uring: inline io_free_req()
commit: aeedb0f3f9938b2084fe8c912782b031a37161fa
[06/14] io_uring: helper for prep+queuing linked timeouts
commit: 65e46eb620ad7fa187415b25638a7b3fb1bc0be2
[07/14] io_uring: inline io_queue_sqe()
commit: 4736d36c3adc99d7ab399c406fcd04a8666e9615
[08/14] io_uring: rename io_queue_async_work()
commit: 6c8d43e0f1375748e788d70cdecdf8ce9721e8fa
[09/14] io_uring: refactor io_queue_sqe()
commit: ceba3567006f5e932521b93d327d8626a0078be1
[10/14] io_uring: introduce IO_REQ_LINK_FLAGS
commit: ba0a753a0b63739fc7d53f9cab0f71b22c2afd92
[11/14] io_uring: refactor lazy link fail
commit: 0e2aeac59ae13b868836072f6840e0993c8991d3
[12/14] io_uring: refactor io_submit_sqe()
commit: b68c8c0108f53daad9a3ced38653fc586466f4ad
[13/14] io_uring: inline io_req_complete_fail_submit()
commit: aacdc8a67f52dc96bd9e36f85e7a99705020be3d
[14/14] io_uring: add data_race annotations
commit: e27fc3fb3d4722e2faa67a9bb340ac42d6bbdbea
Best regards,
--
Jens Axboe
* Re: [PATCH 02/14] io_uring: add a hepler for putting rsrc nodes
2022-04-18 1:22 ` Jens Axboe
@ 2022-04-18 9:08 ` Pavel Begunkov
0 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2022-04-18 9:08 UTC (permalink / raw)
To: Jens Axboe, io-uring
On 4/18/22 02:22, Jens Axboe wrote:
> On 4/17/22 6:05 PM, Jens Axboe wrote:
>> On 4/15/22 3:08 PM, Pavel Begunkov wrote:
>>> @@ -1337,21 +1342,21 @@ static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
>>> if (node == ctx->rsrc_node)
>>> ctx->rsrc_cached_refs++;
>>> else
>>> - percpu_ref_put(&node->refs);
>>> + io_rsrc_put_node(node, 1);
>>> }
>>> }
>>>
>>> static inline void io_req_put_rsrc(struct io_kiocb *req, struct io_ring_ctx *ctx)
>>> {
>>> if (req->rsrc_node)
>>> - percpu_ref_put(&req->rsrc_node->refs);
>>> + io_rsrc_put_node(req->rsrc_node, 1);
>>> }
>>
>> What's this against? I have req->fixed_rsrc_refs here.
>>
>> Also, typo in subject s/hepler/helper.
>
> As far as I can tell, this patch doesn't belong in this series and not
> sure what happened here?
Turns out 3 patches are missing from the series,
sorry, will resend what's left
> But for that series, let's drop 'ctx' from io_req_put_rsrc() as well as
> it's unused.
ok
--
Pavel Begunkov