* [PATCH V3 1/9] io_uring: add io_link_req() helper
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-05-11 0:12 ` [PATCH V3 2/9] io_uring: add io_submit_fail_link() helper Ming Lei
` (8 subsequent siblings)
9 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Add an io_link_sqe() helper so that io_submit_sqe() becomes more readable.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/io_uring.c | 41 +++++++++++++++++++++++++++--------------
1 file changed, 27 insertions(+), 14 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2675cffbd9a4..c02c9291a2df 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2172,19 +2172,11 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
return 0;
}
-static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
- const struct io_uring_sqe *sqe)
- __must_hold(&ctx->uring_lock)
+/*
+ * Return NULL if nothing to be queued, otherwise return request for queueing */
+static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
+ struct io_kiocb *req)
{
- struct io_submit_link *link = &ctx->submit_state.link;
- int ret;
-
- ret = io_init_req(ctx, req, sqe);
- if (unlikely(ret))
- return io_submit_fail_init(sqe, req, ret);
-
- trace_io_uring_submit_req(req);
-
/*
* If we already have a head request, queue this one for async
* submittal once the head completes. If we don't have a head but
@@ -2198,7 +2190,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
link->last = req;
if (req->flags & IO_REQ_LINK_FLAGS)
- return 0;
+ return NULL;
/* last request of the link, flush it */
req = link->head;
link->head = NULL;
@@ -2214,9 +2206,30 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
fallback:
io_queue_sqe_fallback(req);
}
- return 0;
+ return NULL;
}
+ return req;
+}
+
+static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ const struct io_uring_sqe *sqe)
+ __must_hold(&ctx->uring_lock)
+{
+ struct io_submit_link *link = &ctx->submit_state.link;
+ int ret;
+ ret = io_init_req(ctx, req, sqe);
+ if (unlikely(ret))
+ return io_submit_fail_init(sqe, req, ret);
+
+ trace_io_uring_submit_req(req);
+
+ if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
+ REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
+ req = io_link_sqe(link, req);
+ if (!req)
+ return 0;
+ }
io_queue_sqe(req);
return 0;
}
--
2.42.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH V3 2/9] io_uring: add io_submit_fail_link() helper
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
2024-05-11 0:12 ` [PATCH V3 1/9] io_uring: add io_link_req() helper Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-05-11 0:12 ` [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe() Ming Lei
` (7 subsequent siblings)
9 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Add an io_submit_fail_link() helper and move the link failure logic into it.
This simplifies io_submit_fail_init() and makes it easier to add the sqe
group failure logic.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/io_uring.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c02c9291a2df..d3b9988cdae4 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2135,22 +2135,17 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
return def->prep(req, sqe);
}
-static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+static __cold int io_submit_fail_link(struct io_submit_link *link,
struct io_kiocb *req, int ret)
{
- struct io_ring_ctx *ctx = req->ctx;
- struct io_submit_link *link = &ctx->submit_state.link;
struct io_kiocb *head = link->head;
- trace_io_uring_req_failed(sqe, req, ret);
-
/*
* Avoid breaking links in the middle as it renders links with SQPOLL
* unusable. Instead of failing eagerly, continue assembling the link if
* applicable and mark the head with REQ_F_FAIL. The link flushing code
* should find the flag and handle the rest.
*/
- req_fail_link_node(req, ret);
if (head && !(head->flags & REQ_F_FAIL))
req_fail_link_node(head, -ECANCELED);
@@ -2169,9 +2164,24 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
else
link->head = req;
link->last = req;
+
return 0;
}
+static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+ struct io_kiocb *req, int ret)
+{
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_submit_link *link = &ctx->submit_state.link;
+
+ trace_io_uring_req_failed(sqe, req, ret);
+
+ req_fail_link_node(req, ret);
+
+ /* cover both linked and non-linked request */
+ return io_submit_fail_link(link, req, ret);
+}
+
/*
* Return NULL if nothing to be queued, otherwise return request for queueing */
static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
--
2.42.0
* [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe()
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
2024-05-11 0:12 ` [PATCH V3 1/9] io_uring: add io_link_req() helper Ming Lei
2024-05-11 0:12 ` [PATCH V3 2/9] io_uring: add io_submit_fail_link() helper Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-06-10 1:18 ` Pavel Begunkov
2024-05-11 0:12 ` [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req() Ming Lei
` (6 subsequent siblings)
9 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Add an io_req_commit_cqe() helper that can be used to post CQEs from both
__io_submit_flush_completions() and io_req_complete_post().
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/io_uring.c | 34 ++++++++++++++++++++--------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d3b9988cdae4..e4be930e0f1e 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -910,6 +910,22 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
return posted;
}
+static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
+ bool lockless_cq)
+{
+ struct io_ring_ctx *ctx = req->ctx;
+
+ if (unlikely(!io_fill_cqe_req(ctx, req))) {
+ if (lockless_cq) {
+ spin_lock(&ctx->completion_lock);
+ io_req_cqe_overflow(req);
+ spin_unlock(&ctx->completion_lock);
+ } else {
+ io_req_cqe_overflow(req);
+ }
+ }
+}
+
static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
{
struct io_ring_ctx *ctx = req->ctx;
@@ -932,10 +948,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
}
io_cq_lock(ctx);
- if (!(req->flags & REQ_F_CQE_SKIP)) {
- if (!io_fill_cqe_req(ctx, req))
- io_req_cqe_overflow(req);
- }
+ if (!(req->flags & REQ_F_CQE_SKIP))
+ io_req_commit_cqe(req, false);
io_cq_unlock_post(ctx);
/*
@@ -1454,16 +1468,8 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
- if (!(req->flags & REQ_F_CQE_SKIP) &&
- unlikely(!io_fill_cqe_req(ctx, req))) {
- if (ctx->lockless_cq) {
- spin_lock(&ctx->completion_lock);
- io_req_cqe_overflow(req);
- spin_unlock(&ctx->completion_lock);
- } else {
- io_req_cqe_overflow(req);
- }
- }
+ if (!(req->flags & REQ_F_CQE_SKIP))
+ io_req_commit_cqe(req, ctx->lockless_cq);
}
__io_cq_unlock_post(ctx);
--
2.42.0
* Re: [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe()
2024-05-11 0:12 ` [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe() Ming Lei
@ 2024-06-10 1:18 ` Pavel Begunkov
2024-06-11 13:21 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-10 1:18 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf
On 5/11/24 01:12, Ming Lei wrote:
> Add helper of io_req_commit_cqe() which can be used in posting CQE
> from both __io_submit_flush_completions() and io_req_complete_post().
Please drop this patch and inline further changes into these two callers.
They have different locking rules and different hotness, and the code is
better left duplicated until it is cleaned up in a proper way.
> Signed-off-by: Ming Lei <[email protected]>
> ---
> io_uring/io_uring.c | 34 ++++++++++++++++++++--------------
> 1 file changed, 20 insertions(+), 14 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index d3b9988cdae4..e4be930e0f1e 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -910,6 +910,22 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
> return posted;
> }
>
> +static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
> + bool lockless_cq)
> +{
> + struct io_ring_ctx *ctx = req->ctx;
> +
> + if (unlikely(!io_fill_cqe_req(ctx, req))) {
> + if (lockless_cq) {
> + spin_lock(&ctx->completion_lock);
> + io_req_cqe_overflow(req);
> + spin_unlock(&ctx->completion_lock);
> + } else {
> + io_req_cqe_overflow(req);
> + }
> + }
> +}
> +
> static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> {
> struct io_ring_ctx *ctx = req->ctx;
> @@ -932,10 +948,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> }
>
> io_cq_lock(ctx);
> - if (!(req->flags & REQ_F_CQE_SKIP)) {
> - if (!io_fill_cqe_req(ctx, req))
> - io_req_cqe_overflow(req);
> - }
> + if (!(req->flags & REQ_F_CQE_SKIP))
> + io_req_commit_cqe(req, false);
> io_cq_unlock_post(ctx);
>
> /*
> @@ -1454,16 +1468,8 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> struct io_kiocb *req = container_of(node, struct io_kiocb,
> comp_list);
>
> - if (!(req->flags & REQ_F_CQE_SKIP) &&
> - unlikely(!io_fill_cqe_req(ctx, req))) {
> - if (ctx->lockless_cq) {
> - spin_lock(&ctx->completion_lock);
> - io_req_cqe_overflow(req);
> - spin_unlock(&ctx->completion_lock);
> - } else {
> - io_req_cqe_overflow(req);
> - }
> - }
> + if (!(req->flags & REQ_F_CQE_SKIP))
> + io_req_commit_cqe(req, ctx->lockless_cq);
> }
> __io_cq_unlock_post(ctx);
>
--
Pavel Begunkov
* Re: [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe()
2024-06-10 1:18 ` Pavel Begunkov
@ 2024-06-11 13:21 ` Ming Lei
0 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-06-11 13:21 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On Mon, Jun 10, 2024 at 02:18:34AM +0100, Pavel Begunkov wrote:
> On 5/11/24 01:12, Ming Lei wrote:
> > Add helper of io_req_commit_cqe() which can be used in posting CQE
> > from both __io_submit_flush_completions() and io_req_complete_post().
>
> Please drop this patch and inline further changes into this
> two callers. There are different locking rules, different
> hotness, and should better be left duplicated until cleaned
> up in a proper way.
Yes, the helper is just to make the following code cleaner and more readable.
It actually changes nothing for __io_submit_flush_completions(), and
io_req_complete_post() can be considered a non-fast path. We may keep the
helper optimized only for __io_submit_flush_completions() while merely
covering io_req_complete_post().
Thanks,
Ming
* [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req()
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (2 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 3/9] io_uring: add helper of io_req_commit_cqe() Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-06-10 1:23 ` Pavel Begunkov
2024-05-11 0:12 ` [PATCH V3 5/9] io_uring: support SQE group Ming Lei
` (5 subsequent siblings)
9 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Prepare for supporting sqe groups, which require posting the group leader's
CQE only after all members' CQEs have been posted. For a group leader request
we can't do that in io_req_complete_post(), and REQ_F_CQE_SKIP can't be set
in io_free_req().
So move marking REQ_F_CQE_SKIP out of io_free_req().
No functional change.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/io_uring.c | 5 +++--
io_uring/timeout.c | 3 +++
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e4be930e0f1e..c184c9a312df 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1027,8 +1027,6 @@ __cold void io_free_req(struct io_kiocb *req)
{
/* refs were already put, restore them for io_req_task_complete() */
req->flags &= ~REQ_F_REFCOUNT;
- /* we only want to free it, don't post CQEs */
- req->flags |= REQ_F_CQE_SKIP;
req->io_task_work.func = io_req_task_complete;
io_req_task_work_add(req);
}
@@ -1797,6 +1795,9 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
if (req_ref_put_and_test(req)) {
if (req->flags & IO_REQ_LINK_FLAGS)
nxt = io_req_find_next(req);
+
+ /* we have posted CQEs in io_req_complete_post() */
+ req->flags |= REQ_F_CQE_SKIP;
io_free_req(req);
}
return nxt ? &nxt->work : NULL;
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 1c9bf07499b1..202f540aa314 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -47,6 +47,9 @@ static inline void io_put_req(struct io_kiocb *req)
{
if (req_ref_put_and_test(req)) {
io_queue_next(req);
+
+ /* we only want to free it, don't post CQEs */
+ req->flags |= REQ_F_CQE_SKIP;
io_free_req(req);
}
}
--
2.42.0
* Re: [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req()
2024-05-11 0:12 ` [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req() Ming Lei
@ 2024-06-10 1:23 ` Pavel Begunkov
2024-06-11 13:28 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-10 1:23 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf
On 5/11/24 01:12, Ming Lei wrote:
> Prepare for supporting sqe group, which requires to post group leader's
> CQE after all members' CQEs are posted. For group leader request, we can't
> do that in io_req_complete_post, and REQ_F_CQE_SKIP can't be set in
> io_free_req().
Can you elaborate what exactly we can't do and why?
> So move marking REQ_F_CQE_SKIP out of io_free_req().
That makes io_free_req() a very confusing function: its name says it just
frees the request, but in reality it can post a CQE. If you really need
this, just add a new function.
> No functional change.
>
> Signed-off-by: Ming Lei <[email protected]>
> ---
> io_uring/io_uring.c | 5 +++--
> io_uring/timeout.c | 3 +++
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index e4be930e0f1e..c184c9a312df 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1027,8 +1027,6 @@ __cold void io_free_req(struct io_kiocb *req)
> {
> /* refs were already put, restore them for io_req_task_complete() */
> req->flags &= ~REQ_F_REFCOUNT;
> - /* we only want to free it, don't post CQEs */
> - req->flags |= REQ_F_CQE_SKIP;
> req->io_task_work.func = io_req_task_complete;
> io_req_task_work_add(req);
> }
> @@ -1797,6 +1795,9 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> if (req_ref_put_and_test(req)) {
> if (req->flags & IO_REQ_LINK_FLAGS)
> nxt = io_req_find_next(req);
> +
> + /* we have posted CQEs in io_req_complete_post() */
> + req->flags |= REQ_F_CQE_SKIP;
> io_free_req(req);
> }
> return nxt ? &nxt->work : NULL;
> diff --git a/io_uring/timeout.c b/io_uring/timeout.c
> index 1c9bf07499b1..202f540aa314 100644
> --- a/io_uring/timeout.c
> +++ b/io_uring/timeout.c
> @@ -47,6 +47,9 @@ static inline void io_put_req(struct io_kiocb *req)
> {
> if (req_ref_put_and_test(req)) {
> io_queue_next(req);
> +
> + /* we only want to free it, don't post CQEs */
> + req->flags |= REQ_F_CQE_SKIP;
> io_free_req(req);
> }
> }
--
Pavel Begunkov
* Re: [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req()
2024-06-10 1:23 ` Pavel Begunkov
@ 2024-06-11 13:28 ` Ming Lei
2024-06-16 18:08 ` Pavel Begunkov
0 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-06-11 13:28 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On Mon, Jun 10, 2024 at 02:23:50AM +0100, Pavel Begunkov wrote:
> On 5/11/24 01:12, Ming Lei wrote:
> > Prepare for supporting sqe group, which requires to post group leader's
> > CQE after all members' CQEs are posted. For group leader request, we can't
> > do that in io_req_complete_post, and REQ_F_CQE_SKIP can't be set in
> > io_free_req().
>
> Can you elaborate what exactly we can't do and why?
The group leader's CQE is always posted after the other members' CQEs are posted.
>
> > So move marking REQ_F_CQE_SKIP out of io_free_req().
>
> That makes io_free_req() a very confusing function, it tells
> that it just frees the request but in reality can post a
> CQE. If you really need it, just add a new function.
io_free_req() never posts a CQE.
This patch helps move the setting of REQ_F_CQE_SKIP next to the actual CQE
posting code, which makes the current code more readable.
Thanks,
Ming
* Re: [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req()
2024-06-11 13:28 ` Ming Lei
@ 2024-06-16 18:08 ` Pavel Begunkov
0 siblings, 0 replies; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-16 18:08 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On 6/11/24 14:28, Ming Lei wrote:
> On Mon, Jun 10, 2024 at 02:23:50AM +0100, Pavel Begunkov wrote:
>> On 5/11/24 01:12, Ming Lei wrote:
>>> Prepare for supporting sqe group, which requires to post group leader's
>>> CQE after all members' CQEs are posted. For group leader request, we can't
>>> do that in io_req_complete_post, and REQ_F_CQE_SKIP can't be set in
>>> io_free_req().
>>
>> Can you elaborate what exactly we can't do and why?
>
> group leader's CQE is always posted after other members are posted.
>
>>
>>> So move marking REQ_F_CQE_SKIP out of io_free_req().
>>
>> That makes io_free_req() a very confusing function, it tells
>> that it just frees the request but in reality can post a
>> CQE. If you really need it, just add a new function.
>
> io_free_req() never posts CQE.
Right, that's the intention, and that's why it sets REQ_F_CQE_SKIP. Without
it, even if you patch all call sites so that they set the flag themselves,
it turns into a misleading function.
> This patch can help to move setting REQ_F_CQE_SKIP around
> real post code, and it can make current code more readable.
--
Pavel Begunkov
* [PATCH V3 5/9] io_uring: support SQE group
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (3 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 4/9] io_uring: move marking REQ_F_CQE_SKIP out of io_free_req() Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-05-21 2:58 ` Ming Lei
2024-06-10 2:53 ` Pavel Begunkov
2024-05-11 0:12 ` [PATCH V3 6/9] io_uring: support sqe group with members depending on leader Ming Lei
` (4 subsequent siblings)
9 siblings, 2 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
An SQE group is defined as a chain of SQEs starting with the first SQE that
has IOSQE_SQE_GROUP set and ending with the first subsequent SQE that doesn't
have it set, similar to a chain of linked SQEs.
Unlike linked SQEs, where each SQE is issued only after the previous one
completes, all SQEs in one group are submitted in parallel, so there is no
dependency among the SQEs in one group.
The 1st SQE is the group leader, and the other SQEs are group members. The
whole group shares the single IOSQE_IO_LINK and IOSQE_IO_DRAIN of the group
leader; the two flags are ignored for group members.
When the group is part of a link chain, the group isn't submitted until the
previous SQE or group completes, and the following SQE or group can't be
started until this group completes. Failure of any group member fails the
group leader, so that the link chain can be terminated.
When IOSQE_IO_DRAIN is set on the group leader, all requests in this group
and all previously submitted requests are drained. Given that IOSQE_IO_DRAIN
can be set on the group leader only, we respect IO_DRAIN by always completing
the group leader as the last request in the group.
Working together with IOSQE_IO_LINK, SQE groups provide a flexible way to
support N:M dependencies, such as:
- group A is chained with group B together
- group A has N SQEs
- group B has M SQEs
then M SQEs in group B depend on N SQEs in group A.
N:M dependencies support some interesting use cases efficiently:
1) read from multiple files, then write the read data into single file
2) read from single file, and write the read data into multiple files
3) write the same data to multiple files, then read it back from those files
and verify that the correct data was written
Also, IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we can still
extend sqe->flags via a per-uring-context flag, for example by using __pad3
for non-uring_cmd opcodes and part of uring_cmd_flags for the uring_cmd
opcode.
Suggested-by: Kevin Wolf <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 12 ++
include/uapi/linux/io_uring.h | 4 +
io_uring/io_uring.c | 255 +++++++++++++++++++++++++++++++--
io_uring/io_uring.h | 16 +++
io_uring/timeout.c | 2 +
5 files changed, 277 insertions(+), 12 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 7a6b190c7da7..62311b0f0e0b 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -202,6 +202,8 @@ struct io_submit_state {
/* batch completion logic */
struct io_wq_work_list compl_reqs;
struct io_submit_link link;
+ /* points to current group */
+ struct io_submit_link group;
bool plug_started;
bool need_plug;
@@ -442,6 +444,7 @@ enum {
REQ_F_FORCE_ASYNC_BIT = IOSQE_ASYNC_BIT,
REQ_F_BUFFER_SELECT_BIT = IOSQE_BUFFER_SELECT_BIT,
REQ_F_CQE_SKIP_BIT = IOSQE_CQE_SKIP_SUCCESS_BIT,
+ REQ_F_SQE_GROUP_BIT = IOSQE_SQE_GROUP_BIT,
/* first byte is taken by user flags, shift it to not overlap */
REQ_F_FAIL_BIT = 8,
@@ -473,6 +476,7 @@ enum {
REQ_F_BL_EMPTY_BIT,
REQ_F_BL_NO_RECYCLE_BIT,
REQ_F_BUFFERS_COMMIT_BIT,
+ REQ_F_SQE_GROUP_LEADER_BIT,
/* not a real bit, just to check we're not overflowing the space */
__REQ_F_LAST_BIT,
@@ -496,6 +500,8 @@ enum {
REQ_F_BUFFER_SELECT = IO_REQ_FLAG(REQ_F_BUFFER_SELECT_BIT),
/* IOSQE_CQE_SKIP_SUCCESS */
REQ_F_CQE_SKIP = IO_REQ_FLAG(REQ_F_CQE_SKIP_BIT),
+ /* IOSQE_SQE_GROUP */
+ REQ_F_SQE_GROUP = IO_REQ_FLAG(REQ_F_SQE_GROUP_BIT),
/* fail rest of links */
REQ_F_FAIL = IO_REQ_FLAG(REQ_F_FAIL_BIT),
@@ -553,6 +559,8 @@ enum {
REQ_F_BL_NO_RECYCLE = IO_REQ_FLAG(REQ_F_BL_NO_RECYCLE_BIT),
/* buffer ring head needs incrementing on put */
REQ_F_BUFFERS_COMMIT = IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
+ /* sqe group lead */
+ REQ_F_SQE_GROUP_LEADER = IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
};
typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -666,6 +674,10 @@ struct io_kiocb {
u64 extra1;
u64 extra2;
} big_cqe;
+
+ /* all SQE group members linked here for group lead */
+ struct io_kiocb *grp_link;
+ int grp_refs;
};
struct io_overflow_cqe {
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 994bf7af0efe..2b99d9d0b93e 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -124,6 +124,7 @@ enum io_uring_sqe_flags_bit {
IOSQE_ASYNC_BIT,
IOSQE_BUFFER_SELECT_BIT,
IOSQE_CQE_SKIP_SUCCESS_BIT,
+ IOSQE_SQE_GROUP_BIT,
};
/*
@@ -143,6 +144,8 @@ enum io_uring_sqe_flags_bit {
#define IOSQE_BUFFER_SELECT (1U << IOSQE_BUFFER_SELECT_BIT)
/* don't post CQE if request succeeded */
#define IOSQE_CQE_SKIP_SUCCESS (1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
+/* defines sqe group */
+#define IOSQE_SQE_GROUP (1U << IOSQE_SQE_GROUP_BIT)
/*
* io_uring_setup() flags
@@ -540,6 +543,7 @@ struct io_uring_params {
#define IORING_FEAT_LINKED_FILE (1U << 12)
#define IORING_FEAT_REG_REG_RING (1U << 13)
#define IORING_FEAT_RECVSEND_BUNDLE (1U << 14)
+#define IORING_FEAT_SQE_GROUP (1U << 15)
/*
* io_uring_register(2) opcodes and arguments
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c184c9a312df..b87c5452de43 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -109,7 +109,8 @@
IOSQE_IO_HARDLINK | IOSQE_ASYNC)
#define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
- IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+ IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS | \
+ IOSQE_SQE_GROUP)
#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
@@ -915,6 +916,13 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
{
struct io_ring_ctx *ctx = req->ctx;
+ /*
+ * For group leader, cqe has to be committed after all members are
+ * committed, when the request becomes normal one.
+ */
+ if (unlikely(req_is_group_leader(req)))
+ return;
+
if (unlikely(!io_fill_cqe_req(ctx, req))) {
if (lockless_cq) {
spin_lock(&ctx->completion_lock);
@@ -926,6 +934,116 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
}
}
+static inline bool need_queue_group_members(struct io_kiocb *req)
+{
+ return req_is_group_leader(req) && req->grp_link;
+}
+
+/* Can only be called after this request is issued */
+static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
+{
+ if (req->flags & REQ_F_SQE_GROUP) {
+ if (req_is_group_leader(req))
+ return req;
+ return req->grp_link;
+ }
+ return NULL;
+}
+
+void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
+{
+ struct io_kiocb *member = req->grp_link;
+
+ while (member) {
+ struct io_kiocb *next = member->grp_link;
+
+ if (ignore_cqes)
+ member->flags |= REQ_F_CQE_SKIP;
+ if (!(member->flags & REQ_F_FAIL)) {
+ req_set_fail(member);
+ io_req_set_res(member, -ECANCELED, 0);
+ }
+ member = next;
+ }
+}
+
+void io_queue_group_members(struct io_kiocb *req, bool async)
+{
+ struct io_kiocb *member = req->grp_link;
+
+ if (!member)
+ return;
+
+ while (member) {
+ struct io_kiocb *next = member->grp_link;
+
+ member->grp_link = req;
+ if (async)
+ member->flags |= REQ_F_FORCE_ASYNC;
+
+ if (unlikely(member->flags & REQ_F_FAIL)) {
+ io_req_task_queue_fail(member, member->cqe.res);
+ } else if (member->flags & REQ_F_FORCE_ASYNC) {
+ io_req_task_queue(member);
+ } else {
+ io_queue_sqe(member);
+ }
+ member = next;
+ }
+ req->grp_link = NULL;
+}
+
+static inline bool __io_complete_group_req(struct io_kiocb *req,
+ struct io_kiocb *lead)
+{
+ WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
+
+ if (WARN_ON_ONCE(lead->grp_refs <= 0))
+ return false;
+
+ /*
+ * Set linked leader as failed if any member is failed, so
+ * the remained link chain can be terminated
+ */
+ if (unlikely((req->flags & REQ_F_FAIL) &&
+ ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
+ req_set_fail(lead);
+ return !--lead->grp_refs;
+}
+
+/* Complete group request and collect completed leader for freeing */
+static inline void io_complete_group_req(struct io_kiocb *req,
+ struct io_wq_work_list *grp_list)
+{
+ struct io_kiocb *lead = get_group_leader(req);
+
+ if (__io_complete_group_req(req, lead)) {
+ req->flags &= ~REQ_F_SQE_GROUP;
+ lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
+ if (!(lead->flags & REQ_F_CQE_SKIP))
+ io_req_commit_cqe(lead, lead->ctx->lockless_cq);
+
+ if (req != lead) {
+ /*
+ * Add leader to free list if it isn't there
+ * otherwise clearing group flag for freeing it
+ * in current batch
+ */
+ if (!(lead->flags & REQ_F_SQE_GROUP))
+ wq_list_add_tail(&lead->comp_list, grp_list);
+ else
+ lead->flags &= ~REQ_F_SQE_GROUP;
+ }
+ } else if (req != lead) {
+ req->flags &= ~REQ_F_SQE_GROUP;
+ } else {
+ /*
+ * Leader's group flag clearing is delayed until it is
+ * removed from free list
+ */
+ }
+}
+
static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
{
struct io_ring_ctx *ctx = req->ctx;
@@ -1427,6 +1545,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
comp_list);
if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
+ /*
+ * Group leader may be removed twice, don't free it
+ * if group flag isn't cleared, when some members
+ * aren't completed yet
+ */
+ if (req->flags & REQ_F_SQE_GROUP) {
+ node = req->comp_list.next;
+ req->flags &= ~REQ_F_SQE_GROUP;
+ continue;
+ }
+
if (req->flags & REQ_F_REFCOUNT) {
node = req->comp_list.next;
if (!req_ref_put_and_test(req))
@@ -1459,6 +1588,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
__must_hold(&ctx->uring_lock)
{
struct io_submit_state *state = &ctx->submit_state;
+ struct io_wq_work_list grp_list = {NULL};
struct io_wq_work_node *node;
__io_cq_lock(ctx);
@@ -1468,9 +1598,15 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
if (!(req->flags & REQ_F_CQE_SKIP))
io_req_commit_cqe(req, ctx->lockless_cq);
+
+ if (req->flags & REQ_F_SQE_GROUP)
+ io_complete_group_req(req, &grp_list);
}
__io_cq_unlock_post(ctx);
+ if (!wq_list_empty(&grp_list))
+ __wq_list_splice(&grp_list, state->compl_reqs.first);
+
if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
io_free_batch_list(ctx, state->compl_reqs.first);
INIT_WQ_LIST(&state->compl_reqs);
@@ -1677,8 +1813,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
struct io_kiocb *cur;
/* need original cached_sq_head, but it was increased for each req */
- io_for_each_link(cur, req)
- seq--;
+ io_for_each_link(cur, req) {
+ if (req_is_group_leader(cur))
+ seq -= cur->grp_refs;
+ else
+ seq--;
+ }
return seq;
}
@@ -1793,11 +1933,20 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
struct io_kiocb *nxt = NULL;
if (req_ref_put_and_test(req)) {
- if (req->flags & IO_REQ_LINK_FLAGS)
- nxt = io_req_find_next(req);
+ /*
+ * CQEs have been posted in io_req_complete_post() except
+ * for group leader, and we can't advance the link for
+ * group leader until its CQE is posted.
+ *
+ * TODO: try to avoid defer and complete leader in io_wq
+ * context directly
+ */
+ if (!req_is_group_leader(req)) {
+ req->flags |= REQ_F_CQE_SKIP;
+ if (req->flags & IO_REQ_LINK_FLAGS)
+ nxt = io_req_find_next(req);
+ }
- /* we have posted CQEs in io_req_complete_post() */
- req->flags |= REQ_F_CQE_SKIP;
io_free_req(req);
}
return nxt ? &nxt->work : NULL;
@@ -1863,6 +2012,8 @@ void io_wq_submit_work(struct io_wq_work *work)
}
}
+ if (need_queue_group_members(req))
+ io_queue_group_members(req, true);
do {
ret = io_issue_sqe(req, issue_flags);
if (ret != -EAGAIN)
@@ -1977,6 +2128,9 @@ static inline void io_queue_sqe(struct io_kiocb *req)
*/
if (unlikely(ret))
io_queue_async(req, ret);
+
+ if (need_queue_group_members(req))
+ io_queue_group_members(req, false);
}
static void io_queue_sqe_fallback(struct io_kiocb *req)
@@ -2142,6 +2296,56 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
return def->prep(req, sqe);
}
+static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
+ struct io_kiocb *req)
+{
+ /*
+ * Group chain is similar with link chain: starts with 1st sqe with
+ * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
+ */
+ if (group->head) {
+ struct io_kiocb *lead = group->head;
+
+ /* members can't be in link chain, can't be drained */
+ req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);
+ lead->grp_refs += 1;
+ group->last->grp_link = req;
+ group->last = req;
+
+ if (req->flags & REQ_F_SQE_GROUP)
+ return NULL;
+
+ req->grp_link = NULL;
+ req->flags |= REQ_F_SQE_GROUP;
+ group->head = NULL;
+ return lead;
+ } else if (req->flags & REQ_F_SQE_GROUP) {
+ group->head = req;
+ group->last = req;
+ req->grp_refs = 1;
+ req->flags |= REQ_F_SQE_GROUP_LEADER;
+ return NULL;
+ } else {
+ return req;
+ }
+}
+
+static __cold struct io_kiocb *io_submit_fail_group(
+ struct io_submit_link *link, struct io_kiocb *req)
+{
+ struct io_kiocb *lead = link->head;
+
+ /*
+ * Instead of failing eagerly, continue assembling the group link
+ * if applicable and mark the leader with REQ_F_FAIL. The group
+ * flushing code should find the flag and handle the rest
+ */
+ if (lead && (lead->flags & IO_REQ_LINK_FLAGS) && !(lead->flags & REQ_F_FAIL))
+ req_fail_link_node(lead, -ECANCELED);
+
+ return io_group_sqe(link, req);
+}
+
static __cold int io_submit_fail_link(struct io_submit_link *link,
struct io_kiocb *req, int ret)
{
@@ -2180,11 +2384,18 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
{
struct io_ring_ctx *ctx = req->ctx;
struct io_submit_link *link = &ctx->submit_state.link;
+ struct io_submit_link *group = &ctx->submit_state.group;
trace_io_uring_req_failed(sqe, req, ret);
req_fail_link_node(req, ret);
+ if (group->head || (req->flags & REQ_F_SQE_GROUP)) {
+ req = io_submit_fail_group(group, req);
+ if (!req)
+ return 0;
+ }
+
/* cover both linked and non-linked request */
return io_submit_fail_link(link, req, ret);
}
@@ -2232,7 +2443,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
const struct io_uring_sqe *sqe)
__must_hold(&ctx->uring_lock)
{
- struct io_submit_link *link = &ctx->submit_state.link;
+ struct io_submit_state *state = &ctx->submit_state;
int ret;
ret = io_init_req(ctx, req, sqe);
@@ -2241,9 +2452,17 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
trace_io_uring_submit_req(req);
- if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
- REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
- req = io_link_sqe(link, req);
+ if (unlikely(state->group.head ||
+ (req->flags & REQ_F_SQE_GROUP))) {
+ req = io_group_sqe(&state->group, req);
+ if (!req)
+ return 0;
+ }
+
+ if (unlikely(state->link.head ||
+ (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_FORCE_ASYNC |
+ REQ_F_FAIL)))) {
+ req = io_link_sqe(&state->link, req);
if (!req)
return 0;
}
@@ -2258,6 +2477,17 @@ static void io_submit_state_end(struct io_ring_ctx *ctx)
{
struct io_submit_state *state = &ctx->submit_state;
+ /* the last member must set REQ_F_SQE_GROUP */
+ if (unlikely(state->group.head)) {
+ struct io_kiocb *lead = state->group.head;
+
+ state->group.last->grp_link = NULL;
+ if (lead->flags & IO_REQ_LINK_FLAGS)
+ io_link_sqe(&state->link, lead);
+ else
+ io_queue_sqe_fallback(lead);
+ }
+
if (unlikely(state->link.head))
io_queue_sqe_fallback(state->link.head);
/* flush only after queuing links as they can generate completions */
@@ -2277,6 +2507,7 @@ static void io_submit_state_start(struct io_submit_state *state,
state->submit_nr = max_ios;
/* set only head, no need to init link_last in advance */
state->link.head = NULL;
+ state->group.head = NULL;
}
static void io_commit_sqring(struct io_ring_ctx *ctx)
@@ -3601,7 +3832,7 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
- IORING_FEAT_RECVSEND_BUNDLE;
+ IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_SQE_GROUP;
if (copy_to_user(params, p, sizeof(*p))) {
ret = -EFAULT;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 624ca9076a50..b11db3bdd8d8 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -67,6 +67,8 @@ void io_req_defer_failed(struct io_kiocb *req, s32 res);
bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+void io_queue_group_members(struct io_kiocb *req, bool async);
+void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes);
struct file *io_file_get_normal(struct io_kiocb *req, int fd);
struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
@@ -342,6 +344,16 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
lockdep_assert_held(&ctx->uring_lock);
}
+static inline bool req_is_group_leader(struct io_kiocb *req)
+{
+ return req->flags & REQ_F_SQE_GROUP_LEADER;
+}
+
+static inline bool req_is_group_member(struct io_kiocb *req)
+{
+ return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
+}
+
/*
* Don't complete immediately but use deferred completion infrastructure.
* Protected by ->uring_lock and can only be used either with
@@ -355,6 +367,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
lockdep_assert_held(&req->ctx->uring_lock);
wq_list_add_tail(&req->comp_list, &state->compl_reqs);
+
+ /* members may not be issued when leader is completed */
+ if (unlikely(req_is_group_leader(req) && req->grp_link))
+ io_queue_group_members(req, false);
}
static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 202f540aa314..2496bfa64948 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -171,6 +171,8 @@ static void io_fail_links(struct io_kiocb *req)
link->flags |= REQ_F_CQE_SKIP;
else
link->flags &= ~REQ_F_CQE_SKIP;
+ if (req_is_group_leader(link))
+ io_cancel_group_members(link, ignore_cqes);
trace_io_uring_fail_link(req, link);
link = link->link;
}
--
2.42.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-05-11 0:12 ` [PATCH V3 5/9] io_uring: support SQE group Ming Lei
@ 2024-05-21 2:58 ` Ming Lei
2024-06-10 1:55 ` Pavel Begunkov
2024-06-10 2:53 ` Pavel Begunkov
1 sibling, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-05-21 2:58 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, ming.lei
On Sat, May 11, 2024 at 08:12:08AM +0800, Ming Lei wrote:
> SQE group is defined as one chain of SQEs starting with the first SQE that
> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> doesn't have it set; it is similar to a chain of linked SQEs.
>
> Unlike linked SQEs, where each SQE is issued only after the previous one
> completes, all SQEs in one group are submitted in parallel, so there isn't
> any dependency among SQEs in one group.
>
> The 1st SQE is the group leader, and the other SQEs are group members. The
> whole group shares a single IOSQE_IO_LINK and IOSQE_IO_DRAIN from the group
> leader, and the two flags are ignored for group members.
>
> When the group is in a link chain, the group isn't submitted until the
> previous SQE or group is completed. And the following SQE or group can't
> be started if this group isn't completed. Failure of any group member will
> fail the group leader, so the link chain can be terminated.
>
> When IOSQE_IO_DRAIN is set for the group leader, all requests in this group
> and all previously submitted requests are drained. Given IOSQE_IO_DRAIN can
> be set for the group leader only, we respect IO_DRAIN by always completing
> the group leader as the last one in the group.
>
> Working together with IOSQE_IO_LINK, SQE group provides a flexible way to
> support N:M dependencies, such as:
>
> - group A is chained with group B together
> - group A has N SQEs
> - group B has M SQEs
>
> then the M SQEs in group B depend on the N SQEs in group A.
>
> N:M dependencies can support some interesting use cases in an efficient way:
>
> 1) read from multiple files, then write the read data into a single file
>
> 2) read from a single file, and write the read data into multiple files
>
> 3) write the same data into multiple files, then read the data back from
> the multiple files and verify that the correct data was written
>
> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we can still
> extend sqe->flags with one uring context flag, such as using __pad3 for
> non-uring_cmd OPs and part of uring_cmd_flags for the uring_cmd OP.
>
> Suggested-by: Kevin Wolf <[email protected]>
> Signed-off-by: Ming Lei <[email protected]>
BTW, I wrote a link-grp-cp.c liburing example based on SQE group. It keeps
QD unchanged and just reorganizes the IOs in the following way:
- each group has 4 READ IOs, linked to one single WRITE IO that writes
the data read by the group to the destination file
- the first 12 groups have (4 + 1) IOs, and the last group has (3 + 1)
IOs
Running the example to copy between two block devices (from virtio-blk to
virtio-scsi in my test VM):
1) buffered copy:
- perf is improved by 5%
2) direct IO mode
- perf is improved by 27%
[1] link-grp-cp.c example
https://github.com/ming1/liburing/commits/sqe_group_v2/
[2] one bug fix (top commit) against V3
https://github.com/ming1/linux/commits/io_uring_sqe_group_v3/
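The ordering rules above (group members issued in parallel, the leader's CQE posted only after all members complete, and a linked successor started only after the whole group is done) can be sketched as a toy model. This is only an illustration of the proposed semantics, not kernel code, and all names are made up:

```python
def complete_chain(groups):
    """Model CQE ordering for a link chain of SQE groups.

    groups: list of (leader, members) pairs chained via IOSQE_IO_LINK.
    Members of one group have no ordering among themselves, so any
    permutation of them would be valid; we model one arbitrary order.
    """
    cqes = []
    for leader, members in groups:
        cqes.extend(members)  # members complete first, in any order
        cqes.append(leader)   # leader completes last (respects IO_DRAIN)
        # the next group in the link chain starts only after this point
    return cqes

# group A with 3 members, linked to group B with 2 members
order = complete_chain([("A-lead", ["A1", "A2", "A3"]),
                        ("B-lead", ["B1", "B2"])])
```

In this model, any CQE of group B can only appear after the leader of group A, which is exactly the N:M dependency the commit message describes.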
Thanks,
Ming
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-05-21 2:58 ` Ming Lei
@ 2024-06-10 1:55 ` Pavel Begunkov
2024-06-11 13:32 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-10 1:55 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf
On 5/21/24 03:58, Ming Lei wrote:
> On Sat, May 11, 2024 at 08:12:08AM +0800, Ming Lei wrote:
>> [...]
>
> BTW, I wrote a link-grp-cp.c liburing example based on SQE group. It keeps
> QD unchanged and just reorganizes the IOs in the following way:
>
> - each group has 4 READ IOs, linked to one single WRITE IO that writes
> the data read by the group to the destination file
IIUC it's comparing 1 large write request with 4 small, and
it's not exactly anything close to fair. And you can do the same
in userspace (without links). And having control in userspace
you can do more fun tricks, like interleaving writes for one
batch with reads from the next batch.
> [...]
--
Pavel Begunkov
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-10 1:55 ` Pavel Begunkov
@ 2024-06-11 13:32 ` Ming Lei
2024-06-16 18:14 ` Pavel Begunkov
0 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-06-11 13:32 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On Mon, Jun 10, 2024 at 02:55:22AM +0100, Pavel Begunkov wrote:
> On 5/21/24 03:58, Ming Lei wrote:
> > On Sat, May 11, 2024 at 08:12:08AM +0800, Ming Lei wrote:
> > > [...]
> >
> > BTW, I wrote a link-grp-cp.c liburing example based on SQE group. It keeps
> > QD unchanged and just reorganizes the IOs in the following way:
> >
> > - each group has 4 READ IOs, linked to one single WRITE IO that writes
> > the data read by the group to the destination file
>
> IIUC it's comparing 1 large write request with 4 small, and
It is actually reasonable from the storage device viewpoint: concurrent
small READs are often faster than a single big READ, but concurrent small
writes are usually slower.
> it's not exactly anything close to fair. And you can do the same
> in userspace (without links). And having control in userspace

No, you can't do it with a single syscall.
Thanks,
Ming
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-11 13:32 ` Ming Lei
@ 2024-06-16 18:14 ` Pavel Begunkov
2024-06-17 1:42 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-16 18:14 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On 6/11/24 14:32, Ming Lei wrote:
> On Mon, Jun 10, 2024 at 02:55:22AM +0100, Pavel Begunkov wrote:
>> On 5/21/24 03:58, Ming Lei wrote:
>>> On Sat, May 11, 2024 at 08:12:08AM +0800, Ming Lei wrote:
>>>> [...]
>>>
>>> BTW, I wrote a link-grp-cp.c liburing example based on SQE group. It keeps
>>> QD unchanged and just reorganizes the IOs in the following way:
>>>
>>> - each group has 4 READ IOs, linked to one single WRITE IO that writes
>>> the data read by the group to the destination file
>>
>> IIUC it's comparing 1 large write request with 4 small, and
>
> It is actually reasonable from the storage device viewpoint: concurrent
> small READs are often faster than a single big READ, but concurrent small
> writes are usually slower.
It is, but that doesn't make the comparison apples to apples.
Even what I described, though it's better (same number
of syscalls but better parallelism, as you don't block the next
batch of reads behind writes), you could argue it's not a
completely fair comparison either, since it needs a different
number of buffers, etc.
>> it's not exactly anything close to fair. And you can do the same
>> in userspace (without links). And having control in userspace
>
> No, you can't do it with a single syscall.
That means you _can_ do it. And syscalls are not everything;
context switching turned out to be a bigger problem, and to execute
links io_uring does exactly that.
--
Pavel Begunkov
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-16 18:14 ` Pavel Begunkov
@ 2024-06-17 1:42 ` Ming Lei
0 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-06-17 1:42 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf, ming.lei
On Sun, Jun 16, 2024 at 07:14:37PM +0100, Pavel Begunkov wrote:
> On 6/11/24 14:32, Ming Lei wrote:
> > On Mon, Jun 10, 2024 at 02:55:22AM +0100, Pavel Begunkov wrote:
> > > On 5/21/24 03:58, Ming Lei wrote:
> > > > On Sat, May 11, 2024 at 08:12:08AM +0800, Ming Lei wrote:
> > > > > [...]
> > > >
> > > > BTW, I wrote a link-grp-cp.c liburing example based on SQE group. It keeps
> > > > QD unchanged and just reorganizes the IOs in the following way:
> > > >
> > > > - each group has 4 READ IOs, linked to one single WRITE IO that writes
> > > > the data read by the group to the destination file
> > >
> > > IIUC it's comparing 1 large write request with 4 small, and
> >
> > It is actually reasonable from the storage device viewpoint: concurrent
> > small READs are often faster than a single big READ, but concurrent small
> > writes are usually slower.
>
> It is, but that doesn't make the comparison apples to apples.
> Even what I described, though it's better (same number
> of syscalls but better parallelism, as you don't block the next
> batch of reads behind writes), you could argue it's not a
> completely fair comparison either, since it needs a different
> number of buffers, etc.
>
> > > it's not exactly anything close to fair. And you can do the same
> > > in userspace (without links). And having control in userspace
> >
> > No, you can't do it with a single syscall.
>
> That means you _can_ do it. And syscalls are not everything;
For ublk, syscalls do mean something, because each ublk IO is
handled by io_uring; if more syscalls are introduced for each ublk IO,
performance definitely degrades a lot, since IOPS can be at the million
level. Nowadays syscall PTI overhead does make a difference, please see:
https://lwn.net/Articles/752587/
> context switching turned out to be a bigger problem, and to execute
> links io_uring does exactly that.
If that were true, IO_LINK wouldn't have been needed, because you could
model the dependency via io_uring syscalls; unfortunately it isn't true.
IO_LINK not only simplifies application programming, but also avoids extra
syscalls. If you compare io_uring-cp.c (282 LOC) with link-cp.c (193 LOC) in
liburing/examples, you can see io_uring-cp.c is more complicated. Adding
one extra syscall (wait point) makes applications harder to write, especially
in modern async/.await programming environments.
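To put rough numbers on the syscall argument, here is a back-of-the-envelope model (chunk count and batch size are illustrative assumptions, not measurements): a classic read()/write() copy loop pays two syscalls per chunk, while linked read->write pairs batched into a single submit-and-wait io_uring_enter() pay one syscall per batch:

```python
def syscalls_plain(chunks):
    # classic copy loop: one read() plus one write() per chunk
    return 2 * chunks

def syscalls_uring_linked(chunks, batch):
    # one io_uring_enter() per batch both submits the linked
    # read->write pairs and waits for their completions
    return (chunks + batch - 1) // batch  # ceil(chunks / batch)

plain = syscalls_plain(4096)             # 2 syscalls per chunk
uring = syscalls_uring_linked(4096, 32)  # 1 syscall per 32 linked pairs
```

The gap only widens for workloads like ublk, where per-IO syscall overhead is multiplied by million-level IOPS.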
Thanks,
Ming
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-05-11 0:12 ` [PATCH V3 5/9] io_uring: support SQE group Ming Lei
2024-05-21 2:58 ` Ming Lei
@ 2024-06-10 2:53 ` Pavel Begunkov
2024-06-13 1:45 ` Ming Lei
1 sibling, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-10 2:53 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf
On 5/11/24 01:12, Ming Lei wrote:
> SQE group is defined as one chain of SQEs starting with the first SQE that
> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> doesn't have it set; it is similar to a chain of linked SQEs.
The main concern stays the same: it adds overhead to nearly every
single hot function I can think of, as well as lots of
complexity.
Another minor issue is REQ_F_INFLIGHT: as explained before,
cancellation has to be able to find all REQ_F_INFLIGHT
requests. Requests you add to a group can have that flag
but are not discoverable by core io_uring code.
Another note: I'll be looking deeper into this patch; there
is too much random tossing around of requests / refcounting
and other dependencies, as well as odd intertwining with
other parts.
> [...]
> ---
> include/linux/io_uring_types.h | 12 ++
> include/uapi/linux/io_uring.h | 4 +
> io_uring/io_uring.c | 255 +++++++++++++++++++++++++++++++--
> io_uring/io_uring.h | 16 +++
> io_uring/timeout.c | 2 +
> 5 files changed, 277 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> index 7a6b190c7da7..62311b0f0e0b 100644
> --- a/include/linux/io_uring_types.h
> +++ b/include/linux/io_uring_types.h
> @@ -666,6 +674,10 @@ struct io_kiocb {
> u64 extra1;
> u64 extra2;
> } big_cqe;
> +
> + /* all SQE group members linked here for group lead */
> + struct io_kiocb *grp_link;
> + int grp_refs;
> };
>
> struct io_overflow_cqe {
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index c184c9a312df..b87c5452de43 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -109,7 +109,8 @@
> IOSQE_IO_HARDLINK | IOSQE_ASYNC)
>
> #define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
> - IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
> + IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS | \
> + IOSQE_SQE_GROUP)
>
> #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
> REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
> @@ -915,6 +916,13 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
> {
> struct io_ring_ctx *ctx = req->ctx;
>
> + /*
> + * For group leader, cqe has to be committed after all members are
> + * committed, when the request becomes normal one.
> + */
> + if (unlikely(req_is_group_leader(req)))
> + return;
The copy of it inlined into flush_completions should
maintain a proper fast path.
if (req->flags & (CQE_SKIP | GROUP)) {
	if (req->flags & CQE_SKIP)
		continue;
	if (req->flags & GROUP) {}
}
> +
> if (unlikely(!io_fill_cqe_req(ctx, req))) {
> if (lockless_cq) {
> spin_lock(&ctx->completion_lock);
> @@ -926,6 +934,116 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
> }
> }
>
> +static inline bool need_queue_group_members(struct io_kiocb *req)
> +{
> + return req_is_group_leader(req) && req->grp_link;
> +}
> +
> +/* Can only be called after this request is issued */
> +static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
> +{
> + if (req->flags & REQ_F_SQE_GROUP) {
> + if (req_is_group_leader(req))
> + return req;
> + return req->grp_link;
I'm missing something: it seems io_group_sqe() adds all
requests of a group into a singly linked list via ->grp_link,
but here we return it as the leader. Confused.
> + }
> + return NULL;
> +}
> +
> +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
> +{
> + struct io_kiocb *member = req->grp_link;
> +
> + while (member) {
> + struct io_kiocb *next = member->grp_link;
> +
> + if (ignore_cqes)
> + member->flags |= REQ_F_CQE_SKIP;
> + if (!(member->flags & REQ_F_FAIL)) {
> + req_set_fail(member);
> + io_req_set_res(member, -ECANCELED, 0);
> + }
> + member = next;
> + }
> +}
> +
> +void io_queue_group_members(struct io_kiocb *req, bool async)
> +{
> + struct io_kiocb *member = req->grp_link;
> +
> + if (!member)
> + return;
> +
> + while (member) {
> + struct io_kiocb *next = member->grp_link;
> +
> + member->grp_link = req;
> + if (async)
> + member->flags |= REQ_F_FORCE_ASYNC;
> +
> + if (unlikely(member->flags & REQ_F_FAIL)) {
> + io_req_task_queue_fail(member, member->cqe.res);
> + } else if (member->flags & REQ_F_FORCE_ASYNC) {
> + io_req_task_queue(member);
> + } else {
> + io_queue_sqe(member);
> + }
> + member = next;
> + }
> + req->grp_link = NULL;
> +}
> +
> +static inline bool __io_complete_group_req(struct io_kiocb *req,
> + struct io_kiocb *lead)
> +{
> + WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
> +
> + if (WARN_ON_ONCE(lead->grp_refs <= 0))
> + return false;
> +
> + /*
> + * Set linked leader as failed if any member is failed, so
> + * the remained link chain can be terminated
> + */
> + if (unlikely((req->flags & REQ_F_FAIL) &&
> + ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
> + req_set_fail(lead);
> + return !--lead->grp_refs;
> +}
> +
> +/* Complete group request and collect completed leader for freeing */
> +static inline void io_complete_group_req(struct io_kiocb *req,
> + struct io_wq_work_list *grp_list)
> +{
> + struct io_kiocb *lead = get_group_leader(req);
> +
> + if (__io_complete_group_req(req, lead)) {
> + req->flags &= ~REQ_F_SQE_GROUP;
> + lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
> + if (!(lead->flags & REQ_F_CQE_SKIP))
> + io_req_commit_cqe(lead, lead->ctx->lockless_cq);
> +
> + if (req != lead) {
> + /*
> + * Add leader to free list if it isn't there
> + * otherwise clearing group flag for freeing it
> + * in current batch
> + */
> + if (!(lead->flags & REQ_F_SQE_GROUP))
> + wq_list_add_tail(&lead->comp_list, grp_list);
> + else
> + lead->flags &= ~REQ_F_SQE_GROUP;
> + }
> + } else if (req != lead) {
> + req->flags &= ~REQ_F_SQE_GROUP;
> + } else {
> + /*
> + * Leader's group flag clearing is delayed until it is
> + * removed from free list
> + */
> + }
> +}
> +
> static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> {
> struct io_ring_ctx *ctx = req->ctx;
> @@ -1427,6 +1545,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
> comp_list);
>
> if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
> + /*
> + * Group leader may be removed twice, don't free it
> + * if group flag isn't cleared, when some members
> + * aren't completed yet
> + */
> + if (req->flags & REQ_F_SQE_GROUP) {
> + node = req->comp_list.next;
> + req->flags &= ~REQ_F_SQE_GROUP;
> + continue;
> + }
> +
> if (req->flags & REQ_F_REFCOUNT) {
> node = req->comp_list.next;
> if (!req_ref_put_and_test(req))
> @@ -1459,6 +1588,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> __must_hold(&ctx->uring_lock)
> {
> struct io_submit_state *state = &ctx->submit_state;
> + struct io_wq_work_list grp_list = {NULL};
> struct io_wq_work_node *node;
>
> __io_cq_lock(ctx);
> @@ -1468,9 +1598,15 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>
> if (!(req->flags & REQ_F_CQE_SKIP))
> io_req_commit_cqe(req, ctx->lockless_cq);
> +
> + if (req->flags & REQ_F_SQE_GROUP)
Same note about hot path
> + io_complete_group_req(req, &grp_list);
> }
> __io_cq_unlock_post(ctx);
>
> + if (!wq_list_empty(&grp_list))
> + __wq_list_splice(&grp_list, state->compl_reqs.first);
What's the point of splicing it here instead of doing all
that under REQ_F_SQE_GROUP above?
> +
> if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
> io_free_batch_list(ctx, state->compl_reqs.first);
> INIT_WQ_LIST(&state->compl_reqs);
> @@ -1677,8 +1813,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
> struct io_kiocb *cur;
>
> /* need original cached_sq_head, but it was increased for each req */
> - io_for_each_link(cur, req)
> - seq--;
> + io_for_each_link(cur, req) {
> + if (req_is_group_leader(cur))
> + seq -= cur->grp_refs;
> + else
> + seq--;
> + }
> return seq;
> }
>
> @@ -1793,11 +1933,20 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> struct io_kiocb *nxt = NULL;
>
> if (req_ref_put_and_test(req)) {
> - if (req->flags & IO_REQ_LINK_FLAGS)
> - nxt = io_req_find_next(req);
> + /*
> + * CQEs have been posted in io_req_complete_post() except
> + * for group leader, and we can't advance the link for
> + * group leader until its CQE is posted.
> + *
> + * TODO: try to avoid defer and complete leader in io_wq
> + * context directly
> + */
> + if (!req_is_group_leader(req)) {
> + req->flags |= REQ_F_CQE_SKIP;
> + if (req->flags & IO_REQ_LINK_FLAGS)
> + nxt = io_req_find_next(req);
> + }
>
> - /* we have posted CQEs in io_req_complete_post() */
> - req->flags |= REQ_F_CQE_SKIP;
> io_free_req(req);
> }
> return nxt ? &nxt->work : NULL;
> @@ -1863,6 +2012,8 @@ void io_wq_submit_work(struct io_wq_work *work)
> }
> }
>
> + if (need_queue_group_members(req))
> + io_queue_group_members(req, true);
> do {
> ret = io_issue_sqe(req, issue_flags);
> if (ret != -EAGAIN)
> @@ -1977,6 +2128,9 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> */
> if (unlikely(ret))
> io_queue_async(req, ret);
> +
> + if (need_queue_group_members(req))
> + io_queue_group_members(req, false);
Request ownership is considered to be handed further at this
point and requests should not be touched. Only ret==0 from
io_issue_sqe it's still ours, but again it's handed somewhere
by io_queue_async().
> }
>
> static void io_queue_sqe_fallback(struct io_kiocb *req)
> @@ -2142,6 +2296,56 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> return def->prep(req, sqe);
> }
>
> +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> + struct io_kiocb *req)
> +{
> + /*
> + * Group chain is similar with link chain: starts with 1st sqe with
> + * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> + */
> + if (group->head) {
> + struct io_kiocb *lead = group->head;
> +
> + /* members can't be in link chain, can't be drained */
> + req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);
> + lead->grp_refs += 1;
> + group->last->grp_link = req;
> + group->last = req;
> +
> + if (req->flags & REQ_F_SQE_GROUP)
> + return NULL;
> +
> + req->grp_link = NULL;
> + req->flags |= REQ_F_SQE_GROUP;
> + group->head = NULL;
> + return lead;
> + } else if (req->flags & REQ_F_SQE_GROUP) {
> + group->head = req;
> + group->last = req;
> + req->grp_refs = 1;
> + req->flags |= REQ_F_SQE_GROUP_LEADER;
> + return NULL;
> + } else {
> + return req;
> + }
> +}
> +
> +static __cold struct io_kiocb *io_submit_fail_group(
> + struct io_submit_link *link, struct io_kiocb *req)
> +{
> + struct io_kiocb *lead = link->head;
> +
> + /*
> + * Instead of failing eagerly, continue assembling the group link
> + * if applicable and mark the leader with REQ_F_FAIL. The group
> + * flushing code should find the flag and handle the rest
> + */
> + if (lead && (lead->flags & IO_REQ_LINK_FLAGS) && !(lead->flags & REQ_F_FAIL))
> + req_fail_link_node(lead, -ECANCELED);
> +
> + return io_group_sqe(link, req);
> +}
> +
> static __cold int io_submit_fail_link(struct io_submit_link *link,
> struct io_kiocb *req, int ret)
> {
> @@ -2180,11 +2384,18 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
> {
> struct io_ring_ctx *ctx = req->ctx;
> struct io_submit_link *link = &ctx->submit_state.link;
> + struct io_submit_link *group = &ctx->submit_state.group;
>
> trace_io_uring_req_failed(sqe, req, ret);
>
> req_fail_link_node(req, ret);
>
> + if (group->head || (req->flags & REQ_F_SQE_GROUP)) {
> + req = io_submit_fail_group(group, req);
> + if (!req)
> + return 0;
> + }
> +
> /* cover both linked and non-linked request */
> return io_submit_fail_link(link, req, ret);
> }
> @@ -2232,7 +2443,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> const struct io_uring_sqe *sqe)
> __must_hold(&ctx->uring_lock)
> {
> - struct io_submit_link *link = &ctx->submit_state.link;
> + struct io_submit_state *state = &ctx->submit_state;
> int ret;
>
> ret = io_init_req(ctx, req, sqe);
> @@ -2241,9 +2452,17 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>
> trace_io_uring_submit_req(req);
>
> - if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
> - REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
> - req = io_link_sqe(link, req);
> + if (unlikely(state->group.head ||
A note rather to myself and for the future: all these checks,
including links and groups, can be folded under one common if.
> + (req->flags & REQ_F_SQE_GROUP))) {
> + req = io_group_sqe(&state->group, req);
> + if (!req)
> + return 0;
> + }
> +
> + if (unlikely(state->link.head ||
> + (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_FORCE_ASYNC |
> + REQ_F_FAIL)))) {
> + req = io_link_sqe(&state->link, req);
> if (!req)
> return 0;
> }
> @@ -2258,6 +2477,17 @@ static void io_submit_state_end(struct io_ring_ctx *ctx)
> {
> struct io_submit_state *state = &ctx->submit_state;
>
> + /* the last member must set REQ_F_SQE_GROUP */
> + if (unlikely(state->group.head)) {
> + struct io_kiocb *lead = state->group.head;
> +
> + state->group.last->grp_link = NULL;
> + if (lead->flags & IO_REQ_LINK_FLAGS)
> + io_link_sqe(&state->link, lead);
> + else
> + io_queue_sqe_fallback(lead);
> + }
> +
> if (unlikely(state->link.head))
> io_queue_sqe_fallback(state->link.head);
> /* flush only after queuing links as they can generate completions */
> @@ -2277,6 +2507,7 @@ static void io_submit_state_start(struct io_submit_state *state,
> state->submit_nr = max_ios;
> /* set only head, no need to init link_last in advance */
> state->link.head = NULL;
> + state->group.head = NULL;
> }
>
> static void io_commit_sqring(struct io_ring_ctx *ctx)
> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> index 624ca9076a50..b11db3bdd8d8 100644
> --- a/io_uring/io_uring.h
> +++ b/io_uring/io_uring.h
> @@ -67,6 +67,8 @@ void io_req_defer_failed(struct io_kiocb *req, s32 res);
> bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
> bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
> void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
> +void io_queue_group_members(struct io_kiocb *req, bool async);
> +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes);
>
> struct file *io_file_get_normal(struct io_kiocb *req, int fd);
> struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
> @@ -342,6 +344,16 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
> lockdep_assert_held(&ctx->uring_lock);
> }
>
> +static inline bool req_is_group_leader(struct io_kiocb *req)
> +{
> + return req->flags & REQ_F_SQE_GROUP_LEADER;
> +}
> +
> +static inline bool req_is_group_member(struct io_kiocb *req)
> +{
> + return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
> +}
> +
> /*
> * Don't complete immediately but use deferred completion infrastructure.
> * Protected by ->uring_lock and can only be used either with
> @@ -355,6 +367,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
> lockdep_assert_held(&req->ctx->uring_lock);
>
> wq_list_add_tail(&req->comp_list, &state->compl_reqs);
> +
> + /* members may not be issued when leader is completed */
> + if (unlikely(req_is_group_leader(req) && req->grp_link))
> + io_queue_group_members(req, false);
> }
>
> static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
--
Pavel Begunkov
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-10 2:53 ` Pavel Begunkov
@ 2024-06-13 1:45 ` Ming Lei
2024-06-16 19:13 ` Pavel Begunkov
0 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-06-13 1:45 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf, ming.lei
On Mon, Jun 10, 2024 at 03:53:51AM +0100, Pavel Begunkov wrote:
> On 5/11/24 01:12, Ming Lei wrote:
> > SQE group is defined as one chain of SQEs starting with the first SQE that
> > has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> > doesn't have it set, and it is similar with chain of linked SQEs.
>
> The main concern stays same, it adds overhead nearly to every
> single hot function I can think of, as well as lots of
> complexity.
Almost every sqe group change is covered by REQ_F_SQE_GROUP, so it is
not clear to me what the added overhead is.
>
> Another minor issue is REQ_F_INFLIGHT, as explained before,
> cancellation has to be able to find all REQ_F_INFLIGHT
> requests. Requests you add to a group can have that flag
> but are not discoverable by core io_uring code.
OK, we can deal with it by marking the leader as REQ_F_INFLIGHT if the
flag is set for any member, since all members are guaranteed to be
drained when the leader is completed. Will do it in V4.
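That propagation is simple to model standalone (the struct layout, flag value, and helper name below are simplified stand-ins, not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

#define REQ_F_INFLIGHT 0x1u

/* minimal stand-in for struct io_kiocb with just the fields needed here */
struct req {
	unsigned flags;
	struct req *grp_link;	/* leader: member list head; member: next member */
};

/*
 * If any member is REQ_F_INFLIGHT, mark the leader too, so cancellation
 * can find the group via the leader; members are drained before the
 * leader completes, so the leader's lifetime covers theirs.
 */
static void group_mark_inflight(struct req *lead)
{
	struct req *m;

	for (m = lead->grp_link; m; m = m->grp_link) {
		if (m->flags & REQ_F_INFLIGHT) {
			lead->flags |= REQ_F_INFLIGHT;
			return;
		}
	}
}
```

In the real code this would presumably run at group assembly or issue time, before the members become invisible to the core cancellation walk.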
>
> Another note, I'll be looking deeper into this patch, there
> is too much of random tossing around of requests / refcounting
> and other dependencies, as well as odd intertwinings with
> other parts.
The only thing wrt. request refcounting is for io-wq, since a request
reference is grabbed when the req is handled in io-wq context, and the
group leader needs to be completed after all members are done. That is
the only special change wrt. request refcounting.
>
> > Not like linked SQEs, each sqe is issued after the previous one is completed.
> > All SQEs in one group are submitted in parallel, so there isn't any dependency
> > among SQEs in one group.
> >
> > The 1st SQE is group leader, and the other SQEs are group member. The whole
> > group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> > the two flags are ignored for group members.
> >
> > When the group is in one link chain, this group isn't submitted until the
> > previous SQE or group is completed. And the following SQE or group can't
> > be started if this group isn't completed. Failure from any group member will
> > fail the group leader, then the link chain can be terminated.
> >
> > When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> > previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> > group leader only, we respect IO_DRAIN by always completing group leader as
> > the last one in the group.
> >
> > Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> > support N:M dependency, such as:
> >
> > - group A is chained with group B together
> > - group A has N SQEs
> > - group B has M SQEs
> >
> > then M SQEs in group B depend on N SQEs in group A.
> >
> > N:M dependency can support some interesting use cases in efficient way:
> >
> > 1) read from multiple files, then write the read data into single file
> >
> > 2) read from single file, and write the read data into multiple files
> >
> > 3) write same data into multiple files, and read data from multiple files and
> > compare if correct data is written
> >
> > Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> > extend sqe->flags with one uring context flag, such as use __pad3 for
> > non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
> >
> > Suggested-by: Kevin Wolf <[email protected]>
> > Signed-off-by: Ming Lei <[email protected]>
> > ---
> > include/linux/io_uring_types.h | 12 ++
> > include/uapi/linux/io_uring.h | 4 +
> > io_uring/io_uring.c | 255 +++++++++++++++++++++++++++++++--
> > io_uring/io_uring.h | 16 +++
> > io_uring/timeout.c | 2 +
> > 5 files changed, 277 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> > index 7a6b190c7da7..62311b0f0e0b 100644
> > --- a/include/linux/io_uring_types.h
> > +++ b/include/linux/io_uring_types.h
> > @@ -666,6 +674,10 @@ struct io_kiocb {
> > u64 extra1;
> > u64 extra2;
> > } big_cqe;
> > +
> > + /* all SQE group members linked here for group lead */
> > + struct io_kiocb *grp_link;
> > + int grp_refs;
> > };
> > struct io_overflow_cqe {
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index c184c9a312df..b87c5452de43 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -109,7 +109,8 @@
> > IOSQE_IO_HARDLINK | IOSQE_ASYNC)
> > #define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
> > - IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
> > + IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS | \
> > + IOSQE_SQE_GROUP)
> > #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
> > REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
> > @@ -915,6 +916,13 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
> > {
> > struct io_ring_ctx *ctx = req->ctx;
> > + /*
> > + * For group leader, cqe has to be committed after all members are
> > + * committed, when the request becomes normal one.
> > + */
> > + if (unlikely(req_is_group_leader(req)))
> > + return;
>
> The copy of it inlined into flush_completions should
> maintain a proper fast path.
>
> if (req->flags & (CQE_SKIP | GROUP)) {
> if (req->flags & CQE_SKIP)
> continue;
> if (req->flags & GROUP) {}
OK, I will try to do that in the above way.
> }
>
> > +
> > if (unlikely(!io_fill_cqe_req(ctx, req))) {
> > if (lockless_cq) {
> > spin_lock(&ctx->completion_lock);
> > @@ -926,6 +934,116 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
> > }
> > }
> > +static inline bool need_queue_group_members(struct io_kiocb *req)
> > +{
> > + return req_is_group_leader(req) && req->grp_link;
> > +}
> > +
> > +/* Can only be called after this request is issued */
> > +static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
> > +{
> > + if (req->flags & REQ_F_SQE_GROUP) {
> > + if (req_is_group_leader(req))
> > + return req;
> > + return req->grp_link;
>
> I'm missing something; it seems io_group_sqe() adds all
> requests of a group into a singly linked list via ->grp_link,
> but here we return it as a leader. Confused.
->grp_link stores the singly linked member list for a group leader, and
the same field stores the group leader pointer for group member requests.
For the latter, we can add a union field to make the code more readable.
Will do that in V4.
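A standalone sketch of such a union (field names hypothetical, not from the patch), mirroring how io_queue_group_members() repurposes the field when issuing members:

```c
#include <assert.h>
#include <stddef.h>

struct req {
	union {
		struct req *grp_link;	/* leader: member list; member before
					 * issue: next member in that list */
		struct req *grp_leader;	/* member after issue: back-pointer
					 * to the group leader */
	};
};

/*
 * Mirror of the repurposing in io_queue_group_members(): walk the member
 * list, then overwrite each member's slot with the leader pointer. The
 * union names make the change of meaning explicit at each site.
 */
static void queue_members(struct req *lead)
{
	struct req *m = lead->grp_link;

	while (m) {
		struct req *next = m->grp_link;	/* save list linkage first */

		m->grp_leader = lead;
		m = next;
	}
	lead->grp_link = NULL;
}
```

Same storage either way; the union only buys readability at each access site.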
>
> > + }
> > + return NULL;
> > +}
> > +
> > +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
> > +{
> > + struct io_kiocb *member = req->grp_link;
> > +
> > + while (member) {
> > + struct io_kiocb *next = member->grp_link;
> > +
> > + if (ignore_cqes)
> > + member->flags |= REQ_F_CQE_SKIP;
> > + if (!(member->flags & REQ_F_FAIL)) {
> > + req_set_fail(member);
> > + io_req_set_res(member, -ECANCELED, 0);
> > + }
> > + member = next;
> > + }
> > +}
> > +
> > +void io_queue_group_members(struct io_kiocb *req, bool async)
> > +{
> > + struct io_kiocb *member = req->grp_link;
> > +
> > + if (!member)
> > + return;
> > +
> > + while (member) {
> > + struct io_kiocb *next = member->grp_link;
> > +
> > + member->grp_link = req;
> > + if (async)
> > + member->flags |= REQ_F_FORCE_ASYNC;
> > +
> > + if (unlikely(member->flags & REQ_F_FAIL)) {
> > + io_req_task_queue_fail(member, member->cqe.res);
> > + } else if (member->flags & REQ_F_FORCE_ASYNC) {
> > + io_req_task_queue(member);
> > + } else {
> > + io_queue_sqe(member);
> > + }
> > + member = next;
> > + }
> > + req->grp_link = NULL;
> > +}
> > +
> > +static inline bool __io_complete_group_req(struct io_kiocb *req,
> > + struct io_kiocb *lead)
> > +{
> > + WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
> > +
> > + if (WARN_ON_ONCE(lead->grp_refs <= 0))
> > + return false;
> > +
> > + /*
> > + * Set linked leader as failed if any member is failed, so
> > + * the remained link chain can be terminated
> > + */
> > + if (unlikely((req->flags & REQ_F_FAIL) &&
> > + ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
> > + req_set_fail(lead);
> > + return !--lead->grp_refs;
> > +}
> > +
> > +/* Complete group request and collect completed leader for freeing */
> > +static inline void io_complete_group_req(struct io_kiocb *req,
> > + struct io_wq_work_list *grp_list)
> > +{
> > + struct io_kiocb *lead = get_group_leader(req);
> > +
> > + if (__io_complete_group_req(req, lead)) {
> > + req->flags &= ~REQ_F_SQE_GROUP;
> > + lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
> > + if (!(lead->flags & REQ_F_CQE_SKIP))
> > + io_req_commit_cqe(lead, lead->ctx->lockless_cq);
> > +
> > + if (req != lead) {
> > + /*
> > + * Add leader to free list if it isn't there
> > + * otherwise clearing group flag for freeing it
> > + * in current batch
> > + */
> > + if (!(lead->flags & REQ_F_SQE_GROUP))
> > + wq_list_add_tail(&lead->comp_list, grp_list);
> > + else
> > + lead->flags &= ~REQ_F_SQE_GROUP;
> > + }
> > + } else if (req != lead) {
> > + req->flags &= ~REQ_F_SQE_GROUP;
> > + } else {
> > + /*
> > + * Leader's group flag clearing is delayed until it is
> > + * removed from free list
> > + */
> > + }
> > +}
> > +
> > static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> > {
> > struct io_ring_ctx *ctx = req->ctx;
> > @@ -1427,6 +1545,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
> > comp_list);
> > if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
> > + /*
> > + * Group leader may be removed twice, don't free it
> > + * if group flag isn't cleared, when some members
> > + * aren't completed yet
> > + */
> > + if (req->flags & REQ_F_SQE_GROUP) {
> > + node = req->comp_list.next;
> > + req->flags &= ~REQ_F_SQE_GROUP;
> > + continue;
> > + }
> > +
> > if (req->flags & REQ_F_REFCOUNT) {
> > node = req->comp_list.next;
> > if (!req_ref_put_and_test(req))
> > @@ -1459,6 +1588,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > __must_hold(&ctx->uring_lock)
> > {
> > struct io_submit_state *state = &ctx->submit_state;
> > + struct io_wq_work_list grp_list = {NULL};
> > struct io_wq_work_node *node;
> > __io_cq_lock(ctx);
> > @@ -1468,9 +1598,15 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > if (!(req->flags & REQ_F_CQE_SKIP))
> > io_req_commit_cqe(req, ctx->lockless_cq);
> > +
> > + if (req->flags & REQ_F_SQE_GROUP)
>
> Same note about hot path
>
> > + io_complete_group_req(req, &grp_list);
> > }
> > __io_cq_unlock_post(ctx);
> > + if (!wq_list_empty(&grp_list))
> > + __wq_list_splice(&grp_list, state->compl_reqs.first);
>
> What's the point of splicing it here instead of doing all
> that under REQ_F_SQE_GROUP above?
As mentioned, a group leader can't be completed until all members are
done, so any leaders in the current list have to be moved to this
local list for deferred completion. That should be the only tricky
part of the whole sqe group implementation.
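The deferral can be modeled with just the refcount (structure and names simplified; the real code parks completed leaders on grp_list and splices them back before io_free_batch_list() runs):

```c
#include <assert.h>
#include <stddef.h>

struct req {
	int grp_refs;		/* leader: outstanding requests in the group,
				 * counting the leader itself */
	struct req *lead;	/* back-pointer; the leader points at itself */
};

/*
 * Each completion drops the leader's reference; only the completion that
 * drops it to zero may hand the leader over for freeing. This is why a
 * leader seen mid-batch cannot be freed in place and must be deferred.
 */
static struct req *complete_one(struct req *r)
{
	struct req *lead = r->lead;

	return --lead->grp_refs == 0 ? lead : NULL;
}
```

With grp_refs initialized to members + 1, completing the leader first (the IOSQE_IO_DRAIN ordering requirement) still releases it only after the last member.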
>
> > +
> > if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
> > io_free_batch_list(ctx, state->compl_reqs.first);
> > INIT_WQ_LIST(&state->compl_reqs);
> > @@ -1677,8 +1813,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
> > struct io_kiocb *cur;
> > /* need original cached_sq_head, but it was increased for each req */
> > - io_for_each_link(cur, req)
> > - seq--;
> > + io_for_each_link(cur, req) {
> > + if (req_is_group_leader(cur))
> > + seq -= cur->grp_refs;
> > + else
> > + seq--;
> > + }
> > return seq;
> > }
> > @@ -1793,11 +1933,20 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> > struct io_kiocb *nxt = NULL;
> > if (req_ref_put_and_test(req)) {
> > - if (req->flags & IO_REQ_LINK_FLAGS)
> > - nxt = io_req_find_next(req);
> > + /*
> > + * CQEs have been posted in io_req_complete_post() except
> > + * for group leader, and we can't advance the link for
> > + * group leader until its CQE is posted.
> > + *
> > + * TODO: try to avoid defer and complete leader in io_wq
> > + * context directly
> > + */
> > + if (!req_is_group_leader(req)) {
> > + req->flags |= REQ_F_CQE_SKIP;
> > + if (req->flags & IO_REQ_LINK_FLAGS)
> > + nxt = io_req_find_next(req);
> > + }
> > - /* we have posted CQEs in io_req_complete_post() */
> > - req->flags |= REQ_F_CQE_SKIP;
> > io_free_req(req);
> > }
> > return nxt ? &nxt->work : NULL;
> > @@ -1863,6 +2012,8 @@ void io_wq_submit_work(struct io_wq_work *work)
> > }
> > }
> > + if (need_queue_group_members(req))
> > + io_queue_group_members(req, true);
> > do {
> > ret = io_issue_sqe(req, issue_flags);
> > if (ret != -EAGAIN)
> > @@ -1977,6 +2128,9 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> > */
> > if (unlikely(ret))
> > io_queue_async(req, ret);
> > +
> > + if (need_queue_group_members(req))
> > + io_queue_group_members(req, false);
>
> Request ownership is considered to be handed further at this
> point and requests should not be touched. Only ret==0 from
> io_issue_sqe it's still ours, but again it's handed somewhere
> by io_queue_async().
Yes, you are right.
And it has been fixed in my local tree:
@@ -2154,8 +2154,7 @@ static inline void io_queue_sqe(struct io_kiocb *req)
*/
if (unlikely(ret))
io_queue_async(req, ret);
-
- if (need_queue_group_members(req))
+ else if (need_queue_group_members(req))
io_queue_group_members(req, false);
}
>
> > }
> > static void io_queue_sqe_fallback(struct io_kiocb *req)
> > @@ -2142,6 +2296,56 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > return def->prep(req, sqe);
> > }
> > +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> > + struct io_kiocb *req)
> > +{
> > + /*
> > + * Group chain is similar with link chain: starts with 1st sqe with
> > + * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> > + */
> > + if (group->head) {
> > + struct io_kiocb *lead = group->head;
> > +
> > + /* members can't be in link chain, can't be drained */
> > + req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);
> > + lead->grp_refs += 1;
> > + group->last->grp_link = req;
> > + group->last = req;
> > +
> > + if (req->flags & REQ_F_SQE_GROUP)
> > + return NULL;
> > +
> > + req->grp_link = NULL;
> > + req->flags |= REQ_F_SQE_GROUP;
> > + group->head = NULL;
> > + return lead;
> > + } else if (req->flags & REQ_F_SQE_GROUP) {
> > + group->head = req;
> > + group->last = req;
> > + req->grp_refs = 1;
> > + req->flags |= REQ_F_SQE_GROUP_LEADER;
> > + return NULL;
> > + } else {
> > + return req;
> > + }
> > +}
> > +
> > +static __cold struct io_kiocb *io_submit_fail_group(
> > + struct io_submit_link *link, struct io_kiocb *req)
> > +{
> > + struct io_kiocb *lead = link->head;
> > +
> > + /*
> > + * Instead of failing eagerly, continue assembling the group link
> > + * if applicable and mark the leader with REQ_F_FAIL. The group
> > + * flushing code should find the flag and handle the rest
> > + */
> > + if (lead && (lead->flags & IO_REQ_LINK_FLAGS) && !(lead->flags & REQ_F_FAIL))
> > + req_fail_link_node(lead, -ECANCELED);
> > +
> > + return io_group_sqe(link, req);
> > +}
> > +
> > static __cold int io_submit_fail_link(struct io_submit_link *link,
> > struct io_kiocb *req, int ret)
> > {
> > @@ -2180,11 +2384,18 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
> > {
> > struct io_ring_ctx *ctx = req->ctx;
> > struct io_submit_link *link = &ctx->submit_state.link;
> > + struct io_submit_link *group = &ctx->submit_state.group;
> > trace_io_uring_req_failed(sqe, req, ret);
> > req_fail_link_node(req, ret);
> > + if (group->head || (req->flags & REQ_F_SQE_GROUP)) {
> > + req = io_submit_fail_group(group, req);
> > + if (!req)
> > + return 0;
> > + }
> > +
> > /* cover both linked and non-linked request */
> > return io_submit_fail_link(link, req, ret);
> > }
> > @@ -2232,7 +2443,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > const struct io_uring_sqe *sqe)
> > __must_hold(&ctx->uring_lock)
> > {
> > - struct io_submit_link *link = &ctx->submit_state.link;
> > + struct io_submit_state *state = &ctx->submit_state;
> > int ret;
> > ret = io_init_req(ctx, req, sqe);
> > @@ -2241,9 +2452,17 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > trace_io_uring_submit_req(req);
> > - if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
> > - REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
> > - req = io_link_sqe(link, req);
> > + if (unlikely(state->group.head ||
>
> A note rather to myself and for the future: all these checks,
> including links and groups, can be folded under one common if.
Sorry, I may not get the idea; can you provide an example?
We need different logic for groups and links; meanwhile, the group
has to be handled first, before linking, since only the group leader
can be linked.
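A toy of what such folding might look like (flag values, the head booleans, and the helper are invented for illustration; the ordering keeps group handling ahead of link handling, since only the group leader can be linked):

```c
#include <assert.h>

#define F_GROUP 0x1u
#define F_LINK  0x2u
#define F_ASYNC 0x4u
#define F_FAIL  0x8u

/*
 * One unlikely branch guards both slow paths; inside it, the group step
 * runs before the link step. Returns a bitmask of which steps fired, so
 * the fast path (no heads, no special flags) costs a single test.
 */
static unsigned submit_slowpath(unsigned flags, int group_head, int link_head)
{
	unsigned taken = 0;

	if (group_head || link_head ||
	    (flags & (F_GROUP | F_LINK | F_ASYNC | F_FAIL))) {
		if (group_head || (flags & F_GROUP))
			taken |= F_GROUP;	/* io_group_sqe() step */
		if (link_head || (flags & (F_LINK | F_ASYNC | F_FAIL)))
			taken |= F_LINK;	/* io_link_sqe() step */
	}
	return taken;
}
```

The two inner tests stay distinct; only the outer gate is shared, which seems to be what the note suggests.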
Thanks,
Ming
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-13 1:45 ` Ming Lei
@ 2024-06-16 19:13 ` Pavel Begunkov
2024-06-17 3:54 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-16 19:13 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On 6/13/24 02:45, Ming Lei wrote:
> On Mon, Jun 10, 2024 at 03:53:51AM +0100, Pavel Begunkov wrote:
>> On 5/11/24 01:12, Ming Lei wrote:
>>> SQE group is defined as one chain of SQEs starting with the first SQE that
>>> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
>>> doesn't have it set, and it is similar with chain of linked SQEs.
>>
>> The main concern stays same, it adds overhead nearly to every
>> single hot function I can think of, as well as lots of
>> complexity.
>
> Almost every sqe group change is covered by REQ_F_SQE_GROUP, so I am
> not clear what the added overhead is.
Yes, and there are a dozen such checks in the hot path.
>> Another minor issue is REQ_F_INFLIGHT, as explained before,
>> cancellation has to be able to find all REQ_F_INFLIGHT
>> requests. Requests you add to a group can have that flag
>> but are not discoverable by core io_uring code.
>
> OK, we can deal with it by setting leader as REQ_F_INFLIGHT if the
> flag is set for any member, since all members are guaranteed to
> be drained when leader is completed. Will do it in V4.
Or fail if we see one, that's also fine. REQ_F_INFLIGHT is
only set for POLL requests polling another io_uring.
>> Another note, I'll be looking deeper into this patch, there
>> is too much of random tossing around of requests / refcounting
>> and other dependencies, as well as odd intertwinings with
>> other parts.
>
> The only thing wrt. request refcount is for io-wq, since request
> reference is grabbed when the req is handled in io-wq context, and
> group leader need to be completed after all members are done. That
> is all special change wrt. request refcounting.
I rather meant refcounting of the group leader, even if it's not
atomic.
>>> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
>>> index 7a6b190c7da7..62311b0f0e0b 100644
>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>> index c184c9a312df..b87c5452de43 100644
>>> --- a/io_uring/io_uring.c
>>> +++ b/io_uring/io_uring.c
...
>>> }
>>> }
>>> +static inline bool need_queue_group_members(struct io_kiocb *req)
>>> +{
>>> + return req_is_group_leader(req) && req->grp_link;
>>> +}
>>> +
>>> +/* Can only be called after this request is issued */
>>> +static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
>>> +{
>>> + if (req->flags & REQ_F_SQE_GROUP) {
>>> + if (req_is_group_leader(req))
>>> + return req;
>>> + return req->grp_link;
>>
>> I'm missing something; it seems io_group_sqe() adds all
>> requests of a group into a singly linked list via ->grp_link,
>> but here we return it as a leader. Confused.
>
> ->grp_link stores the singly linked member list for a group leader, and
> the same field stores the group leader pointer for group member requests.
> For the latter, we can add a union field to make the code more readable.
> Will do that in V4.
So you're repurposing it in io_queue_group_members(). Since it has a
different meaning at different stages of execution, it warrants a
comment (unless there is one I missed).
>>> + }
>>> + return NULL;
>>> +}
>>> +
>>> +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
>>> +{
>>> + struct io_kiocb *member = req->grp_link;
>>> +
>>> + while (member) {
>>> + struct io_kiocb *next = member->grp_link;
>>> +
>>> + if (ignore_cqes)
>>> + member->flags |= REQ_F_CQE_SKIP;
>>> + if (!(member->flags & REQ_F_FAIL)) {
>>> + req_set_fail(member);
>>> + io_req_set_res(member, -ECANCELED, 0);
>>> + }
>>> + member = next;
>>> + }
>>> +}
>>> +
>>> +void io_queue_group_members(struct io_kiocb *req, bool async)
>>> +{
>>> + struct io_kiocb *member = req->grp_link;
>>> +
>>> + if (!member)
>>> + return;
>>> +
>>> + while (member) {
>>> + struct io_kiocb *next = member->grp_link;
>>> +
>>> + member->grp_link = req;
>>> + if (async)
>>> + member->flags |= REQ_F_FORCE_ASYNC;
>>> +
>>> + if (unlikely(member->flags & REQ_F_FAIL)) {
>>> + io_req_task_queue_fail(member, member->cqe.res);
>>> + } else if (member->flags & REQ_F_FORCE_ASYNC) {
>>> + io_req_task_queue(member);
>>> + } else {
>>> + io_queue_sqe(member);
io_req_queue_tw_complete() please, just like links deal
with it, so it's executed in a well known context without
jumping ahead of other requests.
>>> + }
>>> + member = next;
>>> + }
>>> + req->grp_link = NULL;
>>> +}
>>> +
>>> +static inline bool __io_complete_group_req(struct io_kiocb *req,
>>> + struct io_kiocb *lead)
>>> +{
>>> + WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
>>> +
>>> + if (WARN_ON_ONCE(lead->grp_refs <= 0))
>>> + return false;
>>> +
>>> + /*
>>> + * Set linked leader as failed if any member is failed, so
>>> + * the remained link chain can be terminated
>>> + */
>>> + if (unlikely((req->flags & REQ_F_FAIL) &&
>>> + ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
>>> + req_set_fail(lead);
>>> + return !--lead->grp_refs;
>>> +}
>>> +
>>> +/* Complete group request and collect completed leader for freeing */
>>> +static inline void io_complete_group_req(struct io_kiocb *req,
>>> + struct io_wq_work_list *grp_list)
>>> +{
>>> + struct io_kiocb *lead = get_group_leader(req);
>>> +
>>> + if (__io_complete_group_req(req, lead)) {
>>> + req->flags &= ~REQ_F_SQE_GROUP;
>>> + lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
>>> + if (!(lead->flags & REQ_F_CQE_SKIP))
>>> + io_req_commit_cqe(lead, lead->ctx->lockless_cq);
>>> +
>>> + if (req != lead) {
>>> + /*
>>> + * Add leader to free list if it isn't there
>>> + * otherwise clearing group flag for freeing it
>>> + * in current batch
>>> + */
>>> + if (!(lead->flags & REQ_F_SQE_GROUP))
>>> + wq_list_add_tail(&lead->comp_list, grp_list);
>>> + else
>>> + lead->flags &= ~REQ_F_SQE_GROUP;
>>> + }
>>> + } else if (req != lead) {
>>> + req->flags &= ~REQ_F_SQE_GROUP;
>>> + } else {
>>> + /*
>>> + * Leader's group flag clearing is delayed until it is
>>> + * removed from free list
>>> + */
>>> + }
>>> +}
>>> +
>>> static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
>>> {
>>> struct io_ring_ctx *ctx = req->ctx;
>>> @@ -1427,6 +1545,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
>>> comp_list);
>>> if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
>>> + /*
>>> + * Group leader may be removed twice, don't free it
>>> + * if group flag isn't cleared, when some members
>>> + * aren't completed yet
>>> + */
>>> + if (req->flags & REQ_F_SQE_GROUP) {
>>> + node = req->comp_list.next;
>>> + req->flags &= ~REQ_F_SQE_GROUP;
>>> + continue;
>>> + }
>>> +
>>> if (req->flags & REQ_F_REFCOUNT) {
>>> node = req->comp_list.next;
>>> if (!req_ref_put_and_test(req))
>>> @@ -1459,6 +1588,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>>> __must_hold(&ctx->uring_lock)
>>> {
>>> struct io_submit_state *state = &ctx->submit_state;
>>> + struct io_wq_work_list grp_list = {NULL};
>>> struct io_wq_work_node *node;
>>> __io_cq_lock(ctx);
>>> @@ -1468,9 +1598,15 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>>> if (!(req->flags & REQ_F_CQE_SKIP))
>>> io_req_commit_cqe(req, ctx->lockless_cq);
>>> +
>>> + if (req->flags & REQ_F_SQE_GROUP)
>>
>> Same note about hot path
>>
>>> + io_complete_group_req(req, &grp_list);
>>> }
>>> __io_cq_unlock_post(ctx);
>>> + if (!wq_list_empty(&grp_list))
>>> + __wq_list_splice(&grp_list, state->compl_reqs.first);
>>
>> What's the point of splicing it here instead of doing all
>> that under REQ_F_SQE_GROUP above?
>
> As mentioned, group leader can't be completed until all members are
> done, so any leaders in the current list have to be moved to this
> local list for deferred completion. That should be the only tricky
> part of the whole sqe group implementation.
>
>>
>>> +
>>> if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
>>> io_free_batch_list(ctx, state->compl_reqs.first);
>>> INIT_WQ_LIST(&state->compl_reqs);
...
>>> @@ -1863,6 +2012,8 @@ void io_wq_submit_work(struct io_wq_work *work)
>>> }
>>> }
>>> + if (need_queue_group_members(req))
>>> + io_queue_group_members(req, true);
>>> do {
>>> ret = io_issue_sqe(req, issue_flags);
>>> if (ret != -EAGAIN)
>>> @@ -1977,6 +2128,9 @@ static inline void io_queue_sqe(struct io_kiocb *req)
>>> */
>>> if (unlikely(ret))
>>> io_queue_async(req, ret);
>>> +
>>> + if (need_queue_group_members(req))
>>> + io_queue_group_members(req, false);
>>
>> Request ownership is considered to be handed further at this
>> point and requests should not be touched. Only ret==0 from
>> io_issue_sqe it's still ours, but again it's handed somewhere
>> by io_queue_async().
>
> Yes, you are right.
>
> And it has been fixed in my local tree:
>
> @@ -2154,8 +2154,7 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> */
> if (unlikely(ret))
> io_queue_async(req, ret);
> -
> - if (need_queue_group_members(req))
> + else if (need_queue_group_members(req))
> io_queue_group_members(req, false);
> }
In the else branch you don't own the request anymore
and shouldn't be poking into it.
It looks like you're trying to do io_queue_group_members()
when previously the request would get completed. It's not
the right place, and apart from whack'a'moled
io_wq_submit_work() there is also io_poll_issue() missed.
Seems __io_submit_flush_completions() / io_free_batch_list()
would be more appropriate, and you already have a chunk with
GROUP check in there handling the leader appearing in there
twice.
>>> }
>>> static void io_queue_sqe_fallback(struct io_kiocb *req)
...
>>> @@ -2232,7 +2443,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>> const struct io_uring_sqe *sqe)
>>> __must_hold(&ctx->uring_lock)
>>> {
>>> - struct io_submit_link *link = &ctx->submit_state.link;
>>> + struct io_submit_state *state = &ctx->submit_state;
>>> int ret;
>>> ret = io_init_req(ctx, req, sqe);
>>> @@ -2241,9 +2452,17 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>> trace_io_uring_submit_req(req);
>>> - if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
>>> - REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
>>> - req = io_link_sqe(link, req);
>>> + if (unlikely(state->group.head ||
>>
>> A note rather to myself and for the future, all these checks
>> including links and groups can be folded under one common if.
>
> Sorry, I may not get the idea, can you provide one example?
To be clear, not suggesting you do it.
Simplifying:
init_req() {
if (req->flags & GROUP|LINK) {
ctx->assembling;
}
}
io_submit_sqe() {
init_req();
if (ctx->assembling) {
check_groups/links();
if (done);
ctx->assembling = false;
}
}
>
> We need different logic for group and link; meantime group
> has to be handled before linking, since only the group leader
> can be linked.
--
Pavel Begunkov
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH V3 5/9] io_uring: support SQE group
2024-06-16 19:13 ` Pavel Begunkov
@ 2024-06-17 3:54 ` Ming Lei
0 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-06-17 3:54 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf, ming.lei
On Sun, Jun 16, 2024 at 08:13:26PM +0100, Pavel Begunkov wrote:
> On 6/13/24 02:45, Ming Lei wrote:
> > On Mon, Jun 10, 2024 at 03:53:51AM +0100, Pavel Begunkov wrote:
> > > On 5/11/24 01:12, Ming Lei wrote:
> > > > SQE group is defined as one chain of SQEs starting with the first SQE that
> > > > has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> > > > doesn't have it set, and it is similar with chain of linked SQEs.
> > >
> > > The main concern stays the same, it adds overhead to nearly every
> > > single hot function I can think of, as well as lots of
> > > complexity.
> >
> > Almost every sqe group change is covered by REQ_F_SQE_GROUP, so I am
> > not clear what the added overhead is.
>
> Yes, and there is a dozen of such in the hot path.
req->flags is supposed to be L1-cached in all these hot paths, and the
check is basically zero cost, so SQE_GROUP shouldn't add extra cost to
the existing io_uring core code paths.
>
> > > Another minor issue is REQ_F_INFLIGHT, as explained before,
> > > cancellation has to be able to find all REQ_F_INFLIGHT
> > > requests. Requests you add to a group can have that flag
> > > but are not discoverable by core io_uring code.
> >
> > OK, we can deal with it by setting leader as REQ_F_INFLIGHT if the
> > flag is set for any member, since all members are guaranteed to
> > be drained when leader is completed. Will do it in V4.
>
> Or fail if see one, that's also fine. REQ_F_INFLIGHT is
> only set for POLL requests polling another io_uring.
It is also set for read-write/tee/splice ops on normal files, so it
doesn't look safe to fail.
>
> > > Another note, I'll be looking deeper into this patch, there
> > > is too much of random tossing around of requests / refcounting
> > > and other dependencies, as well as odd intertwinings with
> > > other parts.
> >
> > The only thing wrt. request refcount is for io-wq, since request
> > reference is grabbed when the req is handled in io-wq context, and
> > group leader need to be completed after all members are done. That
> > is all special change wrt. request refcounting.
>
> I rather mean refcounting the group leader, even if it's not
> atomic.
If you mean reusing req->refs for refcounting the group leader, it may
not work, because members can complete from io-wq while the leader may not.
Meantime the dedicated ->grp_refs actually simplifies things a lot.
>
> > > > diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> > > > index 7a6b190c7da7..62311b0f0e0b 100644
> > > > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > > index c184c9a312df..b87c5452de43 100644
> > > > --- a/io_uring/io_uring.c
> > > > +++ b/io_uring/io_uring.c
> ...
> > > > }
> > > > }
> > > > +static inline bool need_queue_group_members(struct io_kiocb *req)
> > > > +{
> > > > + return req_is_group_leader(req) && req->grp_link;
> > > > +}
> > > > +
> > > > +/* Can only be called after this request is issued */
> > > > +static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
> > > > +{
> > > > + if (req->flags & REQ_F_SQE_GROUP) {
> > > > + if (req_is_group_leader(req))
> > > > + return req;
> > > > + return req->grp_link;
> > >
> > > I'm missing something, it seems io_group_sqe() adding all
> > > requests of a group into a singly linked list via ->grp_link,
> > > but here we return it as a leader. Confused.
> >
> > ->grp_link stores the singly linked list of members for the group leader, and
> > the same field stores the group leader pointer for group member requests.
> > For the latter, we can add one union field to make the code more readable.
> > Will do that in V4.
>
> So you're repurposing it in io_queue_group_members(). Since
> it has different meaning at different stages of execution,
> it warrants a comment (unless there is one I missed).
OK, either adding comment or another union field for it.
>
> > > > + }
> > > > + return NULL;
> > > > +}
> > > > +
> > > > +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
> > > > +{
> > > > + struct io_kiocb *member = req->grp_link;
> > > > +
> > > > + while (member) {
> > > > + struct io_kiocb *next = member->grp_link;
> > > > +
> > > > + if (ignore_cqes)
> > > > + member->flags |= REQ_F_CQE_SKIP;
> > > > + if (!(member->flags & REQ_F_FAIL)) {
> > > > + req_set_fail(member);
> > > > + io_req_set_res(member, -ECANCELED, 0);
> > > > + }
> > > > + member = next;
> > > > + }
> > > > +}
> > > > +
> > > > +void io_queue_group_members(struct io_kiocb *req, bool async)
> > > > +{
> > > > + struct io_kiocb *member = req->grp_link;
> > > > +
> > > > + if (!member)
> > > > + return;
> > > > +
> > > > + while (member) {
> > > > + struct io_kiocb *next = member->grp_link;
> > > > +
> > > > + member->grp_link = req;
> > > > + if (async)
> > > > + member->flags |= REQ_F_FORCE_ASYNC;
> > > > +
> > > > + if (unlikely(member->flags & REQ_F_FAIL)) {
> > > > + io_req_task_queue_fail(member, member->cqe.res);
> > > > + } else if (member->flags & REQ_F_FORCE_ASYNC) {
> > > > + io_req_task_queue(member);
> > > > + } else {
> > > > + io_queue_sqe(member);
>
> io_req_queue_tw_complete() please, just like links deal
> with it, so it's executed in a well known context without
> jumping ahead of other requests.
For plain SQE_GROUP, members needn't wait until the leader is completed
before being queued, otherwise performance can drop.
>
> > > > + }
> > > > + member = next;
> > > > + }
> > > > + req->grp_link = NULL;
> > > > +}
> > > > +
> > > > +static inline bool __io_complete_group_req(struct io_kiocb *req,
> > > > + struct io_kiocb *lead)
> > > > +{
> > > > + WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
> > > > +
> > > > + if (WARN_ON_ONCE(lead->grp_refs <= 0))
> > > > + return false;
> > > > +
> > > > + /*
> > > > + * Set linked leader as failed if any member is failed, so
> > > > + * the remained link chain can be terminated
> > > > + */
> > > > + if (unlikely((req->flags & REQ_F_FAIL) &&
> > > > + ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
> > > > + req_set_fail(lead);
> > > > + return !--lead->grp_refs;
> > > > +}
> > > > +
> > > > +/* Complete group request and collect completed leader for freeing */
> > > > +static inline void io_complete_group_req(struct io_kiocb *req,
> > > > + struct io_wq_work_list *grp_list)
> > > > +{
> > > > + struct io_kiocb *lead = get_group_leader(req);
> > > > +
> > > > + if (__io_complete_group_req(req, lead)) {
> > > > + req->flags &= ~REQ_F_SQE_GROUP;
> > > > + lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
> > > > + if (!(lead->flags & REQ_F_CQE_SKIP))
> > > > + io_req_commit_cqe(lead, lead->ctx->lockless_cq);
> > > > +
> > > > + if (req != lead) {
> > > > + /*
> > > > + * Add leader to free list if it isn't there
> > > > + * otherwise clearing group flag for freeing it
> > > > + * in current batch
> > > > + */
> > > > + if (!(lead->flags & REQ_F_SQE_GROUP))
> > > > + wq_list_add_tail(&lead->comp_list, grp_list);
> > > > + else
> > > > + lead->flags &= ~REQ_F_SQE_GROUP;
> > > > + }
> > > > + } else if (req != lead) {
> > > > + req->flags &= ~REQ_F_SQE_GROUP;
> > > > + } else {
> > > > + /*
> > > > + * Leader's group flag clearing is delayed until it is
> > > > + * removed from free list
> > > > + */
> > > > + }
> > > > +}
> > > > +
> > > > static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> > > > {
> > > > struct io_ring_ctx *ctx = req->ctx;
> > > > @@ -1427,6 +1545,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
> > > > comp_list);
> > > > if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
> > > > + /*
> > > > + * Group leader may be removed twice, don't free it
> > > > + * if group flag isn't cleared, when some members
> > > > + * aren't completed yet
> > > > + */
> > > > + if (req->flags & REQ_F_SQE_GROUP) {
> > > > + node = req->comp_list.next;
> > > > + req->flags &= ~REQ_F_SQE_GROUP;
> > > > + continue;
> > > > + }
> > > > +
> > > > if (req->flags & REQ_F_REFCOUNT) {
> > > > node = req->comp_list.next;
> > > > if (!req_ref_put_and_test(req))
> > > > @@ -1459,6 +1588,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > > > __must_hold(&ctx->uring_lock)
> > > > {
> > > > struct io_submit_state *state = &ctx->submit_state;
> > > > + struct io_wq_work_list grp_list = {NULL};
> > > > struct io_wq_work_node *node;
> > > > __io_cq_lock(ctx);
> > > > @@ -1468,9 +1598,15 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > > > if (!(req->flags & REQ_F_CQE_SKIP))
> > > > io_req_commit_cqe(req, ctx->lockless_cq);
> > > > +
> > > > + if (req->flags & REQ_F_SQE_GROUP)
> > >
> > > Same note about hot path
> > >
> > > > + io_complete_group_req(req, &grp_list);
> > > > }
> > > > __io_cq_unlock_post(ctx);
> > > > + if (!wq_list_empty(&grp_list))
> > > > + __wq_list_splice(&grp_list, state->compl_reqs.first);
> > >
> > > What's the point of splicing it here instead of doing all
> > > that under REQ_F_SQE_GROUP above?
> >
> > As mentioned, group leader can't be completed until all members are
> > done, so any leaders in the current list have to be moved to this
> > local list for deferred completion. That should be the only tricky
> > part of the whole sqe group implementation.
> >
> > >
> > > > +
> > > > if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
> > > > io_free_batch_list(ctx, state->compl_reqs.first);
> > > > INIT_WQ_LIST(&state->compl_reqs);
> ...
> > > > @@ -1863,6 +2012,8 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > > }
> > > > }
> > > > + if (need_queue_group_members(req))
> > > > + io_queue_group_members(req, true);
> > > > do {
> > > > ret = io_issue_sqe(req, issue_flags);
> > > > if (ret != -EAGAIN)
> > > > @@ -1977,6 +2128,9 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> > > > */
> > > > if (unlikely(ret))
> > > > io_queue_async(req, ret);
> > > > +
> > > > + if (need_queue_group_members(req))
> > > > + io_queue_group_members(req, false);
> > >
> > > Request ownership is considered to be handed further at this
> > > point and requests should not be touched. Only ret==0 from
> > > io_issue_sqe it's still ours, but again it's handed somewhere
> > > by io_queue_async().
> >
> > Yes, you are right.
> >
> > And it has been fixed in my local tree:
> >
> > @@ -2154,8 +2154,7 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> > */
> > if (unlikely(ret))
> > io_queue_async(req, ret);
> > -
> > - if (need_queue_group_members(req))
> > + else if (need_queue_group_members(req))
> > io_queue_group_members(req, false);
> > }
>
> In the else branch you don't own the request anymore
> and shouldn't be poking into it.
In theory, yes, but currently no request is freed before io_queue_sqe()
returns, so this still works; it needs to be commented carefully.
>
> It looks like you're trying to do io_queue_group_members()
> when previously the request would get completed. It's not
It is only true for REQ_F_SQE_GROUP_DEP, and there isn't such
dependency for plain SQE_GROUP.
> the right place, and apart from whack'a'moled
> io_wq_submit_work() there is also io_poll_issue() missed.
>
> Seems __io_submit_flush_completions() / io_free_batch_list()
> would be more appropriate, and you already have a chunk with
> GROUP check in there handling the leader appearing in there
> twice.
As mentioned, we need to queue members together with the leader
if there is no dependency among them.
>
>
> > > > }
> > > > static void io_queue_sqe_fallback(struct io_kiocb *req)
> ...
> > > > @@ -2232,7 +2443,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > > const struct io_uring_sqe *sqe)
> > > > __must_hold(&ctx->uring_lock)
> > > > {
> > > > - struct io_submit_link *link = &ctx->submit_state.link;
> > > > + struct io_submit_state *state = &ctx->submit_state;
> > > > int ret;
> > > > ret = io_init_req(ctx, req, sqe);
> > > > @@ -2241,9 +2452,17 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > > trace_io_uring_submit_req(req);
> > > > - if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
> > > > - REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
> > > > - req = io_link_sqe(link, req);
> > > > + if (unlikely(state->group.head ||
> > >
> > > A note rather to myself and for the future, all these checks
> > > including links and groups can be folded under one common if.
> >
> > Sorry, I may not get the idea, can you provide one example?
>
> To be clear, not suggesting you do it.
>
> Simplifying:
>
> init_req() {
> if (req->flags & GROUP|LINK) {
> ctx->assembling;
> }
> }
>
> io_submit_sqe() {
> init_req();
>
> if (ctx->assembling) {
> check_groups/links();
> if (done);
> ctx->assembling = false;
> }
> }
OK, I can work toward this approach; it just replaces the checks on
group.head/link.head and the link/group flags with ->assembling, at the
cost of setting ctx->assembling.
Thanks,
Ming
* [PATCH V3 6/9] io_uring: support sqe group with members depending on leader
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (4 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 5/9] io_uring: support SQE group Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-05-11 0:12 ` [PATCH V3 7/9] io_uring: support providing sqe group buffer Ming Lei
` (3 subsequent siblings)
9 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Generic sqe group provides a flexible way to support N:M dependencies
between groups.
However, some resources, such as kernel buffers, can't cross OPs,
otherwise the buffer may easily be leaked in case of any OP failure or
an application panic.
Add flag REQ_F_SQE_GROUP_DEP to allow members to depend on the group leader,
so that group members won't be queued until the leader request is completed,
while the leader's CQE is still committed after all members' CQEs are posted.
This way, the kernel resource lifetime can be aligned with the group leader
or the whole group; one typical use case is supporting zero copy for device
internal buffers.
This use case may not be generic enough, so set it only for the specific OPs
which can serve as group leaders; meantime we have run out of sqe flags.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 3 +++
io_uring/io_uring.c | 16 +++++++++++++++-
io_uring/io_uring.h | 5 ++++-
3 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 62311b0f0e0b..5cbc9d3346a7 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -477,6 +477,7 @@ enum {
REQ_F_BL_NO_RECYCLE_BIT,
REQ_F_BUFFERS_COMMIT_BIT,
REQ_F_SQE_GROUP_LEADER_BIT,
+ REQ_F_SQE_GROUP_DEP_BIT,
/* not a real bit, just to check we're not overflowing the space */
__REQ_F_LAST_BIT,
@@ -561,6 +562,8 @@ enum {
REQ_F_BUFFERS_COMMIT = IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
/* sqe group lead */
REQ_F_SQE_GROUP_LEADER = IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
+ /* sqe group with members depending on leader */
+ REQ_F_SQE_GROUP_DEP = IO_REQ_FLAG(REQ_F_SQE_GROUP_DEP_BIT),
};
typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b87c5452de43..5d94629c01b8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -936,7 +936,14 @@ static __always_inline void io_req_commit_cqe(struct io_kiocb *req,
static inline bool need_queue_group_members(struct io_kiocb *req)
{
- return req_is_group_leader(req) && req->grp_link;
+ if (likely(!(req->flags & REQ_F_SQE_GROUP)))
+ return false;
+
+ if (!(req->flags & REQ_F_SQE_GROUP_LEADER) ||
+ (req->flags & REQ_F_SQE_GROUP_DEP))
+ return false;
+
+ return !!req->grp_link;
}
/* Can only be called after this request is issued */
@@ -983,6 +990,9 @@ void io_queue_group_members(struct io_kiocb *req, bool async)
if (unlikely(member->flags & REQ_F_FAIL)) {
io_req_task_queue_fail(member, member->cqe.res);
+ } else if (unlikely((req->flags & REQ_F_FAIL) &&
+ (req->flags & REQ_F_SQE_GROUP_DEP))) {
+ io_req_task_queue_fail(member, -ECANCELED);
} else if (member->flags & REQ_F_FORCE_ASYNC) {
io_req_task_queue(member);
} else {
@@ -1065,6 +1075,10 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
return;
}
+ /* queue members which may depend on leader */
+ if (req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP_DEP))
+ io_queue_group_members(req, true);
+
io_cq_lock(ctx);
if (!(req->flags & REQ_F_CQE_SKIP))
io_req_commit_cqe(req, false);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index b11db3bdd8d8..f593ff8b2deb 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -368,7 +368,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
wq_list_add_tail(&req->comp_list, &state->compl_reqs);
- /* members may not be issued when leader is completed */
+ /*
+ * Members may not be issued when leader is completed, or members
+ * depend on leader in case of REQ_F_SQE_GROUP_DEP
+ */
if (unlikely(req_is_group_leader(req) && req->grp_link))
io_queue_group_members(req, false);
}
--
2.42.0
* [PATCH V3 7/9] io_uring: support providing sqe group buffer
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (5 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 6/9] io_uring: support sqe group with members depending on leader Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-06-10 2:00 ` Pavel Begunkov
2024-05-11 0:12 ` [PATCH V3 8/9] io_uring/uring_cmd: support provide group kernel buffer Ming Lei
` (2 subsequent siblings)
9 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
SQE group with REQ_F_SQE_GROUP_DEP introduces a new mechanism to share a
resource among one group of requests, so that all member requests can
efficiently consume the resource provided by the group lead in parallel.
This patch uses the added sqe group feature REQ_F_SQE_GROUP_DEP to share a
kernel buffer in an sqe group:
- the group lead provides a kernel buffer to member requests
- member requests use the provided buffer for FS or network IO, or more
operations in future
- the kernel buffer is returned after member requests have used it up
This looks a bit similar to the kernel's pipe/splice, but there are some
important differences:
- splice transfers data between two FDs via a pipe, and fd_out can only
read data from the pipe; this feature lends the buffer from the group lead
to members, so a member request can also write data to the buffer if the
provided buffer is writable.
- splice implements data transfer by moving pages between a subsystem and
the pipe, which means page ownership is transferred; that is one of the
most complicated aspects of splice. This patch supports scenarios in which
the buffer can't be transferred: the buffer is only lent to member
requests and is returned after they have consumed it, so the buffer
lifetime is simplified a lot. In particular, the buffer is guaranteed to
be returned.
- splice basically can't run asynchronously
It can help implement generic zero copy between a device and related
operations, such as ublk, fuse, vdpa, and even network receive.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 33 +++++++++++++++++++
io_uring/io_uring.c | 10 +++++-
io_uring/io_uring.h | 5 +++
io_uring/kbuf.c | 60 ++++++++++++++++++++++++++++++++++
io_uring/kbuf.h | 13 ++++++++
io_uring/net.c | 31 +++++++++++++++++-
io_uring/opdef.c | 5 +++
io_uring/opdef.h | 2 ++
io_uring/rw.c | 20 +++++++++++-
9 files changed, 176 insertions(+), 3 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 5cbc9d3346a7..e414c3544f72 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -6,6 +6,7 @@
#include <linux/task_work.h>
#include <linux/bitmap.h>
#include <linux/llist.h>
+#include <linux/bvec.h>
#include <uapi/linux/io_uring.h>
enum {
@@ -39,6 +40,26 @@ enum io_uring_cmd_flags {
IO_URING_F_COMPAT = (1 << 12),
};
+struct io_uring_kernel_buf;
+typedef void (io_uring_buf_giveback_t) (const struct io_uring_kernel_buf *);
+
+/* buffer provided from kernel */
+struct io_uring_kernel_buf {
+ unsigned long len;
+ unsigned short nr_bvecs;
+ unsigned char dir; /* ITER_SOURCE or ITER_DEST */
+
+ /* offset in the 1st bvec */
+ unsigned int offset;
+ const struct bio_vec *bvec;
+
+ /* called when we are done with this buffer */
+ io_uring_buf_giveback_t *grp_kbuf_ack;
+
+ /* private field, user don't touch it */
+ struct bio_vec __bvec[];
+};
+
struct io_wq_work_node {
struct io_wq_work_node *next;
};
@@ -478,6 +499,7 @@ enum {
REQ_F_BUFFERS_COMMIT_BIT,
REQ_F_SQE_GROUP_LEADER_BIT,
REQ_F_SQE_GROUP_DEP_BIT,
+ REQ_F_GROUP_KBUF_BIT,
/* not a real bit, just to check we're not overflowing the space */
__REQ_F_LAST_BIT,
@@ -564,6 +586,8 @@ enum {
REQ_F_SQE_GROUP_LEADER = IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
/* sqe group with members depending on leader */
REQ_F_SQE_GROUP_DEP = IO_REQ_FLAG(REQ_F_SQE_GROUP_DEP_BIT),
+ /* group lead provides kbuf for members, set for both lead and member */
+ REQ_F_GROUP_KBUF = IO_REQ_FLAG(REQ_F_GROUP_KBUF_BIT),
};
typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -647,6 +671,15 @@ struct io_kiocb {
* REQ_F_BUFFER_RING is set.
*/
struct io_buffer_list *buf_list;
+
+ /*
+ * store kernel buffer provided by sqe group lead, valid
+ * IFF REQ_F_GROUP_KBUF
+ *
+ * The buffer meta is immutable since it is shared by
+ * all member requests
+ */
+ const struct io_uring_kernel_buf *grp_kbuf;
};
union {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5d94629c01b8..7bd762846b91 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -114,7 +114,7 @@
#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
- REQ_F_ASYNC_DATA)
+ REQ_F_ASYNC_DATA | REQ_F_GROUP_KBUF)
#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
IO_REQ_CLEAN_FLAGS)
@@ -380,6 +380,11 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
static void io_clean_op(struct io_kiocb *req)
{
+ /* GROUP_KBUF is only available for REQ_F_SQE_GROUP_DEP */
+ if ((req->flags & (REQ_F_GROUP_KBUF | REQ_F_SQE_GROUP_DEP)) ==
+ (REQ_F_GROUP_KBUF | REQ_F_SQE_GROUP_DEP))
+ io_group_kbuf_drop(req);
+
if (req->flags & REQ_F_BUFFER_SELECTED) {
spin_lock(&req->ctx->completion_lock);
io_kbuf_drop(req);
@@ -982,9 +987,12 @@ void io_queue_group_members(struct io_kiocb *req, bool async)
return;
while (member) {
+ const struct io_issue_def *def = &io_issue_defs[member->opcode];
struct io_kiocb *next = member->grp_link;
member->grp_link = req;
+ if ((req->flags & REQ_F_GROUP_KBUF) && def->accept_group_kbuf)
+ member->flags |= REQ_F_GROUP_KBUF;
if (async)
member->flags |= REQ_F_FORCE_ASYNC;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index f593ff8b2deb..3569f2c8b12e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -354,6 +354,11 @@ static inline bool req_is_group_member(struct io_kiocb *req)
return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
}
+static inline bool req_support_group_dep(struct io_kiocb *req)
+{
+ return req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP_DEP);
+}
+
/*
* Don't complete immediately but use deferred completion infrastructure.
* Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index d2945c9c812b..4293bed374b7 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -823,3 +823,63 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
io_put_bl(ctx, bl);
return ret;
}
+
+int io_provide_group_kbuf(struct io_kiocb *req,
+ const struct io_uring_kernel_buf *grp_kbuf)
+{
+ if (unlikely(!req_support_group_dep(req)))
+ return -EINVAL;
+
+ /*
+ * Borrow this buffer from one kernel subsystem, and return them
+ * by calling `grp_kbuf_ack` when the group lead is freed.
+ *
+ * Not like pipe/splice, this kernel buffer is always owned by the
+ * provider, and has to be returned back.
+ */
+ req->grp_kbuf = grp_kbuf;
+ req->flags |= REQ_F_GROUP_KBUF;
+
+ return 0;
+}
+
+int io_import_group_kbuf(struct io_kiocb *req, unsigned long buf_off,
+ unsigned int len, int dir, struct iov_iter *iter)
+{
+ struct io_kiocb *lead = req->grp_link;
+ const struct io_uring_kernel_buf *kbuf;
+ unsigned long offset;
+
+ WARN_ON_ONCE(!(req->flags & REQ_F_GROUP_KBUF));
+
+ if (!req_is_group_member(req))
+ return -EINVAL;
+
+ if (!lead || !req_support_group_dep(lead) || !lead->grp_kbuf)
+ return -EINVAL;
+
+ /* lead->grp_kbuf is immutable */
+ kbuf = lead->grp_kbuf;
+ offset = kbuf->offset;
+
+ if (!kbuf->bvec)
+ return -EINVAL;
+
+ if (dir != kbuf->dir)
+ return -EINVAL;
+
+ if (unlikely(buf_off > kbuf->len))
+ return -EFAULT;
+
+ if (unlikely(len > kbuf->len - buf_off))
+ return -EFAULT;
+
+ /* don't use io_import_fixed which doesn't support multipage bvec */
+ offset += buf_off;
+ iov_iter_bvec(iter, dir, kbuf->bvec, kbuf->nr_bvecs, offset + len);
+
+ if (offset)
+ iov_iter_advance(iter, offset);
+
+ return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index b90aca3a57fa..2e1b7f91efb6 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -82,6 +82,11 @@ struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
unsigned long bgid);
int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
+int io_provide_group_kbuf(struct io_kiocb *req,
+ const struct io_uring_kernel_buf *grp_kbuf);
+int io_import_group_kbuf(struct io_kiocb *req, unsigned long buf_off,
+ unsigned int len, int dir, struct iov_iter *iter);
+
static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
{
/*
@@ -180,4 +185,12 @@ static inline unsigned int io_put_kbufs(struct io_kiocb *req, int nbufs,
{
return __io_put_kbufs(req, nbufs, issue_flags);
}
+
+static inline void io_group_kbuf_drop(struct io_kiocb *req)
+{
+ const struct io_uring_kernel_buf *gbuf = req->grp_kbuf;
+
+ if (gbuf && gbuf->grp_kbuf_ack)
+ gbuf->grp_kbuf_ack(gbuf);
+}
#endif
diff --git a/io_uring/net.c b/io_uring/net.c
index 070dea9a4eda..83fd5879082e 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -79,6 +79,13 @@ struct io_sr_msg {
*/
#define MULTISHOT_MAX_RETRY 32
+#define user_ptr_to_u64(x) ( \
+{ \
+ typecheck(void __user *, (x)); \
+ (u64)(unsigned long)(x); \
+} \
+)
+
int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown);
@@ -365,7 +372,7 @@ static int io_send_setup(struct io_kiocb *req)
kmsg->msg.msg_name = &kmsg->addr;
kmsg->msg.msg_namelen = sr->addr_len;
}
- if (!io_do_buffer_select(req)) {
+ if (!io_do_buffer_select(req) && !(req->flags & REQ_F_GROUP_KBUF)) {
ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
&kmsg->msg.msg_iter);
if (unlikely(ret < 0))
@@ -585,6 +592,15 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
if (issue_flags & IO_URING_F_NONBLOCK)
flags |= MSG_DONTWAIT;
+ if (req->flags & REQ_F_GROUP_KBUF) {
+ ret = io_import_group_kbuf(req,
+ user_ptr_to_u64(sr->buf),
+ sr->len, ITER_SOURCE,
+ &kmsg->msg.msg_iter);
+ if (unlikely(ret))
+ return ret;
+ }
+
retry_bundle:
if (io_do_buffer_select(req)) {
struct buf_sel_arg arg = {
@@ -1132,6 +1148,11 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(ret))
goto out_free;
sr->buf = NULL;
+ } else if (req->flags & REQ_F_GROUP_KBUF) {
+ ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
+ sr->len, ITER_DEST, &kmsg->msg.msg_iter);
+ if (unlikely(ret))
+ goto out_free;
}
kmsg->msg.msg_inq = -1;
@@ -1334,6 +1355,14 @@ static int io_send_zc_import(struct io_kiocb *req, struct io_async_msghdr *kmsg)
if (unlikely(ret))
return ret;
kmsg->msg.sg_from_iter = io_sg_from_iter;
+ } else if (req->flags & REQ_F_GROUP_KBUF) {
+ struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+
+ ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
+ sr->len, ITER_SOURCE, &kmsg->msg.msg_iter);
+ if (unlikely(ret))
+ return ret;
+ kmsg->msg.sg_from_iter = io_sg_from_iter;
} else {
ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);
if (unlikely(ret))
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 2de5cca9504e..92b657a063a0 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -246,6 +246,7 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .accept_group_kbuf = 1,
.async_size = sizeof(struct io_async_rw),
.prep = io_prep_read,
.issue = io_read,
@@ -260,6 +261,7 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .accept_group_kbuf = 1,
.async_size = sizeof(struct io_async_rw),
.prep = io_prep_write,
.issue = io_write,
@@ -282,6 +284,7 @@ const struct io_issue_def io_issue_defs[] = {
.audit_skip = 1,
.ioprio = 1,
.buffer_select = 1,
+ .accept_group_kbuf = 1,
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
.prep = io_sendmsg_prep,
@@ -297,6 +300,7 @@ const struct io_issue_def io_issue_defs[] = {
.buffer_select = 1,
.audit_skip = 1,
.ioprio = 1,
+ .accept_group_kbuf = 1,
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
.prep = io_recvmsg_prep,
@@ -424,6 +428,7 @@ const struct io_issue_def io_issue_defs[] = {
.pollout = 1,
.audit_skip = 1,
.ioprio = 1,
+ .accept_group_kbuf = 1,
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
.prep = io_send_zc_prep,
diff --git a/io_uring/opdef.h b/io_uring/opdef.h
index 7ee6f5aa90aa..a53970655c82 100644
--- a/io_uring/opdef.h
+++ b/io_uring/opdef.h
@@ -29,6 +29,8 @@ struct io_issue_def {
unsigned iopoll_queue : 1;
/* vectored opcode, set if 1) vectored, and 2) handler needs to know */
unsigned vectored : 1;
+ /* opcodes which accept provided group kbuf */
+ unsigned accept_group_kbuf : 1;
/* size of async data needed, if any */
unsigned short async_size;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index a6bf2ea8db91..4ae3ab9f2160 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -235,7 +235,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
if (io_rw_alloc_async(req))
return -ENOMEM;
- if (!do_import || io_do_buffer_select(req))
+ if (!do_import || io_do_buffer_select(req) ||
+ (req->flags & REQ_F_GROUP_KBUF))
return 0;
rw = req->async_data;
@@ -620,11 +621,16 @@ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
*/
static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
{
+ struct io_kiocb *req = cmd_to_io_kiocb(rw);
struct kiocb *kiocb = &rw->kiocb;
struct file *file = kiocb->ki_filp;
ssize_t ret = 0;
loff_t *ppos;
+ /* group buffer is kernel buffer and doesn't have userspace addr */
+ if (req->flags & REQ_F_GROUP_KBUF)
+ return -EOPNOTSUPP;
+
/*
* Don't support polled IO through this interface, and we can't
* support non-blocking either. For the latter, this just causes
@@ -830,6 +836,11 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
ret = io_import_iovec(ITER_DEST, req, io, issue_flags);
if (unlikely(ret < 0))
return ret;
+ } else if (req->flags & REQ_F_GROUP_KBUF) {
+ ret = io_import_group_kbuf(req, rw->addr, rw->len, ITER_DEST,
+ &io->iter);
+ if (unlikely(ret))
+ return ret;
}
ret = io_rw_init_file(req, FMODE_READ);
@@ -1012,6 +1023,13 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
ssize_t ret, ret2;
loff_t *ppos;
+ if (req->flags & REQ_F_GROUP_KBUF) {
+ ret = io_import_group_kbuf(req, rw->addr, rw->len, ITER_SOURCE,
+ &io->iter);
+ if (unlikely(ret))
+ return ret;
+ }
+
ret = io_rw_init_file(req, FMODE_WRITE);
if (unlikely(ret))
return ret;
--
2.42.0
^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH V3 7/9] io_uring: support providing sqe group buffer
2024-05-11 0:12 ` [PATCH V3 7/9] io_uring: support providing sqe group buffer Ming Lei
@ 2024-06-10 2:00 ` Pavel Begunkov
2024-06-12 0:22 ` Ming Lei
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-10 2:00 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf
On 5/11/24 01:12, Ming Lei wrote:
> SQE group with REQ_F_SQE_GROUP_DEP introduces one new mechanism to share
> resource among one group of requests, and all member requests can consume
> the resource provided by group lead efficiently in parallel.
>
> This patch uses the added sqe group feature REQ_F_SQE_GROUP_DEP to share
> kernel buffer in sqe group:
>
> - the group lead provides kernel buffer to member requests
>
> - member requests use the provided buffer to do FS or network IO, or more
> operations in future
>
> - this kernel buffer is returned back after member requests use it up
>
> This way looks a bit similar with kernel's pipe/splice, but there are some
> important differences:
>
> - splice is for transferring data between two FDs via pipe, and fd_out can
> only read data from pipe; this feature can borrow buffer from group lead to
> members, so member request can write data to this buffer if the provided
> buffer is allowed to write to.
>
> - splice implements data transfer by moving pages between subsystem and
> pipe, that means page ownership is transferred, and this way is one of the
> most complicated thing of splice; this patch supports scenarios in which
> the buffer can't be transferred, and buffer is only borrowed to member
> requests, and is returned back after member requests consume the provided
> buffer, so buffer lifetime is simplified a lot. Especially the buffer is
> guaranteed to be returned back.
>
> - splice can't run in async way basically
>
> It can help to implement generic zero copy between device and related
> operations, such as ublk, fuse, vdpa, even network receive or whatever.
>
> Signed-off-by: Ming Lei <[email protected]>
> ---
> include/linux/io_uring_types.h | 33 +++++++++++++++++++
> io_uring/io_uring.c | 10 +++++-
> io_uring/io_uring.h | 5 +++
> io_uring/kbuf.c | 60 ++++++++++++++++++++++++++++++++++
> io_uring/kbuf.h | 13 ++++++++
> io_uring/net.c | 31 +++++++++++++++++-
> io_uring/opdef.c | 5 +++
> io_uring/opdef.h | 2 ++
> io_uring/rw.c | 20 +++++++++++-
> 9 files changed, 176 insertions(+), 3 deletions(-)
>
...
> diff --git a/io_uring/net.c b/io_uring/net.c
> index 070dea9a4eda..83fd5879082e 100644
> --- a/io_uring/net.c
> +++ b/io_uring/net.c
> @@ -79,6 +79,13 @@ struct io_sr_msg {
...
> retry_bundle:
> if (io_do_buffer_select(req)) {
> struct buf_sel_arg arg = {
> @@ -1132,6 +1148,11 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
> if (unlikely(ret))
> goto out_free;
> sr->buf = NULL;
> + } else if (req->flags & REQ_F_GROUP_KBUF) {
> + ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
> + sr->len, ITER_DEST, &kmsg->msg.msg_iter);
> + if (unlikely(ret))
> + goto out_free;
> }
>
> kmsg->msg.msg_inq = -1;
> @@ -1334,6 +1355,14 @@ static int io_send_zc_import(struct io_kiocb *req, struct io_async_msghdr *kmsg)
> if (unlikely(ret))
> return ret;
> kmsg->msg.sg_from_iter = io_sg_from_iter;
> + } else if (req->flags & REQ_F_GROUP_KBUF) {
> + struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
> +
> + ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
> + sr->len, ITER_SOURCE, &kmsg->msg.msg_iter);
> + if (unlikely(ret))
> + return ret;
> + kmsg->msg.sg_from_iter = io_sg_from_iter;
Not looking here too deeply, I'm pretty sure it's buggy:
the buffer can only be reused once the notification
CQE completes, and there is nothing here that accounts for that.
> } else {
> ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);
> if (unlikely(ret))
--
Pavel Begunkov
* Re: [PATCH V3 7/9] io_uring: support providing sqe group buffer
2024-06-10 2:00 ` Pavel Begunkov
@ 2024-06-12 0:22 ` Ming Lei
0 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-06-12 0:22 UTC (permalink / raw)
To: Pavel Begunkov; +Cc: Jens Axboe, io-uring, linux-block, Kevin Wolf
On Mon, Jun 10, 2024 at 03:00:23AM +0100, Pavel Begunkov wrote:
> On 5/11/24 01:12, Ming Lei wrote:
> > SQE group with REQ_F_SQE_GROUP_DEP introduces one new mechanism to share
> > resource among one group of requests, and all member requests can consume
> > the resource provided by group lead efficiently in parallel.
> >
> > This patch uses the added sqe group feature REQ_F_SQE_GROUP_DEP to share
> > kernel buffer in sqe group:
> >
> > - the group lead provides kernel buffer to member requests
> >
> > - member requests use the provided buffer to do FS or network IO, or more
> > operations in future
> >
> > - this kernel buffer is returned back after member requests use it up
> >
> > This way looks a bit similar with kernel's pipe/splice, but there are some
> > important differences:
> >
> > - splice is for transferring data between two FDs via pipe, and fd_out can
> > only read data from pipe; this feature can borrow buffer from group lead to
> > members, so member request can write data to this buffer if the provided
> > buffer is allowed to write to.
> >
> > - splice implements data transfer by moving pages between subsystem and
> > pipe, that means page ownership is transferred, and this way is one of the
> > most complicated thing of splice; this patch supports scenarios in which
> > the buffer can't be transferred, and buffer is only borrowed to member
> > requests, and is returned back after member requests consume the provided
> > buffer, so buffer lifetime is simplified a lot. Especially the buffer is
> > guaranteed to be returned back.
> >
> > - splice can't run in async way basically
> >
> > It can help to implement generic zero copy between device and related
> > operations, such as ublk, fuse, vdpa, even network receive or whatever.
> >
> > Signed-off-by: Ming Lei <[email protected]>
> > ---
> > include/linux/io_uring_types.h | 33 +++++++++++++++++++
> > io_uring/io_uring.c | 10 +++++-
> > io_uring/io_uring.h | 5 +++
> > io_uring/kbuf.c | 60 ++++++++++++++++++++++++++++++++++
> > io_uring/kbuf.h | 13 ++++++++
> > io_uring/net.c | 31 +++++++++++++++++-
> > io_uring/opdef.c | 5 +++
> > io_uring/opdef.h | 2 ++
> > io_uring/rw.c | 20 +++++++++++-
> > 9 files changed, 176 insertions(+), 3 deletions(-)
> >
> ...
> > diff --git a/io_uring/net.c b/io_uring/net.c
> > index 070dea9a4eda..83fd5879082e 100644
> > --- a/io_uring/net.c
> > +++ b/io_uring/net.c
> > @@ -79,6 +79,13 @@ struct io_sr_msg {
> ...
> > retry_bundle:
> > if (io_do_buffer_select(req)) {
> > struct buf_sel_arg arg = {
> > @@ -1132,6 +1148,11 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
> > if (unlikely(ret))
> > goto out_free;
> > sr->buf = NULL;
> > + } else if (req->flags & REQ_F_GROUP_KBUF) {
> > + ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
> > + sr->len, ITER_DEST, &kmsg->msg.msg_iter);
> > + if (unlikely(ret))
> > + goto out_free;
> > }
> > kmsg->msg.msg_inq = -1;
> > @@ -1334,6 +1355,14 @@ static int io_send_zc_import(struct io_kiocb *req, struct io_async_msghdr *kmsg)
> > if (unlikely(ret))
> > return ret;
> > kmsg->msg.sg_from_iter = io_sg_from_iter;
> > + } else if (req->flags & REQ_F_GROUP_KBUF) {
> > + struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
> > +
> > + ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
> > + sr->len, ITER_SOURCE, &kmsg->msg.msg_iter);
> > + if (unlikely(ret))
> > + return ret;
> > + kmsg->msg.sg_from_iter = io_sg_from_iter;
>
> Not looking here too deeply I'm pretty sure it's buggy.
> The buffer can only be reused once the notification
> CQE completes, and there is nothing in regards to it.
OK. It isn't triggered in ublk-nbd because the buffer is still valid
until the peer reply is received, by which time the notification is
definitely ready.
I will remove send zc support from the enablement series; it can be
added in the future without much difficulty.
Thanks,
Ming
* [PATCH V3 8/9] io_uring/uring_cmd: support provide group kernel buffer
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (6 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 7/9] io_uring: support providing sqe group buffer Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-05-11 0:12 ` [PATCH V3 9/9] ublk: support provide io buffer Ming Lei
2024-06-03 0:05 ` [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
9 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Allow a uring command to act as group leader for providing a kernel
buffer, which enables generic device zero copy over a device buffer.
The following patch will use this mechanism to support zero copy for ublk.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring/cmd.h | 7 +++++++
include/uapi/linux/io_uring.h | 7 ++++++-
io_uring/uring_cmd.c | 28 ++++++++++++++++++++++++++++
3 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 447fbfd32215..fde3a2ec7d9a 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -48,6 +48,8 @@ void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags);
+int io_uring_cmd_provide_kbuf(struct io_uring_cmd *ioucmd,
+ const struct io_uring_kernel_buf *grp_kbuf);
#else
static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
@@ -67,6 +69,11 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
}
+static inline int io_uring_cmd_provide_kbuf(struct io_uring_cmd *ioucmd,
+ const struct io_uring_kernel_buf *grp_kbuf)
+{
+ return -EOPNOTSUPP;
+}
#endif
/*
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 2b99d9d0b93e..7c510937e53e 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -269,9 +269,14 @@ enum io_uring_op {
* sqe->uring_cmd_flags top 8bits aren't available for userspace
* IORING_URING_CMD_FIXED use registered buffer; pass this flag
* along with setting sqe->buf_index.
+ * IORING_PROVIDE_GROUP_KBUF this command provides group kernel buffer
+ * for member requests which can retrieve
+ * any sub-buffer with offset(sqe->addr) and
+ * len(sqe->len)
*/
#define IORING_URING_CMD_FIXED (1U << 0)
-#define IORING_URING_CMD_MASK IORING_URING_CMD_FIXED
+#define IORING_PROVIDE_GROUP_KBUF (1U << 1)
+#define IORING_URING_CMD_MASK (IORING_URING_CMD_FIXED | IORING_PROVIDE_GROUP_KBUF)
/*
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 21ac5fb2d5f0..14744eac9158 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -15,6 +15,7 @@
#include "alloc_cache.h"
#include "rsrc.h"
#include "uring_cmd.h"
+#include "kbuf.h"
static struct uring_cache *io_uring_async_get(struct io_kiocb *req)
{
@@ -175,6 +176,26 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
}
EXPORT_SYMBOL_GPL(io_uring_cmd_done);
+/*
+ * Provide kernel buffer for sqe group members to consume, and the caller
+ * has to guarantee that the provided buffer and the callback are valid
+ * until the callback is called.
+ */
+int io_uring_cmd_provide_kbuf(struct io_uring_cmd *ioucmd,
+ const struct io_uring_kernel_buf *grp_kbuf)
+{
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+ if (unlikely(!(ioucmd->flags & IORING_PROVIDE_GROUP_KBUF)))
+ return -EINVAL;
+
+ if (unlikely(!req_support_group_dep(req)))
+ return -EINVAL;
+
+ return io_provide_group_kbuf(req, grp_kbuf);
+}
+EXPORT_SYMBOL_GPL(io_uring_cmd_provide_kbuf);
+
static int io_uring_cmd_prep_setup(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
@@ -207,6 +228,13 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
if (ioucmd->flags & ~IORING_URING_CMD_MASK)
return -EINVAL;
+ if (ioucmd->flags & IORING_PROVIDE_GROUP_KBUF) {
+ /* LEADER flag isn't set yet, so check GROUP only */
+ if (!(req->flags & REQ_F_SQE_GROUP))
+ return -EINVAL;
+ req->flags |= REQ_F_SQE_GROUP_DEP;
+ }
+
if (ioucmd->flags & IORING_URING_CMD_FIXED) {
struct io_ring_ctx *ctx = req->ctx;
u16 index;
--
2.42.0
* [PATCH V3 9/9] ublk: support provide io buffer
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (7 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 8/9] io_uring/uring_cmd: support provide group kernel buffer Ming Lei
@ 2024-05-11 0:12 ` Ming Lei
2024-06-03 0:05 ` [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
9 siblings, 0 replies; 28+ messages in thread
From: Ming Lei @ 2024-05-11 0:12 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Ming Lei
Implement the uring command IORING_PROVIDE_GROUP_KBUF, and provide an
io buffer for userspace to run io_uring operations (FS, network IO) on,
so that ublk zero copy can be supported.
userspace code:
https://github.com/ublk-org/ublksrv/tree/group-provide-buf.v2
git clone https://github.com/ublk-org/ublksrv.git -b group-provide-buf.v2
Both loop and nbd zero copy (io_uring send and send zc) are covered.
The performance improvement is quite obvious with big block sizes; for
example, 'loop --buffered_io' throughput is doubled in the 64KB block
test ("loop/007" vs "loop/009").
Signed-off-by: Ming Lei <[email protected]>
---
drivers/block/ublk_drv.c | 158 ++++++++++++++++++++++++++++++++--
include/uapi/linux/ublk_cmd.h | 7 +-
2 files changed, 154 insertions(+), 11 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index bea3d5cf8a83..ef07b907437d 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -71,6 +71,8 @@ struct ublk_rq_data {
__u64 sector;
__u32 operation;
__u32 nr_zones;
+ bool allocated_bvec;
+ struct io_uring_kernel_buf buf[0];
};
struct ublk_uring_cmd_pdu {
@@ -189,11 +191,15 @@ struct ublk_params_header {
__u32 types;
};
+static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+ struct ublk_queue *ubq, int tag, size_t offset);
static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
static inline unsigned int ublk_req_build_flags(struct request *req);
static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
int tag);
+static void ublk_io_buf_giveback_cb(const struct io_uring_kernel_buf *buf);
+
static inline bool ublk_dev_is_user_copy(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_USER_COPY;
@@ -566,6 +572,11 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
return ublk_support_user_copy(ubq);
}
+static inline bool ublk_support_zc(const struct ublk_queue *ubq)
+{
+ return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
+}
+
static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
struct request *req)
{
@@ -829,6 +840,71 @@ static size_t ublk_copy_user_pages(const struct request *req,
return done;
}
+/*
+ * The built command buffer is immutable, so it is fine to feed it to
+ * concurrent io_uring provide buf commands
+ */
+static int ublk_init_zero_copy_buffer(struct request *req)
+{
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+ struct io_uring_kernel_buf *imu = data->buf;
+ struct req_iterator rq_iter;
+ unsigned int nr_bvecs = 0;
+ struct bio_vec *bvec;
+ unsigned int offset;
+ struct bio_vec bv;
+
+ if (!ublk_rq_has_data(req))
+ goto exit;
+
+ rq_for_each_bvec(bv, req, rq_iter)
+ nr_bvecs++;
+
+ if (!nr_bvecs)
+ goto exit;
+
+ if (req->bio != req->biotail) {
+ int idx = 0;
+
+ bvec = kvmalloc_array(nr_bvecs, sizeof(struct bio_vec),
+ GFP_NOIO);
+ if (!bvec)
+ return -ENOMEM;
+
+ offset = 0;
+ rq_for_each_bvec(bv, req, rq_iter)
+ bvec[idx++] = bv;
+ data->allocated_bvec = true;
+ } else {
+ struct bio *bio = req->bio;
+
+ offset = bio->bi_iter.bi_bvec_done;
+ bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+ }
+ imu->bvec = bvec;
+ imu->nr_bvecs = nr_bvecs;
+ imu->offset = offset;
+ imu->len = blk_rq_bytes(req);
+ imu->dir = req_op(req) == REQ_OP_READ ? ITER_DEST : ITER_SOURCE;
+ imu->grp_kbuf_ack = ublk_io_buf_giveback_cb;
+
+ return 0;
+exit:
+ imu->bvec = NULL;
+ return 0;
+}
+
+static void ublk_deinit_zero_copy_buffer(struct request *req)
+{
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+ struct io_uring_kernel_buf *imu = data->buf;
+
+ if (data->allocated_bvec) {
+ kvfree(imu->bvec);
+ data->allocated_bvec = false;
+ }
+}
+
static inline bool ublk_need_map_req(const struct request *req)
{
return ublk_rq_has_data(req) && req_op(req) == REQ_OP_WRITE;
@@ -840,13 +916,25 @@ static inline bool ublk_need_unmap_req(const struct request *req)
(req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_DRV_IN);
}
-static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
+static int ublk_map_io(const struct ublk_queue *ubq, struct request *req,
struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
- if (ublk_support_user_copy(ubq))
+ if (ublk_support_user_copy(ubq)) {
+ if (ublk_support_zc(ubq)) {
+ int ret = ublk_init_zero_copy_buffer(req);
+
+ /*
+ * The only failure is -ENOMEM when allocating the bvec
+ * array for the provide-buffer command; return zero so
+ * that we can requeue this req.
+ */
+ if (unlikely(ret))
+ return 0;
+ }
return rq_bytes;
+ }
/*
* no zero copy, we delay copy WRITE request data into ublksrv
@@ -864,13 +952,16 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
}
static int ublk_unmap_io(const struct ublk_queue *ubq,
- const struct request *req,
+ struct request *req,
struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
- if (ublk_support_user_copy(ubq))
+ if (ublk_support_user_copy(ubq)) {
+ if (ublk_support_zc(ubq))
+ ublk_deinit_zero_copy_buffer(req);
return rq_bytes;
+ }
if (ublk_need_unmap_req(req)) {
struct iov_iter iter;
@@ -1016,6 +1107,7 @@ static inline void __ublk_complete_rq(struct request *req)
return;
exit:
+ ublk_deinit_zero_copy_buffer(req);
blk_mq_end_request(req, res);
}
@@ -1658,6 +1750,45 @@ static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
io_uring_cmd_mark_cancelable(cmd, issue_flags);
}
+static void ublk_io_buf_giveback_cb(const struct io_uring_kernel_buf *buf)
+{
+ struct ublk_rq_data *data = container_of(buf, struct ublk_rq_data, buf[0]);
+ struct request *req = blk_mq_rq_from_pdu(data);
+ struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+ ublk_put_req_ref(ubq, req);
+}
+
+static int ublk_provide_io_buf(struct io_uring_cmd *cmd,
+ struct ublk_queue *ubq, int tag)
+{
+ struct ublk_device *ub = cmd->file->private_data;
+ struct ublk_rq_data *data;
+ struct request *req;
+
+ if (!ub)
+ return -EPERM;
+
+ req = __ublk_check_and_get_req(ub, ubq, tag, 0);
+ if (!req)
+ return -EINVAL;
+
+ pr_devel("%s: qid %d tag %u request bytes %u\n",
+ __func__, ubq->q_id, tag, blk_rq_bytes(req));
+
+ data = blk_mq_rq_to_pdu(req);
+
+ /*
+ * io_uring guarantees that the callback will be called after
+ * the provided buffer is consumed, and the buffer is automatically
+ * removed before this uring command is freed.
+ *
+ * This request won't be completed unless the callback is called,
+ * so ublk module won't be unloaded too.
+ */
+ return io_uring_cmd_provide_kbuf(cmd, data->buf);
+}
+
static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags,
const struct ublksrv_io_cmd *ub_cmd)
@@ -1674,6 +1805,10 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
__func__, cmd->cmd_op, ub_cmd->q_id, tag,
ub_cmd->result);
+ if ((cmd->flags & IORING_PROVIDE_GROUP_KBUF) &&
+ cmd_op != UBLK_U_IO_PROVIDE_IO_BUF)
+ return -EOPNOTSUPP;
+
if (ub_cmd->q_id >= ub->dev_info.nr_hw_queues)
goto out;
@@ -1709,6 +1844,8 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
ret = -EINVAL;
switch (_IOC_NR(cmd_op)) {
+ case _IOC_NR(UBLK_U_IO_PROVIDE_IO_BUF):
+ return ublk_provide_io_buf(cmd, ubq, tag);
case UBLK_IO_FETCH_REQ:
/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
if (ublk_queue_ready(ubq)) {
@@ -2128,11 +2265,14 @@ static void ublk_align_max_io_size(struct ublk_device *ub)
static int ublk_add_tag_set(struct ublk_device *ub)
{
+ int zc = !!(ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY);
+ struct ublk_rq_data *data;
+
ub->tag_set.ops = &ublk_mq_ops;
ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
ub->tag_set.queue_depth = ub->dev_info.queue_depth;
ub->tag_set.numa_node = NUMA_NO_NODE;
- ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
+ ub->tag_set.cmd_size = struct_size(data, buf, zc);
ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
ub->tag_set.driver_data = ub;
return blk_mq_alloc_tag_set(&ub->tag_set);
@@ -2417,8 +2557,12 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
goto out_free_dev_number;
}
- /* We are not ready to support zero copy */
- ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
+ /* zero copy depends on user copy */
+ if ((ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY) &&
+ !ublk_dev_is_user_copy(ub)) {
+ ret = -EINVAL;
+ goto out_free_dev_number;
+ }
ub->dev_info.nr_hw_queues = min_t(unsigned int,
ub->dev_info.nr_hw_queues, nr_cpu_ids);
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index c8dc5f8ea699..897ace0794c2 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -94,6 +94,8 @@
_IOWR('u', UBLK_IO_COMMIT_AND_FETCH_REQ, struct ublksrv_io_cmd)
#define UBLK_U_IO_NEED_GET_DATA \
_IOWR('u', UBLK_IO_NEED_GET_DATA, struct ublksrv_io_cmd)
+#define UBLK_U_IO_PROVIDE_IO_BUF \
+ _IOWR('u', 0x23, struct ublksrv_io_cmd)
/* only ABORT means that no re-fetch */
#define UBLK_IO_RES_OK 0
@@ -126,10 +128,7 @@
#define UBLKSRV_IO_BUF_TOTAL_BITS (UBLK_QID_OFF + UBLK_QID_BITS)
#define UBLKSRV_IO_BUF_TOTAL_SIZE (1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)
-/*
- * zero copy requires 4k block size, and can remap ublk driver's io
- * request into ublksrv's vm space
- */
+/* io_uring provide kbuf command based zero copy */
#define UBLK_F_SUPPORT_ZERO_COPY (1ULL << 0)
/*
--
2.42.0
* Re: [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf
2024-05-11 0:12 [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
` (8 preceding siblings ...)
2024-05-11 0:12 ` [PATCH V3 9/9] ublk: support provide io buffer Ming Lei
@ 2024-06-03 0:05 ` Ming Lei
2024-06-07 12:32 ` Pavel Begunkov
9 siblings, 1 reply; 28+ messages in thread
From: Ming Lei @ 2024-06-03 0:05 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: linux-block, Pavel Begunkov, Kevin Wolf, Hollin Liu
On Sat, May 11, 2024 at 08:12:03AM +0800, Ming Lei wrote:
> Hello,
>
> The 1st 4 patches are cleanup, and prepare for adding sqe group.
>
> The 5th patch supports generic sqe group which is like link chain, but
> allows each sqe in group to be issued in parallel and the group shares
> same IO_LINK & IO_DRAIN boundary, so N:M dependency can be supported with
> sqe group & io link together. sqe group changes nothing on
> IOSQE_IO_LINK.
>
> The 6th patch supports one variant of sqe group: allow members to depend
> on group leader, so that kernel resource lifetime can be aligned with
> group leader or group, then any kernel resource can be shared in this
> sqe group, and can be used in generic device zero copy.
>
> The 7th & 8th patches supports providing sqe group buffer via the sqe
> group variant.
>
> The 9th patch supports ublk zero copy based on io_uring providing sqe
> group buffer.
>
> Tests:
>
> 1) pass liburing test
> - make runtests
>
> 2) write/pass two sqe group test cases:
>
> https://github.com/axboe/liburing/compare/master...ming1:liburing:sqe_group_v2
>
> - covers related sqe flags combination and linking groups, both nop and
> one multi-destination file copy.
>
> - cover failure handling test: fail leader IO or member IO in both single
> group and linked groups, which is done in each sqe flags combination
> test
>
> 3) ublksrv zero copy:
>
> ublksrv userspace implements zero copy by sqe group & provide group
> kbuf:
>
> git clone https://github.com/ublk-org/ublksrv.git -b group-provide-buf_v2
> make test T=loop/009:nbd/061:nbd/062 #ublk zc tests
>
> When running 64KB block size test on ublk-loop('ublk add -t loop --buffered_io -f $backing'),
> it is observed that perf is doubled.
>
> Any comments are welcome!
>
> V3:
> - add IORING_FEAT_SQE_GROUP
> - simplify group completion, and minimize change on io_req_complete_defer()
> - simplify & cleanup io_queue_group_members()
> - fix many failure handling issues
> - cover failure handling code in added liburing tests
> - remove RFC
Hello Jens and Pavel,
V3 should address all your comments; would you mind taking a look at
this version?
Thanks,
Ming
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf
2024-06-03 0:05 ` [PATCH V3 0/9] io_uring: support sqe group and provide group kbuf Ming Lei
@ 2024-06-07 12:32 ` Pavel Begunkov
0 siblings, 0 replies; 28+ messages in thread
From: Pavel Begunkov @ 2024-06-07 12:32 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Kevin Wolf, Hollin Liu
On 6/3/24 01:05, Ming Lei wrote:
> On Sat, May 11, 2024 at 08:12:03AM +0800, Ming Lei wrote:
>> [...]
>
> Hello Jens and Pavel,
>
> V3 should address all your comments, would you mind to take a look at
> this version?
I'll take a look this weekend
--
Pavel Begunkov