public inbox for [email protected]
* [PATCH for-5.15 0/3] fix failed linkchain code logic
@ 2021-08-18  7:43 Hao Xu
  2021-08-18  7:43 ` [PATCH 1/3] io_uring: remove redundant req_set_fail() Hao Xu
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Hao Xu @ 2021-08-18  7:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, Pavel Begunkov, Joseph Qi

The first and last patches are code cleanups.
The second is the main one: it refactors the linkchain failure path to
fix a problem; details are in its commit message.

Hao Xu (3):
  io_uring: remove redundant req_set_fail()
  io_uring: fix failed linkchain code logic
  io_uring: move fail path of request submission to the end

 fs/io_uring.c | 97 +++++++++++++++++++++++++++++++++------------------
 1 file changed, 64 insertions(+), 33 deletions(-)

-- 
2.24.4



* [PATCH 1/3] io_uring: remove redundant req_set_fail()
  2021-08-18  7:43 [PATCH for-5.15 0/3] fix failed linkchain code logic Hao Xu
@ 2021-08-18  7:43 ` Hao Xu
  2021-08-18  7:43 ` [PATCH 2/3] io_uring: fix failed linkchain code logic Hao Xu
  2021-08-18  7:43 ` [PATCH 3/3] io_uring: move fail path of request submission to the end Hao Xu
  2 siblings, 0 replies; 8+ messages in thread
From: Hao Xu @ 2021-08-18  7:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, Pavel Begunkov, Joseph Qi

req_set_fail() in io_submit_sqe()'s failure path is redundant:
io_req_complete_failed() already marks the request as failed. Remove it.

Signed-off-by: Hao Xu <[email protected]>
---
 fs/io_uring.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1be7af620395..c0b841506869 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6629,7 +6629,6 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 fail_req:
 		if (link->head) {
 			/* fail even hard links since we don't submit */
-			req_set_fail(link->head);
 			io_req_complete_failed(link->head, -ECANCELED);
 			link->head = NULL;
 		}
-- 
2.24.4



* [PATCH 2/3] io_uring: fix failed linkchain code logic
  2021-08-18  7:43 [PATCH for-5.15 0/3] fix failed linkchain code logic Hao Xu
  2021-08-18  7:43 ` [PATCH 1/3] io_uring: remove redundant req_set_fail() Hao Xu
@ 2021-08-18  7:43 ` Hao Xu
  2021-08-18 10:20   ` Pavel Begunkov
  2021-08-18  7:43 ` [PATCH 3/3] io_uring: move fail path of request submission to the end Hao Xu
  2 siblings, 1 reply; 8+ messages in thread
From: Hao Xu @ 2021-08-18  7:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, Pavel Begunkov, Joseph Qi

Given a linkchain like this:
req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)

There is a problem:
 - if the submission of an intermediate linked req such as req1 fails,
   the reqs after it won't be cancelled.

   - sqpoll disabled: maybe acceptable, since users can see req1's error
     and stop submitting the following sqes.

   - sqpoll enabled: definitely a problem, since the following sqes will
     be submitted in the next round (illustrated below).
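
For illustration, a minimal liburing-based userspace sketch of this
scenario (not part of the patch; the helper name, file, buffers and ring
setup are placeholder assumptions, error handling omitted):

#include <liburing.h>
#include <fcntl.h>

static void submit_broken_chain(const char *path)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	char buf0[4096], buf2[4096];
	int fd = open(path, O_RDONLY);

	/* pass IORING_SETUP_SQPOLL as the flags argument to hit the sqpoll case */
	io_uring_queue_init(8, &ring, 0);

	sqe = io_uring_get_sqe(&ring);		/* req0 */
	io_uring_prep_read(sqe, fd, buf0, sizeof(buf0), 0);
	sqe->flags |= IOSQE_IO_LINK;

	sqe = io_uring_get_sqe(&ring);		/* req1: bogus opcode, io_init_req() fails */
	io_uring_prep_nop(sqe);
	sqe->opcode = 0xff;
	sqe->flags |= IOSQE_IO_LINK;

	sqe = io_uring_get_sqe(&ring);		/* req2: last req of the chain */
	io_uring_prep_read(sqe, fd, buf2, sizeof(buf2), 0);

	io_uring_submit(&ring);
	/*
	 * Before this fix, req2 is not cancelled together with the failed
	 * chain; with sqpoll it may simply be submitted in the next round.
	 */
}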

The solution is to refactor the code logic to:
 - link a linked req into the chain first, regardless of whether its
   submission fails or not.
 - if a linked req's submission fails, just mark the head as failed, and
   leverage req->result to indicate whether a req failed on its own or
   was cancelled.
 - submit or fail the whole chain at once. For example, if req1 fails
   with -EINVAL, req1's CQE reports -EINVAL while req0 and the rest of
   the chain complete with -ECANCELED.

Signed-off-by: Hao Xu <[email protected]>
---
 fs/io_uring.c | 86 ++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 58 insertions(+), 28 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c0b841506869..383668e07417 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1920,11 +1920,13 @@ static void io_fail_links(struct io_kiocb *req)
 
 	req->link = NULL;
 	while (link) {
+		int res = link->result ? link->result : -ECANCELED;
+
 		nxt = link->link;
 		link->link = NULL;
 
 		trace_io_uring_fail_link(req, link);
-		io_cqring_fill_event(link->ctx, link->user_data, -ECANCELED, 0);
+		io_cqring_fill_event(link->ctx, link->user_data, res, 0);
 		io_put_req_deferred(link);
 		link = nxt;
 	}
@@ -5698,7 +5700,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (is_timeout_link) {
 		struct io_submit_link *link = &req->ctx->submit_state.link;
 
-		if (!link->head)
+		if (!link->head || link->head == req)
 			return -EINVAL;
 		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
 			return -EINVAL;
@@ -6622,17 +6624,38 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	__must_hold(&ctx->uring_lock)
 {
 	struct io_submit_link *link = &ctx->submit_state.link;
+	bool is_link = sqe->flags & (IOSQE_IO_LINK | IOSQE_IO_HARDLINK);
+	struct io_kiocb *head;
 	int ret;
 
+	/*
+	 * we don't update link->last until we've done io_req_prep()
+	 * since linked timeout uses old link->last
+	 */
+	if (link->head)
+		link->last->link = req;
+	else if (is_link)
+		link->head = req;
+	head = link->head;
+
 	ret = io_init_req(ctx, req, sqe);
 	if (unlikely(ret)) {
 fail_req:
-		if (link->head) {
-			/* fail even hard links since we don't submit */
-			io_req_complete_failed(link->head, -ECANCELED);
-			link->head = NULL;
+		req->result = ret;
+		if (head) {
+			link->last = req;
+			if (is_link) {
+				req_set_fail(head);
+			} else {
+				int res = head->result ? head->result : -ECANCELED;
+
+				link->head = NULL;
+				/* fail even hard links since we don't submit */
+				io_req_complete_failed(head, res);
+			}
+		} else {
+			io_req_complete_failed(req, ret);
 		}
-		io_req_complete_failed(req, ret);
 		return ret;
 	}
 
@@ -6652,28 +6675,26 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 * submitted sync once the chain is complete. If none of those
 	 * conditions are true (normal request), then just queue it.
 	 */
-	if (link->head) {
-		struct io_kiocb *head = link->head;
-
-		ret = io_req_prep_async(req);
-		if (unlikely(ret))
-			goto fail_req;
+	if (head) {
+		if (req != head) {
+			ret = io_req_prep_async(req);
+			if (unlikely(ret))
+				goto fail_req;
+		}
 		trace_io_uring_link(ctx, req, head);
-		link->last->link = req;
-		link->last = req;
 
+		link->last = req;
 		/* last request of a link, enqueue the link */
-		if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
-			link->head = NULL;
-			io_queue_sqe(head);
+		if (!is_link) {
+			if (head->flags & REQ_F_FAIL) {
+				goto fail_req;
+			} else {
+				link->head = NULL;
+				io_queue_sqe(head);
+			}
 		}
 	} else {
-		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
-			link->head = req;
-			link->last = req;
-		} else {
-			io_queue_sqe(req);
-		}
+		io_queue_sqe(req);
 	}
 
 	return 0;
@@ -6685,8 +6706,17 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 static void io_submit_state_end(struct io_submit_state *state,
 				struct io_ring_ctx *ctx)
 {
-	if (state->link.head)
-		io_queue_sqe(state->link.head);
+	struct io_kiocb *head = state->link.head;
+
+	if (head) {
+		if (head->flags & REQ_F_FAIL) {
+			int res = head->result ? head->result : -ECANCELED;
+
+			io_req_complete_failed(head, res);
+		} else {
+			io_queue_sqe(head);
+		}
+	}
 	if (state->compl_nr)
 		io_submit_flush_completions(ctx);
 	if (state->plug_started)
@@ -6701,8 +6731,8 @@ static void io_submit_state_start(struct io_submit_state *state,
 {
 	state->plug_started = false;
 	state->ios_left = max_ios;
-	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
+	state->link.last = NULL;
 }
 
 static void io_commit_sqring(struct io_ring_ctx *ctx)
@@ -6788,7 +6818,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 		}
 		/* will complete beyond this point, count as submitted */
 		submitted++;
-		if (io_submit_sqe(ctx, req, sqe))
+		if (io_submit_sqe(ctx, req, sqe) && !(ctx->flags & IORING_SETUP_SQPOLL))
 			break;
 	}
 
-- 
2.24.4



* [PATCH 3/3] io_uring: move fail path of request submission to the end
  2021-08-18  7:43 [PATCH for-5.15 0/3] fix failed linkchain code logic Hao Xu
  2021-08-18  7:43 ` [PATCH 1/3] io_uring: remove redundant req_set_fail() Hao Xu
  2021-08-18  7:43 ` [PATCH 2/3] io_uring: fix failed linkchain code logic Hao Xu
@ 2021-08-18  7:43 ` Hao Xu
  2 siblings, 0 replies; 8+ messages in thread
From: Hao Xu @ 2021-08-18  7:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, Pavel Begunkov, Joseph Qi

Move the fail path of request submission to the end of the function to
make the logic more readable.

Signed-off-by: Hao Xu <[email protected]>
---
 fs/io_uring.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 383668e07417..5eb09ca4a0a7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6639,25 +6639,8 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	head = link->head;
 
 	ret = io_init_req(ctx, req, sqe);
-	if (unlikely(ret)) {
-fail_req:
-		req->result = ret;
-		if (head) {
-			link->last = req;
-			if (is_link) {
-				req_set_fail(head);
-			} else {
-				int res = head->result ? head->result : -ECANCELED;
-
-				link->head = NULL;
-				/* fail even hard links since we don't submit */
-				io_req_complete_failed(head, res);
-			}
-		} else {
-			io_req_complete_failed(req, ret);
-		}
-		return ret;
-	}
+	if (unlikely(ret))
+		goto fail_req;
 
 	ret = io_req_prep(req, sqe);
 	if (unlikely(ret))
@@ -6698,6 +6681,25 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	}
 
 	return 0;
+
+fail_req:
+	if (head) {
+		req->result = ret;
+		link->last = req;
+		if (is_link) {
+			req_set_fail(head);
+		} else {
+			int res = head->result ? head->result : -ECANCELED;
+
+			link->head = NULL;
+			/* fail even hard links since we don't submit */
+			io_req_complete_failed(head, res);
+		}
+	} else {
+		io_req_complete_failed(req, ret);
+	}
+
+	return ret;
 }
 
 /*
-- 
2.24.4



* Re: [PATCH 2/3] io_uring: fix failed linkchain code logic
  2021-08-18  7:43 ` [PATCH 2/3] io_uring: fix failed linkchain code logic Hao Xu
@ 2021-08-18 10:20   ` Pavel Begunkov
  2021-08-18 12:22     ` Hao Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Pavel Begunkov @ 2021-08-18 10:20 UTC (permalink / raw)
  To: Hao Xu, Jens Axboe; +Cc: io-uring, Joseph Qi

On 8/18/21 8:43 AM, Hao Xu wrote:
> Given a linkchain like this:
> req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)
> 
> There is a problem:
>  - if the submission of an intermediate linked req such as req1 fails,
>    the reqs after it won't be cancelled.
> 
>    - sqpoll disabled: maybe acceptable, since users can see req1's error
>      and stop submitting the following sqes.
> 
>    - sqpoll enabled: definitely a problem, since the following sqes will
>      be submitted in the next round (illustrated below).
> 
> The solution is to refactor the code logic to:
>  - link a linked req into the chain first, regardless of whether its
>    submission fails or not.
>  - if a linked req's submission fails, just mark the head as failed, and
>    leverage req->result to indicate whether a req failed on its own or
>    was cancelled.
>  - submit or fail the whole chain at once. For example, if req1 fails
>    with -EINVAL, req1's CQE reports -EINVAL while req0 and the rest of
>    the chain complete with -ECANCELED.
> 
> Signed-off-by: Hao Xu <[email protected]>
> ---
>  fs/io_uring.c | 86 ++++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 58 insertions(+), 28 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index c0b841506869..383668e07417 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1920,11 +1920,13 @@ static void io_fail_links(struct io_kiocb *req)
>  
>  	req->link = NULL;
>  	while (link) {
> +		int res = link->result ? link->result : -ECANCELED;

btw, we don't properly initialise req->result, and don't want to.
Perhaps it can be more like:

res = -ECANCELED;
if (req->flags & REQ_F_FAIL)
	res = req->result;


> +
>  		nxt = link->link;
>  		link->link = NULL;
>  
>  		trace_io_uring_fail_link(req, link);
> -		io_cqring_fill_event(link->ctx, link->user_data, -ECANCELED, 0);
> +		io_cqring_fill_event(link->ctx, link->user_data, res, 0);
>  		io_put_req_deferred(link);
>  		link = nxt;
>  	}
> @@ -5698,7 +5700,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>  	if (is_timeout_link) {
>  		struct io_submit_link *link = &req->ctx->submit_state.link;
>  
> -		if (!link->head)
> +		if (!link->head || link->head == req)
>  			return -EINVAL;
>  		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
>  			return -EINVAL;
> @@ -6622,17 +6624,38 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>  	__must_hold(&ctx->uring_lock)
>  {
>  	struct io_submit_link *link = &ctx->submit_state.link;
> +	bool is_link = sqe->flags & (IOSQE_IO_LINK | IOSQE_IO_HARDLINK);
> +	struct io_kiocb *head;
>  	int ret;
>  
> +	/*
> +	 * we don't update link->last until we've done io_req_prep()
> +	 * since linked timeout uses old link->last
> +	 */
> +	if (link->head)
> +		link->last->link = req;
> +	else if (is_link)
> +		link->head = req;
> +	head = link->head;

It's a horrendous amount of overhead. How about setting the fail flag
early when the request fails, and actually failing it in io_queue_sqe(),
as below? It's not tested and a couple more bits would need adding, but
hopefully it gives the idea.


diff --git a/fs/io_uring.c b/fs/io_uring.c
index ba087f395507..3fd0730655d0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6530,8 +6530,10 @@ static inline void io_queue_sqe(struct io_kiocb *req)
 	if (unlikely(req->ctx->drain_active) && io_drain_req(req))
 		return;
 
-	if (likely(!(req->flags & REQ_F_FORCE_ASYNC))) {
+	if (likely(!(req->flags & (REQ_F_FORCE_ASYNC|REQ_F_FAIL)))) {
 		__io_queue_sqe(req);
+	} else if (req->flags & REQ_F_FAIL) {
+		io_req_complete_failed(req, req->result);
 	} else {
 		int ret = io_req_prep_async(req);
 
@@ -6640,19 +6642,17 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	ret = io_init_req(ctx, req, sqe);
 	if (unlikely(ret)) {
 fail_req:
-		if (link->head) {
-			/* fail even hard links since we don't submit */
+		/* fail even hard links since we don't submit */
+		if (link->head)
 			req_set_fail(link->head);
-			io_req_complete_failed(link->head, -ECANCELED);
-			link->head = NULL;
-		}
-		io_req_complete_failed(req, ret);
-		return ret;
+		req_set_fail(req);
+		req->result = ret;
+	} else {
+		ret = io_req_prep(req, sqe);
+		if (unlikely(ret))
+			goto fail_req;
 	}
 
-	ret = io_req_prep(req, sqe);
-	if (unlikely(ret))
-		goto fail_req;
 
 	/* don't need @sqe from now on */
 	trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
@@ -6670,8 +6670,10 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		struct io_kiocb *head = link->head;
 
 		ret = io_req_prep_async(req);
-		if (unlikely(ret))
-			goto fail_req;
+		if (unlikely(ret)) {
+			req->result = ret;
+			req_set_fail(link->head);
+		}
 		trace_io_uring_link(ctx, req, head);
 		link->last->link = req;
 		link->last = req;


* Re: [PATCH 2/3] io_uring: fix failed linkchain code logic
  2021-08-18 10:20   ` Pavel Begunkov
@ 2021-08-18 12:22     ` Hao Xu
  2021-08-18 14:40       ` Pavel Begunkov
  0 siblings, 1 reply; 8+ messages in thread
From: Hao Xu @ 2021-08-18 12:22 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: io-uring, Joseph Qi

On 2021/8/18 6:20 PM, Pavel Begunkov wrote:
> On 8/18/21 8:43 AM, Hao Xu wrote:
>> Given a linkchain like this:
>> req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)
>>
>> There is a problem:
>>   - if the submission of an intermediate linked req such as req1 fails,
>>     the reqs after it won't be cancelled.
>>
>>     - sqpoll disabled: maybe acceptable, since users can see req1's error
>>       and stop submitting the following sqes.
>>
>>     - sqpoll enabled: definitely a problem, since the following sqes will
>>       be submitted in the next round (illustrated below).
>>
>> The solution is to refactor the code logic to:
>>   - link a linked req into the chain first, regardless of whether its
>>     submission fails or not.
>>   - if a linked req's submission fails, just mark the head as failed, and
>>     leverage req->result to indicate whether a req failed on its own or
>>     was cancelled.
>>   - submit or fail the whole chain at once. For example, if req1 fails
>>     with -EINVAL, req1's CQE reports -EINVAL while req0 and the rest of
>>     the chain complete with -ECANCELED.
>>
>> Signed-off-by: Hao Xu <[email protected]>
>> ---
>>   fs/io_uring.c | 86 ++++++++++++++++++++++++++++++++++-----------------
>>   1 file changed, 58 insertions(+), 28 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index c0b841506869..383668e07417 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -1920,11 +1920,13 @@ static void io_fail_links(struct io_kiocb *req)
>>   
>>   	req->link = NULL;
>>   	while (link) {
>> +		int res = link->result ? link->result : -ECANCELED;
> 
> btw, we don't properly initialise req->result, and don't want to.
I see, req->result is cleared to 0 in io_preinit_req(), but not when the
req is moved back to the free list.
> Perhaps it can be more like:
> 
> res = -ECANCELED;
> if (req->flags & REQ_F_FAIL)
> 	res = req->result;
Agree.

> 
> 
>> +
>>   		nxt = link->link;
>>   		link->link = NULL;
>>   
>>   		trace_io_uring_fail_link(req, link);
>> -		io_cqring_fill_event(link->ctx, link->user_data, -ECANCELED, 0);
>> +		io_cqring_fill_event(link->ctx, link->user_data, res, 0);
>>   		io_put_req_deferred(link);
>>   		link = nxt;
>>   	}
>> @@ -5698,7 +5700,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>   	if (is_timeout_link) {
>>   		struct io_submit_link *link = &req->ctx->submit_state.link;
>>   
>> -		if (!link->head)
>> +		if (!link->head || link->head == req)
>>   			return -EINVAL;
>>   		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
>>   			return -EINVAL;
>> @@ -6622,17 +6624,38 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>   	__must_hold(&ctx->uring_lock)
>>   {
>>   	struct io_submit_link *link = &ctx->submit_state.link;
>> +	bool is_link = sqe->flags & (IOSQE_IO_LINK | IOSQE_IO_HARDLINK);
>> +	struct io_kiocb *head;
>>   	int ret;
>>   
>> +	/*
>> +	 * we don't update link->last until we've done io_req_prep()
>> +	 * since linked timeout uses old link->last
>> +	 */
>> +	if (link->head)
>> +		link->last->link = req;
>> +	else if (is_link)
>> +		link->head = req;
>> +	head = link->head;
> 
> It's a horrendous amount of overhead. How about setting the fail flag
> early when the request fails, and actually failing it in io_queue_sqe(),
> as below? It's not tested and a couple more bits would need adding, but
> hopefully it gives the idea.
I get the idea, it truly is a smaller change. But why do you think the
code above brings in more overhead, since we have to link the req into
the chain anyway? I tested it with fio (direct 4k reads, with and
without sqpoll) and didn't see a performance regression.

> 
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index ba087f395507..3fd0730655d0 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6530,8 +6530,10 @@ static inline void io_queue_sqe(struct io_kiocb *req)
>   	if (unlikely(req->ctx->drain_active) && io_drain_req(req))
>   		return;
>   
> -	if (likely(!(req->flags & REQ_F_FORCE_ASYNC))) {
> +	if (likely(!(req->flags & (REQ_F_FORCE_ASYNC|REQ_F_FAIL)))) {
>   		__io_queue_sqe(req);
> +	} else if (req->flags & REQ_F_FAIL) {
> +		io_req_complete_failed(req, req->result);
>   	} else {
>   		int ret = io_req_prep_async(req);
>   
> @@ -6640,19 +6642,17 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>   	ret = io_init_req(ctx, req, sqe);
>   	if (unlikely(ret)) {
>   fail_req:
> -		if (link->head) {
> -			/* fail even hard links since we don't submit */
> +		/* fail even hard links since we don't submit */
> +		if (link->head)
>   			req_set_fail(link->head);
> -			io_req_complete_failed(link->head, -ECANCELED);
> -			link->head = NULL;
> -		}
> -		io_req_complete_failed(req, ret);
> -		return ret;
> +		req_set_fail(req);
> +		req->result = ret;
> +	} else {
> +		ret = io_req_prep(req, sqe);
> +		if (unlikely(ret))
> +			goto fail_req;
>   	}
>   
> -	ret = io_req_prep(req, sqe);
> -	if (unlikely(ret))
> -		goto fail_req;
>   
>   	/* don't need @sqe from now on */
>   	trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
> @@ -6670,8 +6670,10 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>   		struct io_kiocb *head = link->head;
>   
maybe better to add an if (head->flags & REQ_F_FAIL) check here, since
we don't need to prep_async if we know the req will be cancelled.
>   		ret = io_req_prep_async(req);
> -		if (unlikely(ret))
> -			goto fail_req;
> +		if (unlikely(ret)) {
> +			req->result = ret;
> +			req_set_fail(link->head);
> +		}
>   		trace_io_uring_link(ctx, req, head);
>   		link->last->link = req;
>   		link->last = req;
> 



* Re: [PATCH 2/3] io_uring: fix failed linkchain code logic
  2021-08-18 12:22     ` Hao Xu
@ 2021-08-18 14:40       ` Pavel Begunkov
  2021-08-19 10:30         ` Hao Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Pavel Begunkov @ 2021-08-18 14:40 UTC (permalink / raw)
  To: Hao Xu, Jens Axboe; +Cc: io-uring, Joseph Qi

On 8/18/21 1:22 PM, Hao Xu wrote:
> On 2021/8/18 6:20 PM, Pavel Begunkov wrote:
>> On 8/18/21 8:43 AM, Hao Xu wrote:
>>> Given a linkchain like this:
>>> req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)
>>>
[...]
>>>       struct io_submit_link *link = &ctx->submit_state.link;
>>> +    bool is_link = sqe->flags & (IOSQE_IO_LINK | IOSQE_IO_HARDLINK);
>>> +    struct io_kiocb *head;
>>>       int ret;
>>>   +    /*
>>> +     * we don't update link->last until we've done io_req_prep()
>>> +     * since linked timeout uses old link->last
>>> +     */
>>> +    if (link->head)
>>> +        link->last->link = req;
>>> +    else if (is_link)
>>> +        link->head = req;
>>> +    head = link->head;
>>
>> It's a horrendous amount of overhead. How about setting the fail flag
>> early when the request fails, and actually failing it in io_queue_sqe(),
>> as below? It's not tested and a couple more bits would need adding, but
>> hopefully it gives the idea.
> I get the idea, it truly is a smaller change. But why do you think the
> code above brings in more overhead, since we have to link the req into
> the chain anyway? I tested it with fio (direct 4k reads, with and without sqpoll) and didn't see a performance regression.

Well, it's an exaggeration :) However, we have been cutting down the
overhead, and there are no atomics or other heavy operations left in the
hot path, just the raw number of instructions a request has to go
through. That's just to make clear why I don't want extras on that path.

For the non-linked path, it first adds 2 ifs up front and removes one
at the end. Then there is is_link, which will most likely be saved on
the stack, and the same goes for @head, which I'd also expect to be
saved on the stack.

If we have a way to avoid it, that's great, and it looks like we can.

[...]
>>   -    ret = io_req_prep(req, sqe);
>> -    if (unlikely(ret))
>> -        goto fail_req;
>>         /* don't need @sqe from now on */
>>       trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
>> @@ -6670,8 +6670,10 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>           struct io_kiocb *head = link->head;
>>   
> maybe better to add an if (head->flags & REQ_F_FAIL) check here, since
> we don't need to prep_async if we know the req will be cancelled.

Such an early failure is marginal enough not to care about performance,
but I agree that the check is needed, as io_req_prep_async() won't be
able to handle it, e.g. if the req failed to grab a file.
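
A rough sketch of what that check could look like on top of the diff
above (illustrative only, untested; names are taken from the surrounding
code):

	if (link->head) {
		struct io_kiocb *head = link->head;

		/*
		 * Skip async prep when the head is already marked failed:
		 * the whole chain will be cancelled via io_fail_links()
		 * once the failed head is queued, and if it was this req
		 * that failed init, io_req_prep_async() couldn't handle
		 * it anyway (e.g. no file was grabbed).
		 */
		if (!(head->flags & REQ_F_FAIL)) {
			ret = io_req_prep_async(req);
			if (unlikely(ret)) {
				req->result = ret;
				req_set_fail(head);
			}
		}
		trace_io_uring_link(ctx, req, head);
		link->last->link = req;
		link->last = req;
		/* rest of the branch unchanged */
	}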

-- 
Pavel Begunkov


* Re: [PATCH 2/3] io_uring: fix failed linkchain code logic
  2021-08-18 14:40       ` Pavel Begunkov
@ 2021-08-19 10:30         ` Hao Xu
  0 siblings, 0 replies; 8+ messages in thread
From: Hao Xu @ 2021-08-19 10:30 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: io-uring, Joseph Qi

On 2021/8/18 10:40 PM, Pavel Begunkov wrote:
> On 8/18/21 1:22 PM, Hao Xu wrote:
>> On 2021/8/18 6:20 PM, Pavel Begunkov wrote:
>>> On 8/18/21 8:43 AM, Hao Xu wrote:
>>>> Given a linkchain like this:
>>>> req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)
>>>>
> [...]
>>>>        struct io_submit_link *link = &ctx->submit_state.link;
>>>> +    bool is_link = sqe->flags & (IOSQE_IO_LINK | IOSQE_IO_HARDLINK);
>>>> +    struct io_kiocb *head;
>>>>        int ret;
>>>>    +    /*
>>>> +     * we don't update link->last until we've done io_req_prep()
>>>> +     * since linked timeout uses old link->last
>>>> +     */
>>>> +    if (link->head)
>>>> +        link->last->link = req;
>>>> +    else if (is_link)
>>>> +        link->head = req;
>>>> +    head = link->head;
>>>
>>> It's a horrendous amount of overhead. How about setting the fail flag
>>> early when the request fails, and actually failing it in io_queue_sqe(),
>>> as below? It's not tested and a couple more bits would need adding, but
>>> hopefully it gives the idea.
>> I get the idea, it truly is a smaller change. But why do you think the
>> code above brings in more overhead, since we have to link the req into
>> the chain anyway? I tested it with fio (direct 4k reads, with and without sqpoll) and didn't see a performance regression.
> 
> Well, it's an exaggeration :) However, we have been cutting down the
> overhead, and there are no atomics or other heavy operations left in the
> hot path, just the raw number of instructions a request has to go
> through. That's just to make clear why I don't want extras on that path.
> 
> For the non-linked path, it first adds 2 ifs up front and removes one
> at the end. Then there is is_link, which will most likely be saved on
> the stack, and the same goes for @head, which I'd also expect to be
> saved on the stack.
Agreed on most of it, except for @head: even with a cache hit, a plain
stack variable head is better than going through the stack pointer link.
I'm excited to see io_uring becoming faster and faster, since we are
actively using it. Thanks for the great work; the recent refcount
optimization is amazing.
I'll send a v2 patchset.

Thanks,
Hao
> 
> If we have a way to avoid it, that's great, and it looks like we can.
> 
> [...]
>>>    -    ret = io_req_prep(req, sqe);
>>> -    if (unlikely(ret))
>>> -        goto fail_req;
>>>          /* don't need @sqe from now on */
>>>        trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
>>> @@ -6670,8 +6670,10 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>>            struct io_kiocb *head = link->head;
>>>    
>> maybe better to add an if (head->flags & REQ_F_FAIL) check here, since
>> we don't need to prep_async if we know the req will be cancelled.
> 
> Such an early failure is marginal enough not to care about performance,
> but I agree that the check is needed, as io_req_prep_async() won't be
> able to handle it, e.g. if the req failed to grab a file.
> 


