public inbox for [email protected]
* [PATCH v2] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
@ 2020-02-18 16:28 Xiaoguang Wang
  2020-02-18 16:39 ` Xiaoguang Wang
  2020-02-19  1:49 ` Xiaoguang Wang
  0 siblings, 2 replies; 3+ messages in thread
From: Xiaoguang Wang @ 2020-02-18 16:28 UTC (permalink / raw)
  To: io-uring; +Cc: axboe, joseph.qi, Xiaoguang Wang

After making ext4 support the iopoll method:
  let ext4_file_operations's iopoll method be iomap_dio_iopoll(),
we found that fio can easily hang in fio_ioring_getevents() with the
fio job below:
    rm -f testfile; sync;
    sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
-rw=write -ioengine=io_uring  -hipri=1 -sqthread_poll=1 -direct=1
-bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.

There are two issues that result in this hang. One is that when
IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio does not
call io_uring_enter() to reap completed events; it relies entirely on
the kernel io_sq_thread to poll for completions.
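
(A minimal userspace sketch of this usage pattern, against liburing;
the file name, queue depth and buffer size below are made up for
illustration, and these kernels additionally require registered files
for SQPOLL. The point is that completions are reaped purely by peeking
the CQ ring, with no io_uring_enter() call, so if io_sq_thread stops
iopolling, the reap loop below spins forever.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

#define QD	128
#define BS	4096

int main(void)
{
	struct io_uring_params p = { 0 };
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd, i, done = 0;

	p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL;
	p.sq_thread_idle = 2000;	/* ms before io_sq_thread goes to sleep */
	if (io_uring_queue_init_params(QD, &ring, &p))
		return 1;

	fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0 || posix_memalign(&buf, BS, BS))
		return 1;
	/* SQPOLL requires fixed files on these kernels. */
	if (io_uring_register_files(&ring, &fd, 1))
		return 1;

	for (i = 0; i < QD; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_write(sqe, 0, buf, BS, (__u64)i * BS);
		sqe->flags |= IOSQE_FIXED_FILE;
	}
	/* With SQPOLL this only enters the kernel to wake a sleeping sq thread. */
	io_uring_submit(&ring);

	while (done < QD) {
		/* Reap from the CQ ring in userspace, never via io_uring_enter(). */
		if (!io_uring_peek_cqe(&ring, &cqe)) {
			io_uring_cqe_seen(&ring, cqe);
			done++;
		}
	}
	io_uring_queue_exit(&ring);
	return 0;
}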

The other is a race: when io_submit_sqes() called from io_sq_thread()
submits a batch of sqes, the variable 'inflight' records the number of
submitted reqs, and io_sq_thread then polls for the reqs that have been
added to poll_list. But if some of those reqs were punted to an io
worker, they do not show up in poll_list right away. io_sq_thread()
then polls only part of the previously submitted reqs, finds poll_list
empty, and resets 'inflight' to zero. If the application just waits for
these deferred reqs and never wakes up io_sq_thread again, the hang
happens.

For an app that relies entirely on io_sq_thread to poll completed
requests, let io_iopoll_req_issued() wake up io_sq_thread properly when
a new element is added to poll_list.
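
(The wakeup pairing this relies on is the usual "recheck the condition
under the lock before sleeping" pattern. A rough userspace analogy with
pthreads and hypothetical names follows; the kernel side uses
uring_lock, poll_list, sqo_wait and wq_has_sleeper()/wake_up() instead
of a mutex/condvar, and drops uring_lock before schedule().)

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t waitq = PTHREAD_COND_INITIALIZER;
static int nr_polled;			/* "poll_list is non-empty" when > 0 */

/* Analogue of io_iopoll_req_issued(): a (possibly punted) req has just
 * become pollable; publish it and wake the poller if it is sleeping. */
static void req_issued(void)
{
	pthread_mutex_lock(&lock);
	nr_polled++;
	pthread_cond_signal(&waitq);
	pthread_mutex_unlock(&lock);
}

/* Analogue of the io_sq_thread() idle path: recheck under the lock so
 * a req added by req_issued() right before we sleep cannot be missed. */
static void poller_idle(void)
{
	pthread_mutex_lock(&lock);
	while (nr_polled == 0)		/* nothing to poll for: safe to sleep */
		pthread_cond_wait(&waitq, &lock);
	pthread_mutex_unlock(&lock);
	/* ... go back to iopolling the published reqs ... */
}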

Fixes: 2b2ed9750fc9 ("io_uring: fix bad inflight accounting for SETUP_IOPOLL|SETUP_SQTHREAD")
Signed-off-by: Xiaoguang Wang <[email protected]>

---
V2:
    Simple code cleanups; added necessary comments.
---
 fs/io_uring.c | 72 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 40 insertions(+), 32 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 77f22c3da30f..b6d7c45d0d0d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1793,6 +1793,9 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
 		list_add(&req->list, &ctx->poll_list);
 	else
 		list_add_tail(&req->list, &ctx->poll_list);
+
+	if (ctx->flags & IORING_SETUP_SQPOLL && wq_has_sleeper(&ctx->sqo_wait))
+		wake_up(&ctx->sqo_wait);
 }
 
 static void io_file_put(struct io_submit_state *state)
@@ -5011,9 +5014,9 @@ static int io_sq_thread(void *data)
 	const struct cred *old_cred;
 	mm_segment_t old_fs;
 	DEFINE_WAIT(wait);
-	unsigned inflight;
 	unsigned long timeout;
-	int ret;
+	int ret = 0;
+	bool needs_uring_lock = false;
 
 	complete(&ctx->completions[1]);
 
@@ -5021,39 +5024,21 @@ static int io_sq_thread(void *data)
 	set_fs(USER_DS);
 	old_cred = override_creds(ctx->creds);
 
-	ret = timeout = inflight = 0;
+	if (ctx->flags & IORING_SETUP_IOPOLL)
+		needs_uring_lock = true;
+	timeout = jiffies + ctx->sq_thread_idle;
 	while (!kthread_should_park()) {
 		unsigned int to_submit;
 
-		if (inflight) {
+		if (!list_empty(&ctx->poll_list)) {
 			unsigned nr_events = 0;
 
-			if (ctx->flags & IORING_SETUP_IOPOLL) {
-				/*
-				 * inflight is the count of the maximum possible
-				 * entries we submitted, but it can be smaller
-				 * if we dropped some of them. If we don't have
-				 * poll entries available, then we know that we
-				 * have nothing left to poll for. Reset the
-				 * inflight count to zero in that case.
-				 */
-				mutex_lock(&ctx->uring_lock);
-				if (!list_empty(&ctx->poll_list))
-					__io_iopoll_check(ctx, &nr_events, 0);
-				else
-					inflight = 0;
-				mutex_unlock(&ctx->uring_lock);
-			} else {
-				/*
-				 * Normal IO, just pretend everything completed.
-				 * We don't have to poll completions for that.
-				 */
-				nr_events = inflight;
-			}
-
-			inflight -= nr_events;
-			if (!inflight)
+			mutex_lock(&ctx->uring_lock);
+			if (!list_empty(&ctx->poll_list))
+				__io_iopoll_check(ctx, &nr_events, 0);
+			if (list_empty(&ctx->poll_list))
 				timeout = jiffies + ctx->sq_thread_idle;
+			mutex_unlock(&ctx->uring_lock);
 		}
 
 		to_submit = io_sqring_entries(ctx);
@@ -5070,7 +5055,7 @@ static int io_sq_thread(void *data)
 			 * more IO, we should wait for the application to
 			 * reap events and wake us up.
 			 */
-			if (inflight ||
+			if (!list_empty(&ctx->poll_list) ||
 			    (!time_after(jiffies, timeout) && ret != -EBUSY &&
 			    !percpu_ref_is_dying(&ctx->refs))) {
 				cond_resched();
@@ -5089,6 +5074,24 @@ static int io_sq_thread(void *data)
 				cur_mm = NULL;
 			}
 
+			/*
+			 * While doing polled IO, before going to sleep, we need
+			 * to check if there are new reqs added to poll_list, it
+			 * is because reqs may have been punted to io worker and
+			 * will be added to poll_list later, hence check the
+			 * poll_list again, meanwhile we need to hold uring_lock
+			 * to do this check, otherwise we may lose wakeup event
+			 * in io_iopoll_req_issued().
+			 */
+			if (needs_uring_lock) {
+				mutex_lock(&ctx->uring_lock);
+				if (!list_empty(&ctx->poll_list)) {
+					mutex_unlock(&ctx->uring_lock);
+					cond_resched();
+					continue;
+				}
+			}
+
 			prepare_to_wait(&ctx->sqo_wait, &wait,
 						TASK_INTERRUPTIBLE);
 
@@ -5101,16 +5104,22 @@ static int io_sq_thread(void *data)
 			if (!to_submit || ret == -EBUSY) {
 				if (kthread_should_park()) {
 					finish_wait(&ctx->sqo_wait, &wait);
+					if (needs_uring_lock)
+						mutex_unlock(&ctx->uring_lock);
 					break;
 				}
 				if (signal_pending(current))
 					flush_signals(current);
+				if (needs_uring_lock)
+					mutex_unlock(&ctx->uring_lock);
 				schedule();
 				finish_wait(&ctx->sqo_wait, &wait);
 
 				ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
 				continue;
 			}
+			if (needs_uring_lock)
+				mutex_unlock(&ctx->uring_lock);
 			finish_wait(&ctx->sqo_wait, &wait);
 
 			ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
@@ -5119,8 +5128,7 @@ static int io_sq_thread(void *data)
 		mutex_lock(&ctx->uring_lock);
 		ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);
 		mutex_unlock(&ctx->uring_lock);
-		if (ret > 0)
-			inflight += ret;
+		timeout = jiffies + ctx->sq_thread_idle;
 	}
 
 	set_fs(old_fs);
-- 
2.17.2



* Re: [PATCH v2] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
  2020-02-18 16:28 [PATCH v2] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL Xiaoguang Wang
@ 2020-02-18 16:39 ` Xiaoguang Wang
  2020-02-19  1:49 ` Xiaoguang Wang
  1 sibling, 0 replies; 3+ messages in thread
From: Xiaoguang Wang @ 2020-02-18 16:39 UTC (permalink / raw)
  To: io-uring; +Cc: axboe, joseph.qi

hi,

> After making ext4 support the iopoll method:
>    let ext4_file_operations's iopoll method be iomap_dio_iopoll(),
> we found that fio can easily hang in fio_ioring_getevents() with the
> fio job below:
>      rm -f testfile; sync;
>      sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
> -rw=write -ioengine=io_uring  -hipri=1 -sqthread_poll=1 -direct=1
> -bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
> with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.
> 
> There are two issues that result in this hang. One is that when
> IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio does not
> call io_uring_enter() to reap completed events; it relies entirely on
> the kernel io_sq_thread to poll for completions.
> 
> The other is a race: when io_submit_sqes() called from io_sq_thread()
> submits a batch of sqes, the variable 'inflight' records the number of
> submitted reqs, and io_sq_thread then polls for the reqs that have been
> added to poll_list. But if some of those reqs were punted to an io
> worker, they do not show up in poll_list right away. io_sq_thread()
> then polls only part of the previously submitted reqs, finds poll_list
> empty, and resets 'inflight' to zero. If the application just waits for
> these deferred reqs and never wakes up io_sq_thread again, the hang
> happens.
> 
> For an app that relies entirely on io_sq_thread to poll completed
> requests, let io_iopoll_req_issued() wake up io_sq_thread properly when
> a new element is added to poll_list.
> 
> Fixes: 2b2ed9750fc9 ("io_uring: fix bad inflight accounting for SETUP_IOPOLL|SETUP_SQTHREAD")
> Signed-off-by: Xiaoguang Wang <[email protected]>
> 
> ---
> V2:
>      Simple code cleanups; added necessary comments.
Sorry, I haven't figured out a better way to clean up the code yet.
Alternatively, we could add a new kernel thread to do the poll job;
the code might look simpler :)

+static int io_poll_thread(void *data)
+{
+       struct io_ring_ctx *ctx = data;
+       const struct cred *old_cred;
+       DEFINE_WAIT(wait);
+
+       old_cred = override_creds(ctx->creds);
+       while (!kthread_should_park()) {
+               unsigned nr_events = 0;
+               bool has_inflight = true;
+
+               mutex_lock(&ctx->uring_lock);
+               if (!list_empty(&ctx->poll_list))
+                       __io_iopoll_check(ctx, &nr_events, 0);
+               else
+                       has_inflight = false;
+               mutex_unlock(&ctx->uring_lock);
+
+               if (!has_inflight) {
+                       mutex_lock(&ctx->uring_lock);
+                       prepare_to_wait(&ctx->cqo_wait, &wait,
+                                       TASK_INTERRUPTIBLE);
+                       if (!list_empty(&ctx->poll_list)) {
+                               finish_wait(&ctx->cqo_wait, &wait);
+                               mutex_unlock(&ctx->uring_lock);
+                               cond_resched();
+                               continue;
+                       }
+                       mutex_unlock(&ctx->uring_lock);
+                       schedule();
+                       finish_wait(&ctx->cqo_wait, &wait);
+               }
+               cond_resched();
+       }
+       revert_creds(old_cred);
+       kthread_parkme();
+       return 0;
+}

Regards,
Xiaoguang Wang

> ---
>   fs/io_uring.c | 72 ++++++++++++++++++++++++++++-----------------------
>   1 file changed, 40 insertions(+), 32 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 77f22c3da30f..b6d7c45d0d0d 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1793,6 +1793,9 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
>   		list_add(&req->list, &ctx->poll_list);
>   	else
>   		list_add_tail(&req->list, &ctx->poll_list);
> +
> +	if (ctx->flags & IORING_SETUP_SQPOLL && wq_has_sleeper(&ctx->sqo_wait))
> +		wake_up(&ctx->sqo_wait);
>   }
>   
>   static void io_file_put(struct io_submit_state *state)
> @@ -5011,9 +5014,9 @@ static int io_sq_thread(void *data)
>   	const struct cred *old_cred;
>   	mm_segment_t old_fs;
>   	DEFINE_WAIT(wait);
> -	unsigned inflight;
>   	unsigned long timeout;
> -	int ret;
> +	int ret = 0;
> +	bool needs_uring_lock = false;
>   
>   	complete(&ctx->completions[1]);
>   
> @@ -5021,39 +5024,21 @@ static int io_sq_thread(void *data)
>   	set_fs(USER_DS);
>   	old_cred = override_creds(ctx->creds);
>   
> -	ret = timeout = inflight = 0;
> +	if (ctx->flags & IORING_SETUP_IOPOLL)
> +		needs_uring_lock = true;
> +	timeout = jiffies + ctx->sq_thread_idle;
>   	while (!kthread_should_park()) {
>   		unsigned int to_submit;
>   
> -		if (inflight) {
> +		if (!list_empty(&ctx->poll_list)) {
>   			unsigned nr_events = 0;
>   
> -			if (ctx->flags & IORING_SETUP_IOPOLL) {
> -				/*
> -				 * inflight is the count of the maximum possible
> -				 * entries we submitted, but it can be smaller
> -				 * if we dropped some of them. If we don't have
> -				 * poll entries available, then we know that we
> -				 * have nothing left to poll for. Reset the
> -				 * inflight count to zero in that case.
> -				 */
> -				mutex_lock(&ctx->uring_lock);
> -				if (!list_empty(&ctx->poll_list))
> -					__io_iopoll_check(ctx, &nr_events, 0);
> -				else
> -					inflight = 0;
> -				mutex_unlock(&ctx->uring_lock);
> -			} else {
> -				/*
> -				 * Normal IO, just pretend everything completed.
> -				 * We don't have to poll completions for that.
> -				 */
> -				nr_events = inflight;
> -			}
> -
> -			inflight -= nr_events;
> -			if (!inflight)
> +			mutex_lock(&ctx->uring_lock);
> +			if (!list_empty(&ctx->poll_list))
> +				__io_iopoll_check(ctx, &nr_events, 0);
> +			if (list_empty(&ctx->poll_list))
>   				timeout = jiffies + ctx->sq_thread_idle;
> +			mutex_unlock(&ctx->uring_lock);
>   		}
>   
>   		to_submit = io_sqring_entries(ctx);
> @@ -5070,7 +5055,7 @@ static int io_sq_thread(void *data)
>   			 * more IO, we should wait for the application to
>   			 * reap events and wake us up.
>   			 */
> -			if (inflight ||
> +			if (!list_empty(&ctx->poll_list) ||
>   			    (!time_after(jiffies, timeout) && ret != -EBUSY &&
>   			    !percpu_ref_is_dying(&ctx->refs))) {
>   				cond_resched();
> @@ -5089,6 +5074,24 @@ static int io_sq_thread(void *data)
>   				cur_mm = NULL;
>   			}
>   
> +			/*
> +			 * While doing polled IO, before going to sleep, we need
> +			 * to check if there are new reqs added to poll_list, it
> +			 * is because reqs may have been punted to io worker and
> +			 * will be added to poll_list later, hence check the
> +			 * poll_list again, meanwhile we need to hold uring_lock
> +			 * to do this check, otherwise we may lose wakeup event
> +			 * in io_iopoll_req_issued().
> +			 */
> +			if (needs_uring_lock) {
> +				mutex_lock(&ctx->uring_lock);
> +				if (!list_empty(&ctx->poll_list)) {
> +					mutex_unlock(&ctx->uring_lock);
> +					cond_resched();
> +					continue;
> +				}
> +			}
> +
>   			prepare_to_wait(&ctx->sqo_wait, &wait,
>   						TASK_INTERRUPTIBLE);
>   
> @@ -5101,16 +5104,22 @@ static int io_sq_thread(void *data)
>   			if (!to_submit || ret == -EBUSY) {
>   				if (kthread_should_park()) {
>   					finish_wait(&ctx->sqo_wait, &wait);
> +					if (needs_uring_lock)
> +						mutex_unlock(&ctx->uring_lock);
>   					break;
>   				}
>   				if (signal_pending(current))
>   					flush_signals(current);
> +				if (needs_uring_lock)
> +					mutex_unlock(&ctx->uring_lock);
>   				schedule();
>   				finish_wait(&ctx->sqo_wait, &wait);
>   
>   				ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
>   				continue;
>   			}
> +			if (needs_uring_lock)
> +				mutex_unlock(&ctx->uring_lock);
>   			finish_wait(&ctx->sqo_wait, &wait);
>   
>   			ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
> @@ -5119,8 +5128,7 @@ static int io_sq_thread(void *data)
>   		mutex_lock(&ctx->uring_lock);
>   		ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);
>   		mutex_unlock(&ctx->uring_lock);
> -		if (ret > 0)
> -			inflight += ret;
> +		timeout = jiffies + ctx->sq_thread_idle;
>   	}
>   
>   	set_fs(old_fs);
> 


* Re: [PATCH v2] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
  2020-02-18 16:28 [PATCH v2] io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL Xiaoguang Wang
  2020-02-18 16:39 ` Xiaoguang Wang
@ 2020-02-19  1:49 ` Xiaoguang Wang
  1 sibling, 0 replies; 3+ messages in thread
From: Xiaoguang Wang @ 2020-02-19  1:49 UTC (permalink / raw)
  To: io-uring; +Cc: axboe, joseph.qi, Ext4 Developers List

hi,

Cc the ext4 mailing list as well, in case someone runs into the same issue.

Regards,
Xiaoguang Wang

> After making ext4 support the iopoll method:
>    let ext4_file_operations's iopoll method be iomap_dio_iopoll(),
> we found that fio can easily hang in fio_ioring_getevents() with the
> fio job below:
>      rm -f testfile; sync;
>      sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
> -rw=write -ioengine=io_uring  -hipri=1 -sqthread_poll=1 -direct=1
> -bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
> with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.
> 
> There are two issues that result in this hang. One is that when
> IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio does not
> call io_uring_enter() to reap completed events; it relies entirely on
> the kernel io_sq_thread to poll for completions.
> 
> The other is a race: when io_submit_sqes() called from io_sq_thread()
> submits a batch of sqes, the variable 'inflight' records the number of
> submitted reqs, and io_sq_thread then polls for the reqs that have been
> added to poll_list. But if some of those reqs were punted to an io
> worker, they do not show up in poll_list right away. io_sq_thread()
> then polls only part of the previously submitted reqs, finds poll_list
> empty, and resets 'inflight' to zero. If the application just waits for
> these deferred reqs and never wakes up io_sq_thread again, the hang
> happens.
> 
> For an app that relies entirely on io_sq_thread to poll completed
> requests, let io_iopoll_req_issued() wake up io_sq_thread properly when
> a new element is added to poll_list.
> 
> Fixes: 2b2ed9750fc9 ("io_uring: fix bad inflight accounting for SETUP_IOPOLL|SETUP_SQTHREAD")
> Signed-off-by: Xiaoguang Wang <[email protected]>
> 
> ---
> V2:
>      Simple code cleanups; added necessary comments.
> ---
>   fs/io_uring.c | 72 ++++++++++++++++++++++++++++-----------------------
>   1 file changed, 40 insertions(+), 32 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 77f22c3da30f..b6d7c45d0d0d 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1793,6 +1793,9 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
>   		list_add(&req->list, &ctx->poll_list);
>   	else
>   		list_add_tail(&req->list, &ctx->poll_list);
> +
> +	if (ctx->flags & IORING_SETUP_SQPOLL && wq_has_sleeper(&ctx->sqo_wait))
> +		wake_up(&ctx->sqo_wait);
>   }
>   
>   static void io_file_put(struct io_submit_state *state)
> @@ -5011,9 +5014,9 @@ static int io_sq_thread(void *data)
>   	const struct cred *old_cred;
>   	mm_segment_t old_fs;
>   	DEFINE_WAIT(wait);
> -	unsigned inflight;
>   	unsigned long timeout;
> -	int ret;
> +	int ret = 0;
> +	bool needs_uring_lock = false;
>   
>   	complete(&ctx->completions[1]);
>   
> @@ -5021,39 +5024,21 @@ static int io_sq_thread(void *data)
>   	set_fs(USER_DS);
>   	old_cred = override_creds(ctx->creds);
>   
> -	ret = timeout = inflight = 0;
> +	if (ctx->flags & IORING_SETUP_IOPOLL)
> +		needs_uring_lock = true;
> +	timeout = jiffies + ctx->sq_thread_idle;
>   	while (!kthread_should_park()) {
>   		unsigned int to_submit;
>   
> -		if (inflight) {
> +		if (!list_empty(&ctx->poll_list)) {
>   			unsigned nr_events = 0;
>   
> -			if (ctx->flags & IORING_SETUP_IOPOLL) {
> -				/*
> -				 * inflight is the count of the maximum possible
> -				 * entries we submitted, but it can be smaller
> -				 * if we dropped some of them. If we don't have
> -				 * poll entries available, then we know that we
> -				 * have nothing left to poll for. Reset the
> -				 * inflight count to zero in that case.
> -				 */
> -				mutex_lock(&ctx->uring_lock);
> -				if (!list_empty(&ctx->poll_list))
> -					__io_iopoll_check(ctx, &nr_events, 0);
> -				else
> -					inflight = 0;
> -				mutex_unlock(&ctx->uring_lock);
> -			} else {
> -				/*
> -				 * Normal IO, just pretend everything completed.
> -				 * We don't have to poll completions for that.
> -				 */
> -				nr_events = inflight;
> -			}
> -
> -			inflight -= nr_events;
> -			if (!inflight)
> +			mutex_lock(&ctx->uring_lock);
> +			if (!list_empty(&ctx->poll_list))
> +				__io_iopoll_check(ctx, &nr_events, 0);
> +			if (list_empty(&ctx->poll_list))
>   				timeout = jiffies + ctx->sq_thread_idle;
> +			mutex_unlock(&ctx->uring_lock);
>   		}
>   
>   		to_submit = io_sqring_entries(ctx);
> @@ -5070,7 +5055,7 @@ static int io_sq_thread(void *data)
>   			 * more IO, we should wait for the application to
>   			 * reap events and wake us up.
>   			 */
> -			if (inflight ||
> +			if (!list_empty(&ctx->poll_list) ||
>   			    (!time_after(jiffies, timeout) && ret != -EBUSY &&
>   			    !percpu_ref_is_dying(&ctx->refs))) {
>   				cond_resched();
> @@ -5089,6 +5074,24 @@ static int io_sq_thread(void *data)
>   				cur_mm = NULL;
>   			}
>   
> +			/*
> +			 * While doing polled IO, before going to sleep, we need
> +			 * to check if there are new reqs added to poll_list, it
> +			 * is because reqs may have been punted to io worker and
> +			 * will be added to poll_list later, hence check the
> +			 * poll_list again, meanwhile we need to hold uring_lock
> +			 * to do this check, otherwise we may lose wakeup event
> +			 * in io_iopoll_req_issued().
> +			 */
> +			if (needs_uring_lock) {
> +				mutex_lock(&ctx->uring_lock);
> +				if (!list_empty(&ctx->poll_list)) {
> +					mutex_unlock(&ctx->uring_lock);
> +					cond_resched();
> +					continue;
> +				}
> +			}
> +
>   			prepare_to_wait(&ctx->sqo_wait, &wait,
>   						TASK_INTERRUPTIBLE);
>   
> @@ -5101,16 +5104,22 @@ static int io_sq_thread(void *data)
>   			if (!to_submit || ret == -EBUSY) {
>   				if (kthread_should_park()) {
>   					finish_wait(&ctx->sqo_wait, &wait);
> +					if (needs_uring_lock)
> +						mutex_unlock(&ctx->uring_lock);
>   					break;
>   				}
>   				if (signal_pending(current))
>   					flush_signals(current);
> +				if (needs_uring_lock)
> +					mutex_unlock(&ctx->uring_lock);
>   				schedule();
>   				finish_wait(&ctx->sqo_wait, &wait);
>   
>   				ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
>   				continue;
>   			}
> +			if (needs_uring_lock)
> +				mutex_unlock(&ctx->uring_lock);
>   			finish_wait(&ctx->sqo_wait, &wait);
>   
>   			ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
> @@ -5119,8 +5128,7 @@ static int io_sq_thread(void *data)
>   		mutex_lock(&ctx->uring_lock);
>   		ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);
>   		mutex_unlock(&ctx->uring_lock);
> -		if (ret > 0)
> -			inflight += ret;
> +		timeout = jiffies + ctx->sq_thread_idle;
>   	}
>   
>   	set_fs(old_fs);
> 

