* [PATCH v2] io-wq: fix race between worker exiting and activating free worker
@ 2021-08-04 20:00 Jens Axboe
  2021-08-04 20:28 ` Nadav Amit
  0 siblings, 1 reply; 2+ messages in thread
From: Jens Axboe @ 2021-08-04 20:00 UTC (permalink / raw)
  To: io-uring; +Cc: Nadav Amit

Nadav correctly reports that we have a race between a worker exiting
and new work being queued. This can lead to work being queued behind
an existing worker that could be sleeping on an event before it can
run to completion, introducing potentially large latency gaps if we
hit this race condition:

cpu0                                    cpu1
----                                    ----
                                        io_wqe_worker()
                                        schedule_timeout()
                                         // timed out
io_wqe_enqueue()
io_wqe_wake_worker()
// work_flags & IO_WQ_WORK_CONCURRENT
io_wqe_activate_free_worker()
                                         io_worker_exit()
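
For reference, this is the pre-patch activation path, reconstructed from
the lines removed in the diff below. It only ever inspects the first entry
on free_list, and it reports success even when wake_up_process() had no
effect because the target worker was already running and on its way out:

static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
{
	struct hlist_nulls_node *n;
	struct io_worker *worker;

	/* only the first free worker is ever considered */
	n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list));
	if (is_a_nulls(n))
		return false;

	worker = hlist_nulls_entry(n, struct io_worker, nulls_node);
	if (io_worker_get(worker)) {
		/* return value of wake_up_process() is not checked */
		wake_up_process(worker->task);
		io_worker_release(worker);
		return true;
	}

	return false;
}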

Fix this by having the exiting worker go through the normal decrement
of a running worker, which will spawn a new one if needed.
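
For context, here is a rough sketch of the running-count decrement the
exiting worker now goes through. This is a paraphrase rather than the
literal kernel source; the helpers io_wqe_run_queue() and
io_queue_worker_create() are assumptions about the surrounding io-wq
code, not part of this patch:

static void io_wqe_dec_running(struct io_worker *worker)
{
	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
	struct io_wqe *wqe = worker->wqe;

	/*
	 * If this was the last running worker for the account and work is
	 * still queued, arrange for a replacement worker so pending work
	 * does not sit behind a worker that will never run it.
	 */
	if (atomic_dec_and_test(&acct->nr_running) && io_wqe_run_queue(wqe))
		io_queue_worker_create(wqe, worker, acct);
}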

The free worker activation is modified to only return success if we
were able to find and wake a sleeping worker - if not, we keep looking
through the list, relying on wake_up_process() returning true only when
it actually woke the task. If we fail, we create a new worker as per
usual.
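
On the queueing side, that fallback looks roughly like the sketch below -
again a paraphrase, not the literal source. The create_io_worker()
signature comes from the forward declaration in the diff, while the
acct->max_workers and acct->index fields are assumptions:

static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
{
	bool activated;

	rcu_read_lock();
	activated = io_wqe_activate_free_worker(wqe);
	rcu_read_unlock();

	/*
	 * No sleeping worker could be woken (free_list empty, or every free
	 * worker is already running/exiting): create a brand new worker,
	 * bounded by the per-account worker limit.
	 */
	if (!activated && acct->nr_workers < acct->max_workers)
		create_io_worker(wqe->wq, wqe, acct->index);
}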

Cc: [email protected]
Link: https://lore.kernel.org/io-uring/[email protected]/
Reported-by: Nadav Amit <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>

---

diff --git a/fs/io-wq.c b/fs/io-wq.c
index cf086b01c6c6..50dc93ffc153 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -130,6 +130,7 @@ struct io_cb_cancel_data {
 };
 
 static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
+static void io_wqe_dec_running(struct io_worker *worker);
 
 static bool io_worker_get(struct io_worker *worker)
 {
@@ -168,26 +169,21 @@ static void io_worker_exit(struct io_worker *worker)
 {
 	struct io_wqe *wqe = worker->wqe;
 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
-	unsigned flags;
 
 	if (refcount_dec_and_test(&worker->ref))
 		complete(&worker->ref_done);
 	wait_for_completion(&worker->ref_done);
 
-	preempt_disable();
-	current->flags &= ~PF_IO_WORKER;
-	flags = worker->flags;
-	worker->flags = 0;
-	if (flags & IO_WORKER_F_RUNNING)
-		atomic_dec(&acct->nr_running);
-	worker->flags = 0;
-	preempt_enable();
-
 	raw_spin_lock_irq(&wqe->lock);
-	if (flags & IO_WORKER_F_FREE)
+	if (worker->flags & IO_WORKER_F_FREE)
 		hlist_nulls_del_rcu(&worker->nulls_node);
 	list_del_rcu(&worker->all_list);
 	acct->nr_workers--;
+	preempt_disable();
+	io_wqe_dec_running(worker);
+	worker->flags = 0;
+	current->flags &= ~PF_IO_WORKER;
+	preempt_enable();
 	raw_spin_unlock_irq(&wqe->lock);
 
 	kfree_rcu(worker, rcu);
@@ -214,15 +210,19 @@ static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
 	struct hlist_nulls_node *n;
 	struct io_worker *worker;
 
-	n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list));
-	if (is_a_nulls(n))
-		return false;
-
-	worker = hlist_nulls_entry(n, struct io_worker, nulls_node);
-	if (io_worker_get(worker)) {
-		wake_up_process(worker->task);
+	/*
+	 * Iterate free_list and see if we can find an idle worker to
+	 * activate. If a given worker is on the free_list but in the process
+	 * of exiting, keep trying.
+	 */
+	hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) {
+		if (!io_worker_get(worker))
+			continue;
+		if (wake_up_process(worker->task)) {
+			io_worker_release(worker);
+			return true;
+		}
 		io_worker_release(worker);
-		return true;
 	}
 
 	return false;

-- 
Jens Axboe



* Re: [PATCH v2] io-wq: fix race between worker exiting and activating free worker
  2021-08-04 20:00 [PATCH v2] io-wq: fix race between worker exiting and activating free worker Jens Axboe
@ 2021-08-04 20:28 ` Nadav Amit
  0 siblings, 0 replies; 2+ messages in thread
From: Nadav Amit @ 2021-08-04 20:28 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


> On Aug 4, 2021, at 1:00 PM, Jens Axboe <[email protected]> wrote:
> 
> Nadav correctly reports that we have a race between a worker exiting
> and new work being queued. This can lead to work being queued behind
> an existing worker that could be sleeping on an event before it can
> run to completion, introducing potentially large latency gaps if we
> hit this race condition:
> 
> cpu0                                    cpu1
> ----                                    ----
>                                        io_wqe_worker()
>                                        schedule_timeout()
>                                         // timed out
> io_wqe_enqueue()
> io_wqe_wake_worker()
> // work_flags & IO_WQ_WORK_CONCURRENT
> io_wqe_activate_free_worker()
>                                         io_worker_exit()
> 
> Fix this by having the exiting worker go through the normal decrement
> of a running worker, which will spawn a new one if needed.
> 
> The free worker activation is modified to only return success if we
> were able to find and wake a sleeping worker - if not, we keep looking
> through the list, relying on wake_up_process() returning true only when
> it actually woke the task. If we fail, we create a new worker as per
> usual.
> 
> Cc: [email protected]
> Link: https://lore.kernel.org/io-uring/[email protected]/
> Reported-by: Nadav Amit <[email protected]>
> Signed-off-by: Jens Axboe <[email protected]>


Tested-by: Nadav Amit <[email protected]>

Thanks!


