public inbox for [email protected]
* [PATCHSET 0/2] tw/wakeup tweaks
@ 2022-12-17 20:48 Jens Axboe
  2022-12-17 20:48 ` [PATCH 1/2] io_uring: don't use TIF_NOTIFY_SIGNAL to test for availability of task_work Jens Axboe
  2022-12-17 20:48 ` [PATCH 2/2] io_uring: include task_work run after scheduling in wait for events Jens Axboe
  0 siblings, 2 replies; 4+ messages in thread
From: Jens Axboe @ 2022-12-17 20:48 UTC
  To: io-uring; +Cc: dylany, asml.silence

Hi,

Patch one tweaks the check for whether we have task_work pending to be
more inclusive, rather than gating solely on TIF_NOTIFY_SIGNAL. Patch
two ensures we don't do extra loops in io_cqring_wait() - and just as
importantly, that the expected run path for task_work will not have the
task itself adding to the ctx->cq_wait waitqueue.
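
For reference, the wait loop both patches touch looks roughly like
this (a simplified sketch of io_cqring_wait(), reduced from the context
visible in the diffs below):

	do {
		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
						TASK_INTERRUPTIBLE);
		/* sleeps; after patch 2, also runs task_work on wakeup */
		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
		cond_resched();
	} while (ret > 0);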

-- 
Jens Axboe




* [PATCH 1/2] io_uring: don't use TIF_NOTIFY_SIGNAL to test for availability of task_work
  2022-12-17 20:48 [PATCHSET 0/2] tw/wakeup tweaks Jens Axboe
@ 2022-12-17 20:48 ` Jens Axboe
  2022-12-17 20:48 ` [PATCH 2/2] io_uring: include task_work run after scheduling in wait for events Jens Axboe
  1 sibling, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-12-17 20:48 UTC
  To: io-uring; +Cc: dylany, asml.silence, Jens Axboe

Use task_work_pending() as a better test for whether we have task_work
or not; TIF_NOTIFY_SIGNAL is only set if any of the task_work items
were queued with TWA_SIGNAL as the notification mechanism. Hence
task_work_pending() is a more reliable check.
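
To illustrate (hypothetical snippet, not part of this patch - task and
some_tw_fn are made-up names): an item queued with a notification mode
other than TWA_SIGNAL never sets TIF_NOTIFY_SIGNAL, yet
task_work_pending() still sees it:

	/* Hypothetical illustration: both items below are pending
	 * task_work, but only the first sets TIF_NOTIFY_SIGNAL.
	 */
	struct callback_head cb_a = { .func = some_tw_fn };
	struct callback_head cb_b = { .func = some_tw_fn };

	task_work_add(task, &cb_a, TWA_SIGNAL);	/* sets TIF_NOTIFY_SIGNAL */
	task_work_add(task, &cb_b, TWA_RESUME);	/* flag left untouched */

	/* task_work_pending(task) is true in either case */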

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index c117e029c8dc..e9f0d41ebb99 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -266,8 +266,7 @@ static inline int io_run_task_work(void)
 
 static inline bool io_task_work_pending(struct io_ring_ctx *ctx)
 {
-	return test_thread_flag(TIF_NOTIFY_SIGNAL) ||
-		!wq_list_empty(&ctx->work_llist);
+	return task_work_pending(current) || !wq_list_empty(&ctx->work_llist);
 }
 
 static inline int io_run_task_work_ctx(struct io_ring_ctx *ctx)
-- 
2.35.1



* [PATCH 2/2] io_uring: include task_work run after scheduling in wait for events
  2022-12-17 20:48 [PATCHSET 0/2] tw/wakeup tweaks Jens Axboe
  2022-12-17 20:48 ` [PATCH 1/2] io_uring: don't use TIF_NOTIFY_SIGNAL to test for availability of task_work Jens Axboe
@ 2022-12-17 20:48 ` Jens Axboe
  2022-12-18  3:37   ` [PATCH v2 " Jens Axboe
  1 sibling, 1 reply; 4+ messages in thread
From: Jens Axboe @ 2022-12-17 20:48 UTC
  To: io-uring; +Cc: dylany, asml.silence, Jens Axboe

It's quite possible that we got woken up because task_work was queued,
and we need to process this task_work to generate the events waited for.
If we return to the wait loop without running task_work, we'll end up
adding the task to the waitqueue again, only to call
io_cqring_wait_schedule() again, which will run the task_work. This is
less efficient than it could be, as it requires adding to the cq_wait
queue again. It also triggers the wakeup path for completions, as
cq_wait is now non-empty with the task itself, and it'll require another
lock grab and deletion to remove ourselves from the waitqueue.
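
Sketched out (flow description, not actual tree code), the round trip
being avoided is:

	/*
	 * 1. task sleeps on ctx->cq_wait
	 * 2. task_work is queued against it -> task is woken
	 * 3. old flow: io_cqring_wait_schedule() returns 1, and the
	 *    loop re-adds the task to cq_wait via
	 *    prepare_to_wait_exclusive()
	 * 4. the next io_cqring_wait_schedule() call finally runs the
	 *    task_work, posts the CQEs, and wakes cq_wait once more
	 *
	 * Running task_work right after schedule_hrtimeout() returns
	 * folds steps 3 and 4 into the current iteration.
	 */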

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 16a323a9ff70..945bea3e8e5f 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2481,7 +2481,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	}
 	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
 		return -ETIME;
-	return 1;
+	return io_run_task_work_sig(ctx);
 }
 
 /*
@@ -2546,6 +2546,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
 						TASK_INTERRUPTIBLE);
 		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
+		if (__io_cqring_events_user(ctx) >= min_events)
+			break;
 		cond_resched();
 	} while (ret > 0);
 
-- 
2.35.1



* [PATCH v2 2/2] io_uring: include task_work run after scheduling in wait for events
  2022-12-17 20:48 ` [PATCH 2/2] io_uring: include task_work run after scheduling in wait for events Jens Axboe
@ 2022-12-18  3:37   ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-12-18  3:37 UTC
  To: io-uring; +Cc: dylany, asml.silence

It's quite possible that we got woken up because task_work was queued,
and we need to process this task_work to generate the events waited for.
If we return to the wait loop without running task_work, we'll end up
adding the task to the waitqueue again, only to call
io_cqring_wait_schedule() again, which will run the task_work. This is
less efficient than it could be, as it requires adding to the cq_wait
queue again. It also triggers the wakeup path for completions, as
cq_wait is now non-empty with the task itself, and it'll require another
lock grab and deletion to remove ourselves from the waitqueue.

Signed-off-by: Jens Axboe <[email protected]>

---

v2: tweak the return value so we don't potentially return early from
    waiting on events if we had no task_work to run after returning
    from schedule.
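
In other words (assuming io_run_task_work_sig() returns 0 when there
was no task_work to run and a negative value on a pending signal),
returning its value directly as v1 did could hand a 0 back to the
caller's while (ret > 0) loop and stop the wait before min_events
completions had arrived. v2 maps every non-negative result to 1:

	ret = io_run_task_work_sig(ctx);
	/* only a signal/error (< 0) may terminate the wait loop */
	return ret < 0 ? ret : 1;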

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 16a323a9ff70..ff2bbac1a10f 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2481,7 +2481,14 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	}
 	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
 		return -ETIME;
-	return 1;
+
+	/*
+	 * Run task_work after scheduling. If we got woken because of
+	 * task_work being processed, run it now rather than let the caller
+	 * do another wait loop.
+	 */
+	ret = io_run_task_work_sig(ctx);
+	return ret < 0 ? ret : 1;
 }
 
 /*
@@ -2546,6 +2553,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
 						TASK_INTERRUPTIBLE);
 		ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
+		if (__io_cqring_events_user(ctx) >= min_events)
+			break;
 		cond_resched();
 	} while (ret > 0);

-- 
Jens Axboe


