public inbox for [email protected]
* [PATCH 2/4] io_uring: implement our own schedule timeout handling
  2024-02-13 19:03 [PATCHSET 0/4] Add support for batched min timeout Jens Axboe
@ 2024-02-13 19:03 ` Jens Axboe
  0 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-13 19:03 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

In preparation for having two distinct timeouts, implement our own
schedule timeout handling so that we can avoid waking the task if we
don't need to.

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.c | 39 +++++++++++++++++++++++++++++++++++----
 io_uring/io_uring.h |  2 ++
 2 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 67cc7003b5bd..f2d3f39d6106 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2495,7 +2495,7 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
 	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
 	 * the task, and the next invocation will do it.
 	 */
-	if (io_should_wake(iowq) || io_has_work(iowq->ctx))
+	if (io_should_wake(iowq) || io_has_work(iowq->ctx) || iowq->hit_timeout)
 		return autoremove_wake_function(curr, mode, wake_flags, key);
 	return -1;
 }
@@ -2523,6 +2523,37 @@ static bool current_pending_io(void)
 	return percpu_counter_read_positive(&tctx->inflight);
 }
 
+static enum hrtimer_restart io_cqring_timer_wakeup(struct hrtimer *timer)
+{
+	struct io_wait_queue *iowq = container_of(timer, struct io_wait_queue, t);
+	struct io_ring_ctx *ctx = iowq->ctx;
+
+	WRITE_ONCE(iowq->hit_timeout, 1);
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		wake_up_process(ctx->submitter_task);
+	else
+		io_cqring_wake(ctx);
+	return HRTIMER_NORESTART;
+}
+
+static int io_cqring_schedule_timeout(struct io_wait_queue *iowq)
+{
+	iowq->hit_timeout = 0;
+	hrtimer_init_on_stack(&iowq->t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	iowq->t.function = io_cqring_timer_wakeup;
+	hrtimer_set_expires_range_ns(&iowq->t, iowq->timeout, 0);
+	hrtimer_start_expires(&iowq->t, HRTIMER_MODE_ABS);
+
+	if (!READ_ONCE(iowq->hit_timeout))
+		schedule();
+
+	hrtimer_cancel(&iowq->t);
+	destroy_hrtimer_on_stack(&iowq->t);
+	__set_current_state(TASK_RUNNING);
+
+	return READ_ONCE(iowq->hit_timeout) ? -ETIME : 0;
+}
+
 static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 				     struct io_wait_queue *iowq)
 {
@@ -2536,10 +2567,10 @@ static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	io_wait = current->in_iowait;
 	if (current_pending_io())
 		current->in_iowait = 1;
-	if (iowq->timeout == KTIME_MAX)
+	if (iowq->timeout != KTIME_MAX)
+		ret = io_cqring_schedule_timeout(iowq);
+	else
 		schedule();
-	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
-		ret = -ETIME;
 	current->in_iowait = io_wait;
 	return ret;
 }
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 1ca99522811b..d7295ae2c8a6 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -40,7 +40,9 @@ struct io_wait_queue {
 	struct io_ring_ctx *ctx;
 	unsigned cq_tail;
 	unsigned nr_timeouts;
+	int hit_timeout;
 	ktime_t timeout;
+	struct hrtimer t;
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	unsigned int napi_busy_poll_to;
-- 
2.43.0



* [PATCHSET v2 0/4] Add support for batched min timeout
@ 2024-02-15 16:06 Jens Axboe
  2024-02-15 16:06 ` [PATCH 1/4] io_uring: move schedule wait logic into helper Jens Axboe
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-15 16:06 UTC (permalink / raw)
  To: io-uring

Hi,

Normal CQE waiting is generally either done with a timeout, or without
one. Outside of the timeout, the other key parameter is how many events
to wait for. If we ask for N events and we get that within the timeout,
then we return successfully. If we do not, then we return with -ETIME
and the application can then check how many CQEs are actually available,
if any.
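
For reference, here's a minimal sketch of that existing N+T style of
wait using stock liburing (io_uring_wait_cqes() is existing API; the
helper name is made up, and ring setup and CQE reaping are omitted):

  #include <liburing.h>

  /* wait for up to N = 8 completions within T = 50 usec */
  static int wait_batch(struct io_uring *ring)
  {
          struct __kernel_timespec ts = { .tv_nsec = 50 * 1000 };
          struct io_uring_cqe *cqe;

          /* returns 0 on success, -ETIME if T expired first */
          return io_uring_wait_cqes(ring, &cqe, 8, &ts, NULL);
  }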

This works fine, but we're increasingly using smaller timeouts in
applications for targeted batch waiting. E.g. "give me N requests in T
usec". If the application has other things to do every T usec, this
works well. But if it's an event loop that wants to process completions
to make progress, it's pointless to return after T usec if there's
nothing to do. The application can't reliably just make T bigger, as
this may be the target it has to meet at busier times of the day.

This patchset adds support for min timeout waiting, which adds a third
parameter to how waits are done. The N and T timeout remain, but we add
a min_timeout option, M. The batch is now defined by N and M. The
application can now say "give me N requests in M usec, but if none have
arrived, just sleep until T has passed". This allows for using a sane
N+M, while avoiding waking up and returning all the time if nothing
happens.

The semantics are as follows:

- If M expires and no events are available, keep waiting until T has
  expired. This is identical to using N+T without setting M at all,
  except if an event arrives after M has expired, we return immediately.

- If M expires and events are available, return those even if it's
  less than N.

- If N events arrive before M expires, return those events. This is
  identical to setting T == M, or to not setting M at all.
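
To make the semantics concrete, here's a sketch of the raw
io_uring_enter(2) call this maps to, using the EXT_ARG interface as
extended in patch 4/4. It assumes a kernel with this series applied
and a uapi header carrying the new min_wait_usec field; the helper
name is hypothetical, and liburing helpers from the min-wait branch
aren't assumed:

  #include <linux/io_uring.h>
  #include <linux/time_types.h>
  #include <stdint.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int wait_min_batch(int ring_fd)
  {
          /* T = 1 msec overall timeout */
          struct __kernel_timespec ts = { .tv_nsec = 1000 * 1000 };
          struct io_uring_getevents_arg arg = {
                  .min_wait_usec = 50,            /* M = 50 usec */
                  .ts = (uint64_t)(uintptr_t)&ts,
          };

          /* N = 8: wake on 8 CQEs, or per the M/T rules above */
          return syscall(__NR_io_uring_enter, ring_fd, 0, 8,
                         IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
                         &arg, sizeof(arg));
  }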

There's a liburing branch with test cases here:

https://git.kernel.dk/cgit/liburing/log/?h=min-wait

and the patches are on top of the current for-6.9/io_uring branch. They
can also be viewed here:

https://git.kernel.dk/cgit/linux/log/?h=io_uring-min-wait

 include/uapi/linux/io_uring.h |   3 +-
 io_uring/io_uring.c           | 156 ++++++++++++++++++++++++++++------
 io_uring/io_uring.h           |   4 +
 3 files changed, 134 insertions(+), 29 deletions(-)

Changes since v1:
- Fix an issue with using both min_wait and a timeout, when transitioning
  to the long timeout: we'd potentially add the current time more than
  once, causing much longer waits than what was asked for. A test case
  has been added for that as well.

-- 
Jens Axboe



* [PATCH 1/4] io_uring: move schedule wait logic into helper
  2024-02-15 16:06 [PATCHSET v2 0/4] Add support for batched min timeout Jens Axboe
@ 2024-02-15 16:06 ` Jens Axboe
  2024-02-15 16:06 ` [PATCH 2/4] io_uring: implement our own schedule timeout handling Jens Axboe
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-15 16:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

In preparation for expanding how we handle waits, move the actual
schedule and schedule_timeout() handling into a helper.

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 45a2f8f3a77c..8f52acc14ebc 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2516,22 +2516,10 @@ static bool current_pending_io(void)
 	return percpu_counter_read_positive(&tctx->inflight);
 }
 
-/* when returns >0, the caller should retry */
-static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
-					  struct io_wait_queue *iowq)
+static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+				     struct io_wait_queue *iowq)
 {
-	int io_wait, ret;
-
-	if (unlikely(READ_ONCE(ctx->check_cq)))
-		return 1;
-	if (unlikely(!llist_empty(&ctx->work_llist)))
-		return 1;
-	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
-		return 1;
-	if (unlikely(task_sigpending(current)))
-		return -EINTR;
-	if (unlikely(io_should_wake(iowq)))
-		return 0;
+	int io_wait, ret = 0;
 
 	/*
 	 * Mark us as being in io_wait if we have pending requests, so cpufreq
@@ -2541,7 +2529,6 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	io_wait = current->in_iowait;
 	if (current_pending_io())
 		current->in_iowait = 1;
-	ret = 0;
 	if (iowq->timeout == KTIME_MAX)
 		schedule();
 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
@@ -2550,6 +2537,24 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	return ret;
 }
 
+/* when returns >0, the caller should retry */
+static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+					  struct io_wait_queue *iowq)
+{
+	if (unlikely(READ_ONCE(ctx->check_cq)))
+		return 1;
+	if (unlikely(!llist_empty(&ctx->work_llist)))
+		return 1;
+	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
+		return 1;
+	if (unlikely(task_sigpending(current)))
+		return -EINTR;
+	if (unlikely(io_should_wake(iowq)))
+		return 0;
+
+	return __io_cqring_wait_schedule(ctx, iowq);
+}
+
 /*
  * Wait until events become available, if we don't already have some. The
  * application must reap them itself, as they reside on the shared cq ring.
-- 
2.43.0



* [PATCH 2/4] io_uring: implement our own schedule timeout handling
  2024-02-15 16:06 [PATCHSET v2 0/4] Add support for batched min timeout Jens Axboe
  2024-02-15 16:06 ` [PATCH 1/4] io_uring: move schedule wait logic into helper Jens Axboe
@ 2024-02-15 16:06 ` Jens Axboe
  2024-02-15 16:06 ` [PATCH 3/4] io_uring: add support for batch wait timeout Jens Axboe
  2024-02-15 16:06 ` [PATCH 4/4] io_uring: wire up min batch wake timeout Jens Axboe
  3 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-15 16:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

In preparation for having two distinct timeouts, implement our own
schedule timeout handling so that we can avoid waking the task if we
don't need to.

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.c | 39 +++++++++++++++++++++++++++++++++++----
 io_uring/io_uring.h |  2 ++
 2 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 8f52acc14ebc..ebc646ad6acf 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2488,7 +2488,7 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
 	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
 	 * the task, and the next invocation will do it.
 	 */
-	if (io_should_wake(iowq) || io_has_work(iowq->ctx))
+	if (io_should_wake(iowq) || io_has_work(iowq->ctx) || iowq->hit_timeout)
 		return autoremove_wake_function(curr, mode, wake_flags, key);
 	return -1;
 }
@@ -2516,6 +2516,37 @@ static bool current_pending_io(void)
 	return percpu_counter_read_positive(&tctx->inflight);
 }
 
+static enum hrtimer_restart io_cqring_timer_wakeup(struct hrtimer *timer)
+{
+	struct io_wait_queue *iowq = container_of(timer, struct io_wait_queue, t);
+	struct io_ring_ctx *ctx = iowq->ctx;
+
+	WRITE_ONCE(iowq->hit_timeout, 1);
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		wake_up_process(ctx->submitter_task);
+	else
+		io_cqring_wake(ctx);
+	return HRTIMER_NORESTART;
+}
+
+static int io_cqring_schedule_timeout(struct io_wait_queue *iowq)
+{
+	iowq->hit_timeout = 0;
+	hrtimer_init_on_stack(&iowq->t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	iowq->t.function = io_cqring_timer_wakeup;
+	hrtimer_set_expires_range_ns(&iowq->t, iowq->timeout, 0);
+	hrtimer_start_expires(&iowq->t, HRTIMER_MODE_ABS);
+
+	if (!READ_ONCE(iowq->hit_timeout))
+		schedule();
+
+	hrtimer_cancel(&iowq->t);
+	destroy_hrtimer_on_stack(&iowq->t);
+	__set_current_state(TASK_RUNNING);
+
+	return READ_ONCE(iowq->hit_timeout) ? -ETIME : 0;
+}
+
 static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 				     struct io_wait_queue *iowq)
 {
@@ -2529,10 +2560,10 @@ static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	io_wait = current->in_iowait;
 	if (current_pending_io())
 		current->in_iowait = 1;
-	if (iowq->timeout == KTIME_MAX)
+	if (iowq->timeout != KTIME_MAX)
+		ret = io_cqring_schedule_timeout(iowq);
+	else
 		schedule();
-	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
-		ret = -ETIME;
 	current->in_iowait = io_wait;
 	return ret;
 }
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 6426ee382276..9d1045bdc505 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -40,7 +40,9 @@ struct io_wait_queue {
 	struct io_ring_ctx *ctx;
 	unsigned cq_tail;
 	unsigned nr_timeouts;
+	int hit_timeout;
 	ktime_t timeout;
+	struct hrtimer t;
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	unsigned int napi_busy_poll_to;
-- 
2.43.0



* [PATCH 3/4] io_uring: add support for batch wait timeout
  2024-02-15 16:06 [PATCHSET v2 0/4] Add support for batched min timeout Jens Axboe
  2024-02-15 16:06 ` [PATCH 1/4] io_uring: move schedule wait logic into helper Jens Axboe
  2024-02-15 16:06 ` [PATCH 2/4] io_uring: implement our own schedule timeout handling Jens Axboe
@ 2024-02-15 16:06 ` Jens Axboe
  2024-02-15 16:06 ` [PATCH 4/4] io_uring: wire up min batch wake timeout Jens Axboe
  3 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-15 16:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Waiting for events with io_uring has two knobs that can be set:

1) The number of events to wake for
2) The timeout associated with the event

Waiting will abort when either of those conditions is met, as expected.

This adds support for a third knob, which is associated with the number
of events to wait for. Applications generally like to handle batches of
completions, and right now they'd set a number of events to wait for and
the timeout for that. If no events have been received but the timeout
triggers, control is returned to the application and it can wait again.
However, if the application doesn't have anything to do until events are
reaped, then it's possible to make this waiting more efficient.

For example, the application may have a latency budget of 50 usec and
want to handle a batch of 8 requests at a time. If it uses 50 usec as
the timeout, then it'll be doing 20K context switches per second
(1,000,000 usec / 50 usec) even if nothing is happening.

This introduces the notion of min batch wait time. If the min batch wait
time expires, then we'll return to userspace if we have any events at all.
If none are available, the general wait time is applied. Any request
arriving after the min batch wait time will cause waiting to stop and
return control to the application.
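
As a concrete example of those rules, with N = 8, M = 50 usec, and
T = 1 msec:

- 8 CQEs arrive by t = 30 usec: return at 30 usec with 8 CQEs
- 3 CQEs arrive by t = 50 usec: return at 50 usec with those 3
- no CQEs by t = 50 usec, one arrives at t = 200 usec: return at
  200 usec with 1 CQE
- no CQEs at all: return -ETIME at t = 1 msec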

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/io_uring.c | 79 +++++++++++++++++++++++++++++++++++++++------
 io_uring/io_uring.h |  2 ++
 2 files changed, 71 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ebc646ad6acf..e72261f280a7 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2529,12 +2529,65 @@ static enum hrtimer_restart io_cqring_timer_wakeup(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }
 
-static int io_cqring_schedule_timeout(struct io_wait_queue *iowq)
+/*
+ * Doing min_timeout portion. If we saw any timeouts, events, or have work,
+ * wake up. If not, and we have a normal timeout, switch to that and keep
+ * sleeping.
+ */
+static enum hrtimer_restart io_cqring_min_timer_wakeup(struct hrtimer *timer)
 {
+	struct io_wait_queue *iowq = container_of(timer, struct io_wait_queue, t);
+	struct io_ring_ctx *ctx = iowq->ctx;
+	ktime_t timeout;
+
+	/* no general timeout, or shorter, we are done */
+	if (iowq->timeout == KTIME_MAX ||
+	    ktime_after(iowq->min_timeout, iowq->timeout))
+		goto out_wake;
+	/* work we may need to run, wake function will see if we need to wake */
+	if (io_has_work(ctx))
+		goto out_wake;
+	/* got events since we started waiting, min timeout is done */
+	if (iowq->cq_min_tail != READ_ONCE(ctx->rings->cq.tail))
+		goto out_wake;
+	/* if we have any events and min timeout expired, we're done */
+	if (io_cqring_events(ctx))
+		goto out_wake;
+
+	/*
+	 * If using deferred task_work running and application is waiting on
+	 * more than one request, ensure we reset it now where we are switching
+	 * to normal sleeps. Any request completion post min_wait should wake
+	 * the task and return.
+	 */
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		atomic_set(&ctx->cq_wait_nr, 1);
+
+	timeout = ktime_sub_ns(iowq->timeout, iowq->min_timeout);
+	iowq->t.function = io_cqring_timer_wakeup;
+	hrtimer_set_expires(timer, ktime_add_ns(timeout, ktime_get_ns()));
+	return HRTIMER_RESTART;
+out_wake:
+	return io_cqring_timer_wakeup(timer);
+}
+
+static int io_cqring_schedule_timeout(struct io_wait_queue *iowq,
+				      ktime_t start_time)
+{
+	ktime_t timeout;
+
 	iowq->hit_timeout = 0;
 	hrtimer_init_on_stack(&iowq->t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
-	iowq->t.function = io_cqring_timer_wakeup;
-	hrtimer_set_expires_range_ns(&iowq->t, iowq->timeout, 0);
+
+	if (iowq->min_timeout != KTIME_MAX) {
+		timeout = ktime_add_ns(iowq->min_timeout, start_time);
+		iowq->t.function = io_cqring_min_timer_wakeup;
+	} else {
+		timeout = ktime_add_ns(iowq->timeout, start_time);
+		iowq->t.function = io_cqring_timer_wakeup;
+	}
+
+	hrtimer_set_expires_range_ns(&iowq->t, timeout, 0);
 	hrtimer_start_expires(&iowq->t, HRTIMER_MODE_ABS);
 
 	if (!READ_ONCE(iowq->hit_timeout))
@@ -2548,7 +2601,8 @@ static int io_cqring_schedule_timeout(struct io_wait_queue *iowq)
 }
 
 static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
-				     struct io_wait_queue *iowq)
+				     struct io_wait_queue *iowq,
+				     ktime_t start_time)
 {
 	int io_wait, ret = 0;
 
@@ -2560,8 +2614,8 @@ static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	io_wait = current->in_iowait;
 	if (current_pending_io())
 		current->in_iowait = 1;
-	if (iowq->timeout != KTIME_MAX)
-		ret = io_cqring_schedule_timeout(iowq);
+	if (iowq->timeout != KTIME_MAX || iowq->min_timeout != KTIME_MAX)
+		ret = io_cqring_schedule_timeout(iowq, start_time);
 	else
 		schedule();
 	current->in_iowait = io_wait;
@@ -2570,7 +2624,8 @@ static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 
 /* when returns >0, the caller should retry */
 static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
-					  struct io_wait_queue *iowq)
+					  struct io_wait_queue *iowq,
+					  ktime_t start_time)
 {
 	if (unlikely(READ_ONCE(ctx->check_cq)))
 		return 1;
@@ -2583,7 +2638,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	if (unlikely(io_should_wake(iowq)))
 		return 0;
 
-	return __io_cqring_wait_schedule(ctx, iowq);
+	return __io_cqring_wait_schedule(ctx, iowq, start_time);
 }
 
 /*
@@ -2596,6 +2651,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 {
 	struct io_wait_queue iowq;
 	struct io_rings *rings = ctx->rings;
+	ktime_t start_time;
 	int ret;
 
 	if (!io_allowed_run_tw(ctx))
@@ -2626,8 +2682,11 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	INIT_LIST_HEAD(&iowq.wq.entry);
 	iowq.ctx = ctx;
 	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
+	iowq.cq_min_tail = READ_ONCE(ctx->rings->cq.tail);
 	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
+	iowq.min_timeout = KTIME_MAX;
 	iowq.timeout = KTIME_MAX;
+	start_time = ktime_get_ns();
 
 	if (uts) {
 		struct timespec64 ts;
@@ -2635,7 +2694,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		if (get_timespec64(&ts, uts))
 			return -EFAULT;
 
-		iowq.timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
+		iowq.timeout = timespec64_to_ktime(ts);
 		io_napi_adjust_timeout(ctx, &iowq, &ts);
 	}
 
@@ -2654,7 +2713,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 							TASK_INTERRUPTIBLE);
 		}
 
-		ret = io_cqring_wait_schedule(ctx, &iowq);
+		ret = io_cqring_wait_schedule(ctx, &iowq, start_time);
 		__set_current_state(TASK_RUNNING);
 		atomic_set(&ctx->cq_wait_nr, IO_CQ_WAKE_INIT);
 
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 9d1045bdc505..f385c7e36cb7 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -39,8 +39,10 @@ struct io_wait_queue {
 	struct wait_queue_entry wq;
 	struct io_ring_ctx *ctx;
 	unsigned cq_tail;
+	unsigned cq_min_tail;
 	unsigned nr_timeouts;
 	int hit_timeout;
+	ktime_t min_timeout;
 	ktime_t timeout;
 	struct hrtimer t;
 
-- 
2.43.0



* [PATCH 4/4] io_uring: wire up min batch wake timeout
  2024-02-15 16:06 [PATCHSET v2 0/4] Add support for batched min timeout Jens Axboe
                   ` (2 preceding siblings ...)
  2024-02-15 16:06 ` [PATCH 3/4] io_uring: add support for batch wait timeout Jens Axboe
@ 2024-02-15 16:06 ` Jens Axboe
  3 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2024-02-15 16:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Expose min_wait_usec in io_uring_getevents_arg, replacing the pad member
that is currently in there. The value is in usec, as the name indicates.

Note that if min_wait_usec and a normal timeout are used in conjunction,
the normal timeout is still relative to the base time. For example, if
min_wait_usec is set to 100 and the normal timeout is 1000, the max
total time waited is still 1000. This also means that if the normal
timeout is shorter than min_wait_usec, then only the min_wait_usec will
take effect.

See previous commit for an explanation of how this works.

IORING_FEAT_MIN_TIMEOUT is added as a feature flag for this, as
applications doing submit_and_wait_timeout() style operations will
generally not see the -EINVAL from the wait side, since such calls
return the number of IOs submitted. Only if no IOs are submitted will
the -EINVAL bubble back up to the application.
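
Applications can instead probe for support up front. A minimal sketch,
assuming a uapi header that defines IORING_FEAT_MIN_TIMEOUT as per this
patch (helper name, ring size, and error handling arbitrary):

  #include <linux/io_uring.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int supports_min_timeout(void)
  {
          struct io_uring_params p;
          int fd;

          memset(&p, 0, sizeof(p));
          /* the kernel fills in p.features on setup */
          fd = syscall(__NR_io_uring_setup, 8, &p);
          if (fd < 0)
                  return 0;
          close(fd);
          return (p.features & IORING_FEAT_MIN_TIMEOUT) != 0;
  }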

Signed-off-by: Jens Axboe <[email protected]>
---
 include/uapi/linux/io_uring.h |  3 ++-
 io_uring/io_uring.c           | 21 +++++++++++++--------
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 7bd10201a02b..dbefda14d087 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -522,6 +522,7 @@ struct io_uring_params {
 #define IORING_FEAT_CQE_SKIP		(1U << 11)
 #define IORING_FEAT_LINKED_FILE		(1U << 12)
 #define IORING_FEAT_REG_REG_RING	(1U << 13)
+#define IORING_FEAT_MIN_TIMEOUT		(1U << 14)
 
 /*
  * io_uring_register(2) opcodes and arguments
@@ -738,7 +739,7 @@ enum {
 struct io_uring_getevents_arg {
 	__u64	sigmask;
 	__u32	sigmask_sz;
-	__u32	pad;
+	__u32	min_wait_usec;
 	__u64	ts;
 };
 
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e72261f280a7..8dd5eb647b43 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2647,7 +2647,8 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
  */
 static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 			  const sigset_t __user *sig, size_t sigsz,
-			  struct __kernel_timespec __user *uts)
+			  struct __kernel_timespec __user *uts,
+			  ktime_t min_time)
 {
 	struct io_wait_queue iowq;
 	struct io_rings *rings = ctx->rings;
@@ -2684,7 +2685,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
 	iowq.cq_min_tail = READ_ONCE(ctx->rings->cq.tail);
 	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
-	iowq.min_timeout = KTIME_MAX;
+	iowq.min_timeout = min_time;
 	iowq.timeout = KTIME_MAX;
 	start_time = ktime_get_ns();
 
@@ -3634,10 +3635,12 @@ static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t a
 
 static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
 			  struct __kernel_timespec __user **ts,
-			  const sigset_t __user **sig)
+			  const sigset_t __user **sig, ktime_t *min_time)
 {
 	struct io_uring_getevents_arg arg;
 
+	*min_time = KTIME_MAX;
+
 	/*
 	 * If EXT_ARG isn't set, then we have no timespec and the argp pointer
 	 * is just a pointer to the sigset_t.
@@ -3656,8 +3659,8 @@ static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz
 		return -EINVAL;
 	if (copy_from_user(&arg, argp, sizeof(arg)))
 		return -EFAULT;
-	if (arg.pad)
-		return -EINVAL;
+	if (arg.min_wait_usec)
+		*min_time = arg.min_wait_usec * NSEC_PER_USEC;
 	*sig = u64_to_user_ptr(arg.sigmask);
 	*argsz = arg.sigmask_sz;
 	*ts = u64_to_user_ptr(arg.ts);
@@ -3769,13 +3772,14 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 		} else {
 			const sigset_t __user *sig;
 			struct __kernel_timespec __user *ts;
+			ktime_t min_time;
 
-			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
+			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig, &min_time);
 			if (likely(!ret2)) {
 				min_complete = min(min_complete,
 						   ctx->cq_entries);
 				ret2 = io_cqring_wait(ctx, min_complete, sig,
-						      argsz, ts);
+						      argsz, ts, min_time);
 			}
 		}
 
@@ -4058,7 +4062,8 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
 			IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
 			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
 			IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
-			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING;
+			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
+			IORING_FEAT_MIN_TIMEOUT;
 
 	if (copy_to_user(params, p, sizeof(*p))) {
 		ret = -EFAULT;
-- 
2.43.0

