public inbox for [email protected]
* [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed
@ 2022-08-19 12:19 Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 1/7] io_uring: remove unnecessary variable Dylan Yudaken
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

We have seen workloads which suffer due to the way task work is currently
scheduled. This scheduling can cause non-trivial tasks to run, interrupting
useful work on the workload. For example, in network servers a large async
recv may run, calling memcpy on a large packet and interrupting a send,
which adds latency.

This series adds an option to defer async work until user space calls
io_uring_enter with the IORING_ENTER_GETEVENTS flag. This allows the
workload to choose when to schedule async work and gives it finer control
over scheduling (at the expense of the complexity of managing this).
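
For context, the intended usage from user space looks roughly like the
sketch below (a minimal example, assuming liburing and uapi headers new
enough to carry the IORING_SETUP_DEFER_TASKRUN flag added in patch 4;
error handling omitted):

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring_params p = { };
	struct io_uring ring;
	struct io_uring_cqe *cqe;

	/* DEFER_TASKRUN is only allowed together with SINGLE_ISSUER */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	if (io_uring_queue_init_params(8, &ring, &p))
		return 1;

	io_uring_prep_nop(io_uring_get_sqe(&ring));
	io_uring_submit(&ring);

	/*
	 * Deferred task work for this ring is only run when this same task
	 * enters the kernel with IORING_ENTER_GETEVENTS, which
	 * io_uring_wait_cqe() does under the hood.
	 */
	io_uring_wait_cqe(&ring, &cqe);
	printf("nop res=%d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}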

Patches 1 and 2 are prep patches
Patch 3 changes io_uring_enter to not pre-run task work
Patches 4-6 add the new flag and functionality
Patch 7 adds tracing for the local task work running

Changes since v2:
 - add a patch to trace local task work run
 - return -EEXIST if calling from the wrong task
 - properly handle shutting down due to an exec
 - remove 'all' parameter from io_run_task_work_ctx
 
Changes since v1:
 - Removed the first patch (using ctx variable) which was broken
 - Require IORING_SETUP_SINGLE_ISSUER and make sure waiter task
   is the same as the submitter task
 - Just don't run task work at the start of io_uring_enter (Pavel's
   suggestion)
 - Remove io_move_task_work_from_local
 - Fix locking bugs

Dylan Yudaken (7):
  io_uring: remove unnecessary variable
  io_uring: introduce io_has_work
  io_uring: do not run task work at the start of io_uring_enter
  io_uring: add IORING_SETUP_DEFER_TASKRUN
  io_uring: move io_eventfd_put
  io_uring: signal registered eventfd to process deferred task work
  io_uring: trace local task work run

 include/linux/io_uring_types.h  |   3 +
 include/trace/events/io_uring.h |  29 ++++
 include/uapi/linux/io_uring.h   |   7 +
 io_uring/cancel.c               |   2 +-
 io_uring/io_uring.c             | 264 ++++++++++++++++++++++++++------
 io_uring/io_uring.h             |  29 +++-
 io_uring/rsrc.c                 |   2 +-
 7 files changed, 285 insertions(+), 51 deletions(-)


base-commit: 5993000dc6b31b927403cee65fbc5f9f070fa3e4
prerequisite-patch-id: cb1d024945aa728d09a131156140a33d30bc268b
-- 
2.30.2


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 1/7] io_uring: remove unnecessary variable
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 2/7] io_uring: introduce io_has_work Dylan Yudaken
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

'running' is set once and read only once, so it can simply be removed.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ebfdb2212ec2..0c9fe0f1c174 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1052,12 +1052,9 @@ void io_req_task_work_add(struct io_kiocb *req)
 	struct io_uring_task *tctx = req->task->io_uring;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct llist_node *node;
-	bool running;
-
-	running = !llist_add(&req->io_task_work.node, &tctx->task_list);
 
 	/* task_work already pending, we're done */
-	if (running)
+	if (!llist_add(&req->io_task_work.node, &tctx->task_list))
 		return;
 
 	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 2/7] io_uring: introduce io_has_work
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 1/7] io_uring: remove unnecessary variable Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 3/7] io_uring: do not run task work at the start of io_uring_enter Dylan Yudaken
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

This will be used later to know if the ring has outstanding work. Right
now it only checks whether there are overflow CQEs to copy to the main CQ
ring, but later it will also include deferred task work.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0c9fe0f1c174..19d5d1ab5793 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2144,6 +2144,11 @@ struct io_wait_queue {
 	unsigned nr_timeouts;
 };
 
+static inline bool io_has_work(struct io_ring_ctx *ctx)
+{
+	return test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
+}
+
 static inline bool io_should_wake(struct io_wait_queue *iowq)
 {
 	struct io_ring_ctx *ctx = iowq->ctx;
@@ -2162,13 +2167,13 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
 {
 	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
 							wq);
+	struct io_ring_ctx *ctx = iowq->ctx;
 
 	/*
 	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
 	 * the task, and the next invocation will do it.
 	 */
-	if (io_should_wake(iowq) ||
-	    test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &iowq->ctx->check_cq))
+	if (io_should_wake(iowq) || io_has_work(ctx))
 		return autoremove_wake_function(curr, mode, wake_flags, key);
 	return -1;
 }
@@ -2504,8 +2509,8 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
 	 * Users may get EPOLLIN meanwhile seeing nothing in cqring, this
 	 * pushs them to do the flush.
 	 */
-	if (io_cqring_events(ctx) ||
-	    test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
+
+	if (io_cqring_events(ctx) || io_has_work(ctx))
 		mask |= EPOLLIN | EPOLLRDNORM;
 
 	return mask;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 3/7] io_uring: do not run task work at the start of io_uring_enter
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 1/7] io_uring: remove unnecessary variable Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 2/7] io_uring: introduce io_has_work Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN Dylan Yudaken
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

This is not needed, and it is normally better to wait for task work until
after submissions. This allows greater batching if either work arrives in
the meantime, or if the submissions cause task work to be queued up.

For SQPOLL this also no longer runs task work, but that is handled inside
the SQPOLL loop anyway.

For IOPOLL, io_iopoll_check will run task work anyway, and otherwise
io_cqring_wait will run it.

Suggested-by: Pavel Begunkov <[email protected]>
Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 19d5d1ab5793..53696dd90626 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2990,8 +2990,6 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	struct fd f;
 	long ret;
 
-	io_run_task_work();
-
 	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
 			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
 			       IORING_ENTER_REGISTERED_RING)))
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
                   ` (2 preceding siblings ...)
  2022-08-19 12:19 ` [PATCH for-next v3 3/7] io_uring: do not run task work at the start of io_uring_enter Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-22 11:34   ` Pavel Begunkov
  2022-08-30 13:19   ` Hao Xu
  2022-08-19 12:19 ` [PATCH for-next v3 5/7] io_uring: move io_eventfd_put Dylan Yudaken
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

Allow deferring async tasks until the user calls io_uring_enter(2) with
the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
io_uring_setup time. This functionality requires that the later
io_uring_enter will be called from the same submission task, and therefore
this flag is restricted to work only when IORING_SETUP_SINGLE_ISSUER is
also set.

Being able to hand-pick when task work is run prevents the problem where
there is current work to be done but task work runs anyway.

For example, a common workload would obtain a batch of CQEs and process
each one. Interrupting this with additional task work would add latency but
not gain anything. If instead task work is deferred to just before more
CQEs are obtained, then no additional latency is added.
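
A sketch of that pattern from user space (illustrative only; it assumes a
ring created with IORING_SETUP_DEFER_TASKRUN, liburing for the CQE helpers,
and handle_cqe() standing in for application logic):

#include <liburing.h>

void handle_cqe(struct io_uring_cqe *cqe);	/* application-defined */

static void event_loop(struct io_uring *ring)
{
	struct io_uring_cqe *cqes[64];
	unsigned i, n;

	for (;;) {
		/* drain what is already posted without entering the kernel */
		n = io_uring_peek_batch_cqe(ring, cqes, 64);
		for (i = 0; i < n; i++)
			handle_cqe(cqes[i]);
		io_uring_cq_advance(ring, n);

		/*
		 * Only here does the task re-enter the kernel asking for
		 * events, so the deferred task work runs just before the
		 * next batch of CQEs is gathered.
		 */
		io_uring_submit_and_wait(ring, 1);
	}
}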

The way this is implemented is by trying to keep task work local to an
io_ring_ctx, rather than to the submission task. This is required, as the
application will want to wake up only a single io_ring_ctx at a time to
process work, and so the lists of work have to be kept separate.

This has some other benefits, such as not having to check the task
continually in handle_tw_list (and potentially unlocking/locking those),
and reducing locking in the submit & process completions path.

There are networking cases where using this option can reduce request
latency by 50%. For example, a contrived benchmark using [1], where the
client sends 2k of data and receives the same data back while doing some
system calls (to trigger task work), shows this reduction. The reason ends
up being that if sending responses is delayed by processing task work, then
the client side sits idle, whereas reordering the sends first means that
the client runs its workload in parallel with the local task work.

[1]:
Using https://github.com/DylanZA/netbench/tree/defer_run
Client:
./netbench  --client_only 1 --control_port 10000 --host <host> --tx "epoll --threads 16 --per_thread 1 --size 2048 --resp 2048 --workload 1000"
Server:
./netbench  --server_only 1 --control_port 10000  --rx "io_uring --defer_taskrun 0 --workload 100"   --rx "io_uring  --defer_taskrun 1 --workload 100"

Signed-off-by: Dylan Yudaken <[email protected]>
---
 include/linux/io_uring_types.h |   2 +
 include/uapi/linux/io_uring.h  |   7 ++
 io_uring/cancel.c              |   2 +-
 io_uring/io_uring.c            | 158 ++++++++++++++++++++++++++++++---
 io_uring/io_uring.h            |  29 +++++-
 io_uring/rsrc.c                |   2 +-
 6 files changed, 184 insertions(+), 16 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 677a25d44d7f..d56ff2185168 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -301,6 +301,8 @@ struct io_ring_ctx {
 		struct io_hash_table	cancel_table;
 		bool			poll_multi_queue;
 
+		struct llist_head	work_llist;
+
 		struct list_head	io_buffers_comp;
 	} ____cacheline_aligned_in_smp;
 
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 1463cfecb56b..be8d1801bf4a 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -153,6 +153,13 @@ enum {
  */
 #define IORING_SETUP_SINGLE_ISSUER	(1U << 12)
 
+/*
+ * Defer running task work to get events.
+ * Rather than running bits of task work whenever the task transitions
+ * try to do it just before it is needed.
+ */
+#define IORING_SETUP_DEFER_TASKRUN	(1U << 13)
+
 enum io_uring_op {
 	IORING_OP_NOP,
 	IORING_OP_READV,
diff --git a/io_uring/cancel.c b/io_uring/cancel.c
index e4e1dc0325f0..db6180b62e41 100644
--- a/io_uring/cancel.c
+++ b/io_uring/cancel.c
@@ -292,7 +292,7 @@ int io_sync_cancel(struct io_ring_ctx *ctx, void __user *arg)
 			break;
 
 		mutex_unlock(&ctx->uring_lock);
-		ret = io_run_task_work_sig();
+		ret = io_run_task_work_sig(ctx);
 		if (ret < 0) {
 			mutex_lock(&ctx->uring_lock);
 			break;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 53696dd90626..6572d2276750 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -142,7 +142,7 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 static void io_dismantle_req(struct io_kiocb *req);
 static void io_clean_op(struct io_kiocb *req);
 static void io_queue_sqe(struct io_kiocb *req);
-
+static void io_move_task_work_from_local(struct io_ring_ctx *ctx);
 static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
 
 static struct kmem_cache *req_cachep;
@@ -316,6 +316,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_LIST_HEAD(&ctx->rsrc_ref_list);
 	INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
 	init_llist_head(&ctx->rsrc_put_llist);
+	init_llist_head(&ctx->work_llist);
 	INIT_LIST_HEAD(&ctx->tctx_list);
 	ctx->submit_state.free_list.next = NULL;
 	INIT_WQ_LIST(&ctx->locked_free_list);
@@ -1047,12 +1048,36 @@ void tctx_task_work(struct callback_head *cb)
 	trace_io_uring_task_work_run(tctx, count, loops);
 }
 
-void io_req_task_work_add(struct io_kiocb *req)
+static void io_req_local_work_add(struct io_kiocb *req)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	if (!llist_add(&req->io_task_work.node, &ctx->work_llist))
+		return;
+
+	if (unlikely(atomic_read(&req->task->io_uring->in_idle))) {
+		io_move_task_work_from_local(ctx);
+		return;
+	}
+
+	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+
+	io_cqring_wake(ctx);
+
+}
+
+static inline void __io_req_task_work_add(struct io_kiocb *req, bool allow_local)
 {
 	struct io_uring_task *tctx = req->task->io_uring;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct llist_node *node;
 
+	if (allow_local && ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
+		io_req_local_work_add(req);
+		return;
+	}
+
 	/* task_work already pending, we're done */
 	if (!llist_add(&req->io_task_work.node, &tctx->task_list))
 		return;
@@ -1074,6 +1099,76 @@ void io_req_task_work_add(struct io_kiocb *req)
 	}
 }
 
+void io_req_task_work_add(struct io_kiocb *req)
+{
+	__io_req_task_work_add(req, true);
+}
+
+static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
+{
+	struct llist_node *node;
+
+	node = llist_del_all(&ctx->work_llist);
+	while (node) {
+		struct io_kiocb *req = container_of(node, struct io_kiocb,
+						    io_task_work.node);
+
+		node = node->next;
+		__io_req_task_work_add(req, false);
+	}
+}
+
+int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
+{
+	struct llist_node *node;
+	struct llist_node fake;
+	struct llist_node *current_final = NULL;
+	int ret;
+
+	if (unlikely(ctx->submitter_task != current)) {
+		if (locked)
+			mutex_unlock(&ctx->uring_lock);
+
+		/* maybe this is before any submissions */
+		if (!ctx->submitter_task)
+			return 0;
+
+		return -EEXIST;
+	}
+
+	if (!locked)
+		locked = mutex_trylock(&ctx->uring_lock);
+
+	node = io_llist_xchg(&ctx->work_llist, &fake);
+	ret = 0;
+again:
+	while (node != current_final) {
+		struct llist_node *next = node->next;
+		struct io_kiocb *req = container_of(node, struct io_kiocb,
+						    io_task_work.node);
+		prefetch(container_of(next, struct io_kiocb, io_task_work.node));
+		req->io_task_work.func(req, &locked);
+		ret++;
+		node = next;
+	}
+
+	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+		atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+
+	node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
+	if (node != &fake) {
+		current_final = &fake;
+		node = io_llist_xchg(&ctx->work_llist, &fake);
+		goto again;
+	}
+
+	if (locked) {
+		io_submit_flush_completions(ctx);
+		mutex_unlock(&ctx->uring_lock);
+	}
+	return ret;
+}
+
 static void io_req_tw_post(struct io_kiocb *req, bool *locked)
 {
 	io_req_complete_post(req);
@@ -1284,9 +1379,10 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 		if (wq_list_empty(&ctx->iopoll_list)) {
 			u32 tail = ctx->cached_cq_tail;
 
-			mutex_unlock(&ctx->uring_lock);
-			io_run_task_work();
+			ret = io_run_task_work_unlock_ctx(ctx);
 			mutex_lock(&ctx->uring_lock);
+			if (ret < 0)
+				break;
 
 			/* some requests don't go through iopoll_list */
 			if (tail != ctx->cached_cq_tail ||
@@ -2146,7 +2242,9 @@ struct io_wait_queue {
 
 static inline bool io_has_work(struct io_ring_ctx *ctx)
 {
-	return test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
+	return test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq) ||
+	       ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
+		!llist_empty(&ctx->work_llist));
 }
 
 static inline bool io_should_wake(struct io_wait_queue *iowq)
@@ -2178,9 +2276,9 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
 	return -1;
 }
 
-int io_run_task_work_sig(void)
+int io_run_task_work_sig(struct io_ring_ctx *ctx)
 {
-	if (io_run_task_work())
+	if (io_run_task_work_ctx(ctx))
 		return 1;
 	if (task_sigpending(current))
 		return -EINTR;
@@ -2196,7 +2294,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	unsigned long check_cq;
 
 	/* make sure we run task_work before checking for signals */
-	ret = io_run_task_work_sig();
+	ret = io_run_task_work_sig(ctx);
 	if (ret || io_should_wake(iowq))
 		return ret;
 
@@ -2230,7 +2328,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		io_cqring_overflow_flush(ctx);
 		if (io_cqring_events(ctx) >= min_events)
 			return 0;
-		if (!io_run_task_work())
+		if (!io_run_task_work_ctx(ctx))
 			break;
 	} while (1);
 
@@ -2573,6 +2671,9 @@ static __cold void io_ring_exit_work(struct work_struct *work)
 	 * as nobody else will be looking for them.
 	 */
 	do {
+		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+			io_move_task_work_from_local(ctx);
+
 		while (io_uring_try_cancel_requests(ctx, NULL, true))
 			cond_resched();
 
@@ -2768,6 +2869,8 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 		}
 	}
 
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		ret |= io_run_local_work(ctx, false) > 0;
 	ret |= io_cancel_defer_files(ctx, task, cancel_all);
 	mutex_lock(&ctx->uring_lock);
 	ret |= io_poll_remove_all(ctx, task, cancel_all);
@@ -3057,10 +3160,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 		}
 		if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
 			goto iopoll_locked;
+		if ((flags & IORING_ENTER_GETEVENTS) &&
+			(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
+			int ret2 = io_run_local_work(ctx, true);
+
+			if (unlikely(ret2 < 0))
+				goto out;
+			goto getevents_ran_local;
+		}
 		mutex_unlock(&ctx->uring_lock);
 	}
+
 	if (flags & IORING_ENTER_GETEVENTS) {
 		int ret2;
+
 		if (ctx->syscall_iopoll) {
 			/*
 			 * We disallow the app entering submit/complete with
@@ -3081,6 +3194,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 			const sigset_t __user *sig;
 			struct __kernel_timespec __user *ts;
 
+			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
+				ret2 = io_run_local_work(ctx, false);
+				if (unlikely(ret2 < 0))
+					goto getevents_out;
+			}
+getevents_ran_local:
 			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
 			if (likely(!ret2)) {
 				min_complete = min(min_complete,
@@ -3090,6 +3209,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 			}
 		}
 
+getevents_out:
 		if (!ret) {
 			ret = ret2;
 
@@ -3289,17 +3409,29 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
 		/* IPI related flags don't make sense with SQPOLL */
 		if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
-				  IORING_SETUP_TASKRUN_FLAG))
+				  IORING_SETUP_TASKRUN_FLAG |
+				  IORING_SETUP_DEFER_TASKRUN))
 			goto err;
 		ctx->notify_method = TWA_SIGNAL_NO_IPI;
 	} else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
 		ctx->notify_method = TWA_SIGNAL_NO_IPI;
 	} else {
-		if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
+		if (ctx->flags & IORING_SETUP_TASKRUN_FLAG &&
+		    !(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
 			goto err;
 		ctx->notify_method = TWA_SIGNAL;
 	}
 
+	/*
+	 * For DEFER_TASKRUN we require the completion task to be the same as the
+	 * submission task. This implies that there is only one submitter, so enforce
+	 * that.
+	 */
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN &&
+	    !(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
+		goto err;
+	}
+
 	/*
 	 * This is just grabbed for accounting purposes. When a process exits,
 	 * the mm is exited and dropped before the files, hence we need to hang
@@ -3400,7 +3532,7 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 			IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
 			IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
 			IORING_SETUP_SQE128 | IORING_SETUP_CQE32 |
-			IORING_SETUP_SINGLE_ISSUER))
+			IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN))
 		return -EINVAL;
 
 	return io_uring_create(entries, &p, params);
@@ -3872,7 +4004,7 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
 
 	ctx = f.file->private_data;
 
-	io_run_task_work();
+	io_run_task_work_ctx(ctx);
 
 	mutex_lock(&ctx->uring_lock);
 	ret = __io_uring_register(ctx, opcode, arg, nr_args);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 2f73f83af960..a9fb115234af 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -26,7 +26,8 @@ enum {
 
 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
 bool io_req_cqe_overflow(struct io_kiocb *req);
-int io_run_task_work_sig(void);
+int io_run_task_work_sig(struct io_ring_ctx *ctx);
+int io_run_local_work(struct io_ring_ctx *ctx, bool locked);
 void io_req_complete_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
@@ -234,6 +235,32 @@ static inline bool io_run_task_work(void)
 	return false;
 }
 
+static inline bool io_run_task_work_ctx(struct io_ring_ctx *ctx)
+{
+	bool ret = false;
+
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		ret = io_run_local_work(ctx, false) > 0;
+
+	/* want to run this after in case more is added */
+	ret  |= io_run_task_work();
+	return ret;
+}
+
+static inline int io_run_task_work_unlock_ctx(struct io_ring_ctx *ctx)
+{
+	int ret;
+
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
+		ret = io_run_local_work(ctx, true);
+	} else {
+		mutex_unlock(&ctx->uring_lock);
+		ret = (int)io_run_task_work();
+	}
+
+	return ret;
+}
+
 static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
 {
 	if (!*locked) {
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 71359a4d0bd4..80cda6e2067f 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -343,7 +343,7 @@ __cold static int io_rsrc_ref_quiesce(struct io_rsrc_data *data,
 		flush_delayed_work(&ctx->rsrc_put_work);
 		reinit_completion(&data->done);
 
-		ret = io_run_task_work_sig();
+		ret = io_run_task_work_sig(ctx);
 		mutex_lock(&ctx->uring_lock);
 	} while (ret >= 0);
 	data->quiesce = false;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 5/7] io_uring: move io_eventfd_put
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
                   ` (3 preceding siblings ...)
  2022-08-19 12:19 ` [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 6/7] io_uring: signal registered eventfd to process deferred task work Dylan Yudaken
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

Non-functional change: move this function above io_eventfd_signal so that
it can be used from there.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6572d2276750..509bb52d15f3 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -478,6 +478,14 @@ static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
 	}
 }
 
+static void io_eventfd_put(struct rcu_head *rcu)
+{
+	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
+
+	eventfd_ctx_put(ev_fd->cq_ev_fd);
+	kfree(ev_fd);
+}
+
 static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
 	struct io_ev_fd *ev_fd;
@@ -2467,14 +2475,6 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
 	return 0;
 }
 
-static void io_eventfd_put(struct rcu_head *rcu)
-{
-	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
-
-	eventfd_ctx_put(ev_fd->cq_ev_fd);
-	kfree(ev_fd);
-}
-
 static int io_eventfd_unregister(struct io_ring_ctx *ctx)
 {
 	struct io_ev_fd *ev_fd;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 6/7] io_uring: signal registered eventfd to process deferred task work
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
                   ` (4 preceding siblings ...)
  2022-08-19 12:19 ` [PATCH for-next v3 5/7] io_uring: move io_eventfd_put Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-19 12:19 ` [PATCH for-next v3 7/7] io_uring: trace local task work run Dylan Yudaken
  2022-08-29  7:01 ` [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Hao Xu
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

Some workloads rely on a registered eventfd (via
io_uring_register_eventfd(3)) in order to wake up and process the
io_uring.

In the case of a ring set up with IORING_SETUP_DEFER_TASKRUN, that eventfd
also needs to be signalled when there is task work to run.

This changes the old behaviour, which assumed one eventfd signal implied at
least one CQE, but only when this new flag is set (and so old users will not
notice). This should be expected with the IORING_SETUP_DEFER_TASKRUN flag,
as it is not guaranteed that every piece of task work will result in a CQE.
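
As a rough illustration of how a consumer copes with that (a sketch only,
not part of this series; it assumes liburing, a ring set up with
IORING_SETUP_DEFER_TASKRUN, and a hypothetical handle_cqe() helper):

#include <liburing.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/syscall.h>
#include <unistd.h>

void handle_cqe(struct io_uring_cqe *cqe);	/* application-defined */

static void wait_and_reap(struct io_uring *ring, int efd)
{
	struct io_uring_cqe *cqe;
	uint64_t v;

	/* woken either for new CQEs or, now, for pending deferred task work */
	read(efd, &v, sizeof(v));

	/* enter with GETEVENTS so the deferred task work gets run ... */
	syscall(__NR_io_uring_enter, ring->ring_fd, 0, 0,
		IORING_ENTER_GETEVENTS, NULL, 0);

	/* ... but be prepared for the wakeup to yield zero CQEs */
	while (io_uring_peek_cqe(ring, &cqe) == 0) {
		handle_cqe(cqe);
		io_uring_cqe_seen(ring, cqe);
	}
}

Registration itself is unchanged: eventfd(2) plus io_uring_register_eventfd()
as before; only the meaning of a wakeup is relaxed.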

Signed-off-by: Dylan Yudaken <[email protected]>
---
 include/linux/io_uring_types.h |  1 +
 io_uring/io_uring.c            | 75 ++++++++++++++++++++++++----------
 2 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index d56ff2185168..42494176434a 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -184,6 +184,7 @@ struct io_ev_fd {
 	struct eventfd_ctx	*cq_ev_fd;
 	unsigned int		eventfd_async: 1;
 	struct rcu_head		rcu;
+	atomic_t		refs;
 };
 
 struct io_alloc_cache {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 509bb52d15f3..774ca31cb763 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -478,33 +478,33 @@ static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
 	}
 }
 
+
+static inline void __io_eventfd_put(struct io_ev_fd *ev_fd)
+{
+	if (atomic_dec_and_test(&ev_fd->refs)) {
+		eventfd_ctx_put(ev_fd->cq_ev_fd);
+		kfree(ev_fd);
+	}
+}
+
+static void io_eventfd_signal_put(struct rcu_head *rcu)
+{
+	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
+
+	eventfd_signal(ev_fd->cq_ev_fd, 1);
+	__io_eventfd_put(ev_fd);
+}
+
 static void io_eventfd_put(struct rcu_head *rcu)
 {
 	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
 
-	eventfd_ctx_put(ev_fd->cq_ev_fd);
-	kfree(ev_fd);
+	__io_eventfd_put(ev_fd);
 }
 
 static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
-	struct io_ev_fd *ev_fd;
-	bool skip;
-
-	spin_lock(&ctx->completion_lock);
-	/*
-	 * Eventfd should only get triggered when at least one event has been
-	 * posted. Some applications rely on the eventfd notification count only
-	 * changing IFF a new CQE has been added to the CQ ring. There's no
-	 * depedency on 1:1 relationship between how many times this function is
-	 * called (and hence the eventfd count) and number of CQEs posted to the
-	 * CQ ring.
-	 */
-	skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
-	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
-	spin_unlock(&ctx->completion_lock);
-	if (skip)
-		return;
+	struct io_ev_fd *ev_fd = NULL;
 
 	rcu_read_lock();
 	/*
@@ -522,13 +522,43 @@ static void io_eventfd_signal(struct io_ring_ctx *ctx)
 		goto out;
 	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
 		goto out;
+	if (ev_fd->eventfd_async && !io_wq_current_is_worker())
+		goto out;
 
-	if (!ev_fd->eventfd_async || io_wq_current_is_worker())
+	if (likely(eventfd_signal_allowed())) {
 		eventfd_signal(ev_fd->cq_ev_fd, 1);
+	} else {
+		atomic_inc(&ev_fd->refs);
+		call_rcu(&ev_fd->rcu, io_eventfd_signal_put);
+	}
+
 out:
 	rcu_read_unlock();
 }
 
+static void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
+{
+	bool skip;
+
+	spin_lock(&ctx->completion_lock);
+
+	/*
+	 * Eventfd should only get triggered when at least one event has been
+	 * posted. Some applications rely on the eventfd notification count
+	 * only changing IFF a new CQE has been added to the CQ ring. There's
+	 * no depedency on 1:1 relationship between how many times this
+	 * function is called (and hence the eventfd count) and number of CQEs
+	 * posted to the CQ ring.
+	 */
+	skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
+	ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+	spin_unlock(&ctx->completion_lock);
+	if (skip)
+		return;
+
+	io_eventfd_signal(ctx);
+}
+
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 {
 	if (ctx->off_timeout_used || ctx->drain_active) {
@@ -540,7 +570,7 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
 		spin_unlock(&ctx->completion_lock);
 	}
 	if (ctx->has_evfd)
-		io_eventfd_signal(ctx);
+		io_eventfd_flush_signal(ctx);
 }
 
 static inline void io_cqring_ev_posted(struct io_ring_ctx *ctx)
@@ -1071,6 +1101,8 @@ static void io_req_local_work_add(struct io_kiocb *req)
 	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
 		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
 
+	if (ctx->has_evfd)
+		io_eventfd_signal(ctx);
 	io_cqring_wake(ctx);
 
 }
@@ -2472,6 +2504,7 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
 	ev_fd->eventfd_async = eventfd_async;
 	ctx->has_evfd = true;
 	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
+	atomic_set(&ev_fd->refs, 1);
 	return 0;
 }
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH for-next v3 7/7] io_uring: trace local task work run
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
                   ` (5 preceding siblings ...)
  2022-08-19 12:19 ` [PATCH for-next v3 6/7] io_uring: signal registered eventfd to process deferred task work Dylan Yudaken
@ 2022-08-19 12:19 ` Dylan Yudaken
  2022-08-29  7:01 ` [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Hao Xu
  7 siblings, 0 replies; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-19 12:19 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team, Dylan Yudaken

Add tracing for io_run_local_work, which runs the deferred (local) task work.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 include/trace/events/io_uring.h | 29 +++++++++++++++++++++++++++++
 io_uring/io_uring.c             |  3 +++
 2 files changed, 32 insertions(+)

diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
index c5b21ff0ac85..936fd41bf147 100644
--- a/include/trace/events/io_uring.h
+++ b/include/trace/events/io_uring.h
@@ -655,6 +655,35 @@ TRACE_EVENT(io_uring_short_write,
 			  __entry->wanted, __entry->got)
 );
 
+/*
+ * io_uring_local_work_run - ran ring local task work
+ *
+ * @tctx:		pointer to a io_uring_ctx
+ * @count:		how many functions it ran
+ * @loops:		how many loops it ran
+ *
+ */
+TRACE_EVENT(io_uring_local_work_run,
+
+	TP_PROTO(void *ctx, int count, unsigned int loops),
+
+	TP_ARGS(ctx, count, loops),
+
+	TP_STRUCT__entry (
+		__field(void *,		ctx	)
+		__field(int,		count	)
+		__field(unsigned int,	loops	)
+	),
+
+	TP_fast_assign(
+		__entry->ctx		= ctx;
+		__entry->count		= count;
+		__entry->loops		= loops;
+	),
+
+	TP_printk("ring %p, count %d, loops %u", __entry->ctx, __entry->count, __entry->loops)
+);
+
 #endif /* _TRACE_IO_URING_H */
 
 /* This part must be outside protection */
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 774ca31cb763..acb5aaa80164 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1164,6 +1164,7 @@ int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
 	struct llist_node fake;
 	struct llist_node *current_final = NULL;
 	int ret;
+	unsigned int loops = 1;
 
 	if (unlikely(ctx->submitter_task != current)) {
 		if (locked)
@@ -1197,6 +1198,7 @@ int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
 
 	node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
 	if (node != &fake) {
+		loops++;
 		current_final = &fake;
 		node = io_llist_xchg(&ctx->work_llist, &fake);
 		goto again;
@@ -1206,6 +1208,7 @@ int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
 		io_submit_flush_completions(ctx);
 		mutex_unlock(&ctx->uring_lock);
 	}
+	trace_io_uring_local_work_run(ctx, ret, loops);
 	return ret;
 }
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-19 12:19 ` [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN Dylan Yudaken
@ 2022-08-22 11:34   ` Pavel Begunkov
  2022-08-29  6:32     ` Hao Xu
  2022-08-30  9:54     ` Dylan Yudaken
  2022-08-30 13:19   ` Hao Xu
  1 sibling, 2 replies; 18+ messages in thread
From: Pavel Begunkov @ 2022-08-22 11:34 UTC (permalink / raw)
  To: Dylan Yudaken, Jens Axboe, io-uring; +Cc: Kernel-team

On 8/19/22 13:19, Dylan Yudaken wrote:
> Allow deferring async tasks until the user calls io_uring_enter(2) with
> the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
> io_uring_setup time. This functionality requires that the later
> io_uring_enter will be called from the same submission task, and therefore
> restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is also
> set.

Looks ok, a couple of small comments below, but I don't see anything
blocking it.

> Being able to hand pick when tasks are run prevents the problem where
> there is current work to be done, however task work runs anyway.
> 
> For example, a common workload would obtain a batch of CQEs, and process
> each one. Interrupting this to additional taskwork would add latency but
> not gain anything. If instead task work is deferred to just before more
> CQEs are obtained then no additional latency is added.
> 
> The way this is implemented is by trying to keep task work local to a
> io_ring_ctx, rather than to the submission task. This is required, as the
> application will want to wake up only a single io_ring_ctx at a time to
> process work, and so the lists of work have to be kept separate.
> 
> This has some other benefits like not having to check the task continually
> in handle_tw_list (and potentially unlocking/locking those), and reducing
> locks in the submit & process completions path.
> 
> There are networking cases where using this option can reduce request
> latency by 50%. For example a contrived example using [1] where the client
> sends 2k data and receives the same data back while doing some system
> calls (to trigger task work) shows this reduction. The reason ends up
> being that if sending responses is delayed by processing task work, then
> the client side sits idle. Whereas reordering the sends first means that
> the client runs it's workload in parallel with the local task work.

Quite contrived, for some it may cut latency in half but for others
as easily increase it twofold. In any case, it's not a critique of the
feature as it's optional, but rather raises a question whether we
need to add some fairness / scheduling here.

> [1]:
> Using https://github.com/DylanZA/netbench/tree/defer_run
> Client:
> ./netbench  --client_only 1 --control_port 10000 --host <host> --tx "epoll --threads 16 --per_thread 1 --size 2048 --resp 2048 --workload 1000"
> Server:
> ./netbench  --server_only 1 --control_port 10000  --rx "io_uring --defer_taskrun 0 --workload 100"   --rx "io_uring  --defer_taskrun 1 --workload 100"
> 
> Signed-off-by: Dylan Yudaken <[email protected]>
> ---

> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 53696dd90626..6572d2276750 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
[...]

> +int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
> +{
> +	struct llist_node *node;
> +	struct llist_node fake;
> +	struct llist_node *current_final = NULL;
> +	int ret;
> +
> +	if (unlikely(ctx->submitter_task != current)) {
> +		if (locked)
> +			mutex_unlock(&ctx->uring_lock);
> +
> +		/* maybe this is before any submissions */
> +		if (!ctx->submitter_task)
> +			return 0;
> +
> +		return -EEXIST;
> +	}
> +
> +	if (!locked)
> +		locked = mutex_trylock(&ctx->uring_lock);
> +
> +	node = io_llist_xchg(&ctx->work_llist, &fake);
> +	ret = 0;
> +again:
> +	while (node != current_final) {
> +		struct llist_node *next = node->next;
> +		struct io_kiocb *req = container_of(node, struct io_kiocb,
> +						    io_task_work.node);
> +		prefetch(container_of(next, struct io_kiocb, io_task_work.node));
> +		req->io_task_work.func(req, &locked);
> +		ret++;
> +		node = next;
> +	}
> +
> +	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
> +		atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
> +
> +	node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
> +	if (node != &fake) {
> +		current_final = &fake;
> +		node = io_llist_xchg(&ctx->work_llist, &fake);
> +		goto again;
> +	}
> +
> +	if (locked) {
> +		io_submit_flush_completions(ctx);
> +		mutex_unlock(&ctx->uring_lock);
> +	}
> +	return ret;
> +}

I was thinking about:

int io_run_local_work(struct io_ring_ctx *ctx, bool *locked)
{
	locked = try_lock();
}

bool locked = false;
io_run_local_work(ctx, *locked);
if (locked)
	unlock();

// or just as below when already holding it
bool locked = true;
io_run_local_work(ctx, *locked);

Which would replace

if (DEFER) {
	// we're assuming that it'll unlock
	io_run_local_work(true);
} else {
	unlock();
}

with

if (DEFER) {
	bool locked = true;
	io_run_local_work(&locked);
}
unlock();

But anyway, it can be mulled later.


> -int io_run_task_work_sig(void)
> +int io_run_task_work_sig(struct io_ring_ctx *ctx)
>   {
> -	if (io_run_task_work())
> +	if (io_run_task_work_ctx(ctx))
>   		return 1;
>   	if (task_sigpending(current))
>   		return -EINTR;
> @@ -2196,7 +2294,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
>   	unsigned long check_cq;
>   
>   	/* make sure we run task_work before checking for signals */
> -	ret = io_run_task_work_sig();
> +	ret = io_run_task_work_sig(ctx);
>   	if (ret || io_should_wake(iowq))
>   		return ret;
>   
> @@ -2230,7 +2328,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
>   		io_cqring_overflow_flush(ctx);
>   		if (io_cqring_events(ctx) >= min_events)
>   			return 0;
> -		if (!io_run_task_work())
> +		if (!io_run_task_work_ctx(ctx))
>   			break;
>   	} while (1);
>   
> @@ -2573,6 +2671,9 @@ static __cold void io_ring_exit_work(struct work_struct *work)
>   	 * as nobody else will be looking for them.
>   	 */
>   	do {
> +		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> +			io_move_task_work_from_local(ctx);
> +
>   		while (io_uring_try_cancel_requests(ctx, NULL, true))
>   			cond_resched();
>   
> @@ -2768,6 +2869,8 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
>   		}
>   	}
>   
> +	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> +		ret |= io_run_local_work(ctx, false) > 0;
>   	ret |= io_cancel_defer_files(ctx, task, cancel_all);
>   	mutex_lock(&ctx->uring_lock);
>   	ret |= io_poll_remove_all(ctx, task, cancel_all);
> @@ -3057,10 +3160,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>   		}
>   		if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
>   			goto iopoll_locked;
> +		if ((flags & IORING_ENTER_GETEVENTS) &&
> +			(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
> +			int ret2 = io_run_local_work(ctx, true);
> +
> +			if (unlikely(ret2 < 0))
> +				goto out;

It's an optimisation and we don't have to handle errors here,
let's ignore them and make it look a bit better.

> +			goto getevents_ran_local;
> +		}
>   		mutex_unlock(&ctx->uring_lock);
>   	}
> +
>   	if (flags & IORING_ENTER_GETEVENTS) {
>   		int ret2;
> +
>   		if (ctx->syscall_iopoll) {
>   			/*
>   			 * We disallow the app entering submit/complete with
> @@ -3081,6 +3194,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>   			const sigset_t __user *sig;
>   			struct __kernel_timespec __user *ts;
>   
> +			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {

I think it should be in io_cqring_wait(), which calls it anyway
in the beginning. Instead of

	do {
		io_cqring_overflow_flush(ctx);
		if (io_cqring_events(ctx) >= min_events)
			return 0;
		if (!io_run_task_work())
			break;
	} while (1);

Let's have

	do {
		ret = io_run_task_work_ctx();
		// handle ret
		io_cqring_overflow_flush(ctx);
		if (io_cqring_events(ctx) >= min_events)
			return 0;
	} while (1);

> +				ret2 = io_run_local_work(ctx, false);
> +				if (unlikely(ret2 < 0))
> +					goto getevents_out;
> +			}
> +getevents_ran_local:
>   			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
>   			if (likely(!ret2)) {
>   				min_complete = min(min_complete,
> @@ -3090,6 +3209,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>   			}
>   		}
>   
> +getevents_out:
>   		if (!ret) {
>   			ret = ret2;
>   
> @@ -3289,17 +3409,29 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
>   	if (ctx->flags & IORING_SETUP_SQPOLL) {
>   		/* IPI related flags don't make sense with SQPOLL */
>   		if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
> -				  IORING_SETUP_TASKRUN_FLAG))
> +				  IORING_SETUP_TASKRUN_FLAG |
> +				  IORING_SETUP_DEFER_TASKRUN))

Sounds like we should also fail if SQPOLL is set, especially with
the task check on the waiting side.

>   			goto err;
>   		ctx->notify_method = TWA_SIGNAL_NO_IPI;
>   	} else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
>   		ctx->notify_method = TWA_SIGNAL_NO_IPI;
[...]
>   	mutex_lock(&ctx->uring_lock);
>   	ret = __io_uring_register(ctx, opcode, arg, nr_args);
> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> index 2f73f83af960..a9fb115234af 100644
> --- a/io_uring/io_uring.h
> +++ b/io_uring/io_uring.h
> @@ -26,7 +26,8 @@ enum {
[...]
> +static inline int io_run_task_work_unlock_ctx(struct io_ring_ctx *ctx)
> +{
> +	int ret;
> +
> +	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
> +		ret = io_run_local_work(ctx, true);
> +	} else {
> +		mutex_unlock(&ctx->uring_lock);
> +		ret = (int)io_run_task_work();

Why do we need a cast? Let's keep the return type the same


-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-22 11:34   ` Pavel Begunkov
@ 2022-08-29  6:32     ` Hao Xu
  2022-08-30  7:23       ` Dylan Yudaken
  2022-08-30  9:54     ` Dylan Yudaken
  1 sibling, 1 reply; 18+ messages in thread
From: Hao Xu @ 2022-08-29  6:32 UTC (permalink / raw)
  To: Pavel Begunkov, Dylan Yudaken, Jens Axboe, io-uring; +Cc: Kernel-team

On 8/22/22 19:34, Pavel Begunkov wrote:
> On 8/19/22 13:19, Dylan Yudaken wrote:
>> Allow deferring async tasks until the user calls io_uring_enter(2) with
>> the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
>> io_uring_setup time. This functionality requires that the later
>> io_uring_enter will be called from the same submission task, and 
>> therefore
>> restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is also
>> set.
> 
> Looks ok, a couple of small comments below, but I don't see anything
> blocking it.
> 
>> Being able to hand pick when tasks are run prevents the problem where
>> there is current work to be done, however task work runs anyway.
>>
>> For example, a common workload would obtain a batch of CQEs, and process
>> each one. Interrupting this to additional taskwork would add latency but
>> not gain anything. If instead task work is deferred to just before more
>> CQEs are obtained then no additional latency is added.
>>
>> The way this is implemented is by trying to keep task work local to a
>> io_ring_ctx, rather than to the submission task. This is required, as the
>> application will want to wake up only a single io_ring_ctx at a time to
>> process work, and so the lists of work have to be kept separate.
>>
>> This has some other benefits like not having to check the task 
>> continually
>> in handle_tw_list (and potentially unlocking/locking those), and reducing
>> locks in the submit & process completions path.
>>
>> There are networking cases where using this option can reduce request
>> latency by 50%. For example a contrived example using [1] where the 
>> client
>> sends 2k data and receives the same data back while doing some system
>> calls (to trigger task work) shows this reduction. The reason ends up
>> being that if sending responses is delayed by processing task work, then
>> the client side sits idle. Whereas reordering the sends first means that
>> the client runs it's workload in parallel with the local task work.
> 
> Quite contrived, for some it may cut latency in half but for others
> as easily increase it twofold. In any case, it's not a critique of the
> feature as it's optional, but rather raises a question whether we
> need to add some fairness / scheduling here.
> 
>> [1]:
>> Using https://github.com/DylanZA/netbench/tree/defer_run
>> Client:
>> ./netbench  --client_only 1 --control_port 10000 --host <host> --tx 
>> "epoll --threads 16 --per_thread 1 --size 2048 --resp 2048 --workload 
>> 1000"
>> Server:
>> ./netbench  --server_only 1 --control_port 10000  --rx "io_uring 
>> --defer_taskrun 0 --workload 100"   --rx "io_uring  --defer_taskrun 1 
>> --workload 100"
>>
>> Signed-off-by: Dylan Yudaken <[email protected]>
>> ---
> 
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index 53696dd90626..6572d2276750 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
> [...]
> 
>> +int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
>> +{
>> +    struct llist_node *node;
>> +    struct llist_node fake;
>> +    struct llist_node *current_final = NULL;
>> +    int ret;
>> +
>> +    if (unlikely(ctx->submitter_task != current)) {
>> +        if (locked)
>> +            mutex_unlock(&ctx->uring_lock);
>> +
>> +        /* maybe this is before any submissions */
>> +        if (!ctx->submitter_task)
>> +            return 0;
>> +
>> +        return -EEXIST;
>> +    }
>> +
>> +    if (!locked)
>> +        locked = mutex_trylock(&ctx->uring_lock);
>> +
>> +    node = io_llist_xchg(&ctx->work_llist, &fake);
>> +    ret = 0;
>> +again:
>> +    while (node != current_final) {
>> +        struct llist_node *next = node->next;
>> +        struct io_kiocb *req = container_of(node, struct io_kiocb,
>> +                            io_task_work.node);
>> +        prefetch(container_of(next, struct io_kiocb, 
>> io_task_work.node));
>> +        req->io_task_work.func(req, &locked);
>> +        ret++;
>> +        node = next;
>> +    }
>> +
>> +    if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
>> +        atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
>> +
>> +    node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
>> +    if (node != &fake) {
>> +        current_final = &fake;
>> +        node = io_llist_xchg(&ctx->work_llist, &fake);
>> +        goto again;
>> +    }
>> +
>> +    if (locked) {
>> +        io_submit_flush_completions(ctx);
>> +        mutex_unlock(&ctx->uring_lock);
>> +    }
>> +    return ret;
>> +}
> 
> I was thinking about:
> 
> int io_run_local_work(struct io_ring_ctx *ctx, bool *locked)
> {
>      locked = try_lock();
> }
> 
> bool locked = false;
> io_run_local_work(ctx, *locked);
> if (locked)
>      unlock();
> 
> // or just as below when already holding it
> bool locked = true;
> io_run_local_work(ctx, *locked);
> 
> Which would replace
> 
> if (DEFER) {
>      // we're assuming that it'll unlock
>      io_run_local_work(true);
> } else {
>      unlock();
> }
> 
> with
> 
> if (DEFER) {
>      bool locked = true;
>      io_run_local_work(&locked);
> }
> unlock();
> 
> But anyway, it can be mulled later.
> 
> 
>> -int io_run_task_work_sig(void)
>> +int io_run_task_work_sig(struct io_ring_ctx *ctx)
>>   {
>> -    if (io_run_task_work())
>> +    if (io_run_task_work_ctx(ctx))
>>           return 1;
>>       if (task_sigpending(current))
>>           return -EINTR;
>> @@ -2196,7 +2294,7 @@ static inline int io_cqring_wait_schedule(struct 
>> io_ring_ctx *ctx,
>>       unsigned long check_cq;
>>       /* make sure we run task_work before checking for signals */
>> -    ret = io_run_task_work_sig();
>> +    ret = io_run_task_work_sig(ctx);
>>       if (ret || io_should_wake(iowq))
>>           return ret;
>> @@ -2230,7 +2328,7 @@ static int io_cqring_wait(struct io_ring_ctx 
>> *ctx, int min_events,
>>           io_cqring_overflow_flush(ctx);
>>           if (io_cqring_events(ctx) >= min_events)
>>               return 0;
>> -        if (!io_run_task_work())
>> +        if (!io_run_task_work_ctx(ctx))
>>               break;
>>       } while (1);
>> @@ -2573,6 +2671,9 @@ static __cold void io_ring_exit_work(struct 
>> work_struct *work)
>>        * as nobody else will be looking for them.
>>        */
>>       do {
>> +        if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
>> +            io_move_task_work_from_local(ctx);
>> +
>>           while (io_uring_try_cancel_requests(ctx, NULL, true))
>>               cond_resched();
>> @@ -2768,6 +2869,8 @@ static __cold bool 
>> io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
>>           }
>>       }
>> +    if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
>> +        ret |= io_run_local_work(ctx, false) > 0;
>>       ret |= io_cancel_defer_files(ctx, task, cancel_all);
>>       mutex_lock(&ctx->uring_lock);
>>       ret |= io_poll_remove_all(ctx, task, cancel_all);
>> @@ -3057,10 +3160,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, 
>> fd, u32, to_submit,
>>           }
>>           if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
>>               goto iopoll_locked;
>> +        if ((flags & IORING_ENTER_GETEVENTS) &&
>> +            (ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
>> +            int ret2 = io_run_local_work(ctx, true);
>> +
>> +            if (unlikely(ret2 < 0))
>> +                goto out;
> 
> It's an optimisation and we don't have to handle errors here,
> let's ignore them and make it look a bit better.
> 
>> +            goto getevents_ran_local;
>> +        }
>>           mutex_unlock(&ctx->uring_lock);
>>       }
>> +
>>       if (flags & IORING_ENTER_GETEVENTS) {
>>           int ret2;
>> +
>>           if (ctx->syscall_iopoll) {
>>               /*
>>                * We disallow the app entering submit/complete with
>> @@ -3081,6 +3194,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, 
>> fd, u32, to_submit,
>>               const sigset_t __user *sig;
>>               struct __kernel_timespec __user *ts;
>> +            if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
> 
> I think it should be in io_cqring_wait(), which calls it anyway
> in the beginning. Instead of
> 
>      do {
>          io_cqring_overflow_flush(ctx);
>          if (io_cqring_events(ctx) >= min_events)
>              return 0;
>          if (!io_run_task_work())
>              break;
>      } while (1);
> 
> Let's have
> 
>      do {
>          ret = io_run_task_work_ctx();
>          // handle ret
>          io_cqring_overflow_flush(ctx);
>          if (io_cqring_events(ctx) >= min_events)
>              return 0;
>      } while (1);
> 
>> +                ret2 = io_run_local_work(ctx, false);
>> +                if (unlikely(ret2 < 0))
>> +                    goto getevents_out;
>> +            }
>> +getevents_ran_local:
>>               ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
>>               if (likely(!ret2)) {
>>                   min_complete = min(min_complete,
>> @@ -3090,6 +3209,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, 
>> fd, u32, to_submit,
>>               }
>>           }
>> +getevents_out:
>>           if (!ret) {
>>               ret = ret2;
>> @@ -3289,17 +3409,29 @@ static __cold int io_uring_create(unsigned 
>> entries, struct io_uring_params *p,
>>       if (ctx->flags & IORING_SETUP_SQPOLL) {
>>           /* IPI related flags don't make sense with SQPOLL */
>>           if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
>> -                  IORING_SETUP_TASKRUN_FLAG))
>> +                  IORING_SETUP_TASKRUN_FLAG |
>> +                  IORING_SETUP_DEFER_TASKRUN))
> 
> Sounds like we should also fail if SQPOLL is set, especially with
> the task check on the waiting side.

SQPOLL is a natural single-issuer case, so shouldn't we support this
feature for it? And surely, in that case, don't do the local task work
check at cqring wait time and be careful in other places like
io_uring_register

> 
>>               goto err;
>>           ctx->notify_method = TWA_SIGNAL_NO_IPI;
>>       } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
>>           ctx->notify_method = TWA_SIGNAL_NO_IPI;
> [...]
>>       mutex_lock(&ctx->uring_lock);
>>       ret = __io_uring_register(ctx, opcode, arg, nr_args);
>> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
>> index 2f73f83af960..a9fb115234af 100644
>> --- a/io_uring/io_uring.h
>> +++ b/io_uring/io_uring.h
>> @@ -26,7 +26,8 @@ enum {
> [...]
>> +static inline int io_run_task_work_unlock_ctx(struct io_ring_ctx *ctx)
>> +{
>> +    int ret;
>> +
>> +    if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
>> +        ret = io_run_local_work(ctx, true);
>> +    } else {
>> +        mutex_unlock(&ctx->uring_lock);
>> +        ret = (int)io_run_task_work();
> 
> Why do we need a cast? Let's keep the return type the same
> 
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed
  2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
                   ` (6 preceding siblings ...)
  2022-08-19 12:19 ` [PATCH for-next v3 7/7] io_uring: trace local task work run Dylan Yudaken
@ 2022-08-29  7:01 ` Hao Xu
  7 siblings, 0 replies; 18+ messages in thread
From: Hao Xu @ 2022-08-29  7:01 UTC (permalink / raw)
  To: Dylan Yudaken, Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team

On 8/19/22 20:19, Dylan Yudaken wrote:
> We have seen workloads which suffer due to the way task work is currently
> scheduled. This scheduling can cause non-trivial tasks to run interrupting
> useful work on the workload. For example in network servers, a large async
> recv may run, calling memcpy on a large packet, interrupting a send. Which
> would add latency.
> 
> This series adds an option to defer async work until user space calls
> io_uring_enter with the GETEVENTS flag. This allows the workload to choose
> when to schedule async work and have finer control (at the expense of
> complexity of managing this) of scheduling.
> 
> Patches 1,2 are prep patches
> Patch 3 changes io_uring_enter to not pre-run task work
> Patch 4/5/6 adds the new flag and functionality
> Patch 7 adds tracing for the local task work running
> 
> Changes since v2:
>   - add a patch to trace local task work run
>   - return -EEXIST if calling from the wrong task
>   - properly handle shutting down due to an exec
>   - remove 'all' parameter from io_run_task_work_ctx
>   
> Changes since v1:
>   - Removed the first patch (using ctx variable) which was broken
>   - Require IORING_SETUP_SINGLE_ISSUER and make sure waiter task
>     is the same as the submitter task
>   - Just don't run task work at the start of io_uring_enter (Pavel's
>     suggestion)
>   - Remove io_move_task_work_from_local
>   - Fix locking bugs
> 
> Dylan Yudaken (7):
>    io_uring: remove unnecessary variable
>    io_uring: introduce io_has_work
>    io_uring: do not run task work at the start of io_uring_enter
>    io_uring: add IORING_SETUP_DEFER_TASKRUN
>    io_uring: move io_eventfd_put
>    io_uring: signal registered eventfd to process deferred task work
>    io_uring: trace local task work run
> 
>   include/linux/io_uring_types.h  |   3 +
>   include/trace/events/io_uring.h |  29 ++++
>   include/uapi/linux/io_uring.h   |   7 +
>   io_uring/cancel.c               |   2 +-
>   io_uring/io_uring.c             | 264 ++++++++++++++++++++++++++------
>   io_uring/io_uring.h             |  29 +++-
>   io_uring/rsrc.c                 |   2 +-
>   7 files changed, 285 insertions(+), 51 deletions(-)
> 
> 
> base-commit: 5993000dc6b31b927403cee65fbc5f9f070fa3e4
> prerequisite-patch-id: cb1d024945aa728d09a131156140a33d30bc268b

Apart from the comments, others looks good to me,

Acked-by: Hao Xu <[email protected]>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-29  6:32     ` Hao Xu
@ 2022-08-30  7:23       ` Dylan Yudaken
  2022-08-30  7:54         ` Hao Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-30  7:23 UTC (permalink / raw)
  To: [email protected], [email protected], [email protected],
	[email protected]
  Cc: Kernel Team

On Mon, 2022-08-29 at 14:32 +0800, Hao Xu wrote:
> > > @@ -3289,17 +3409,29 @@ static __cold int
> > > io_uring_create(unsigned 
> > > entries, struct io_uring_params *p,
> > >       if (ctx->flags & IORING_SETUP_SQPOLL) {
> > >           /* IPI related flags don't make sense with SQPOLL */
> > >           if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
> > > -                  IORING_SETUP_TASKRUN_FLAG))
> > > +                  IORING_SETUP_TASKRUN_FLAG |
> > > +                  IORING_SETUP_DEFER_TASKRUN))
> > 
> > Sounds like we should also fail if SQPOLL is set, especially with
> > the task check on the waiting side.
> 
> sqpoll is a natural single issuer case, so shouldn't we support this
> feature for it? And surely, in that case, we shouldn't do the local
> task work check at cqring wait time, and we'd need to be careful in
> other places like io_uring_register.

I think there is definitely scope for that - but it's less obvious how
to do it.
i.e. in its current form DEFER_TASKRUN requires the GETEVENTS to be
submitted on the same task as the initial submission, but with SQPOLL
the submission task is a kernel thread, so there would have to be some
difference in the API.

As an idea for a later patch set - perhaps the semantics should be to
keep task work local but only run it once submissions have been
processed for a ctx. I suspect that will require some care to ensure
the wakeup flag is set correctly and that it cleans up properly.

Dylan


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-30  7:23       ` Dylan Yudaken
@ 2022-08-30  7:54         ` Hao Xu
  0 siblings, 0 replies; 18+ messages in thread
From: Hao Xu @ 2022-08-30  7:54 UTC (permalink / raw)
  To: Dylan Yudaken, [email protected], [email protected],
	[email protected]
  Cc: Kernel Team

On 8/30/22 15:23, Dylan Yudaken wrote:
> On Mon, 2022-08-29 at 14:32 +0800, Hao Xu wrote:
>>>> @@ -3289,17 +3409,29 @@ static __cold int
>>>> io_uring_create(unsigned
>>>> entries, struct io_uring_params *p,
>>>>        if (ctx->flags & IORING_SETUP_SQPOLL) {
>>>>            /* IPI related flags don't make sense with SQPOLL */
>>>>            if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
>>>> -                  IORING_SETUP_TASKRUN_FLAG))
>>>> +                  IORING_SETUP_TASKRUN_FLAG |
>>>> +                  IORING_SETUP_DEFER_TASKRUN))
>>>
>>> Sounds like we should also fail if SQPOLL is set, especially with
>>> the task check on the waiting side.
>>
>> sqpoll as a natural single issuer case, shouldn't we support this
>> feature for it? And surely, in that case, don't do local task work
>> check
>> in cqring wait time and be careful in other places like
>> io_uring_register
> 
> I think there is definitely scope for that - but it's less obvious how
> to do it.
> i.e. in its current form DEFER_TASKRUN requires the GETEVENTS to be
> submitted on the same task as the initial submission, but with SQPOLL
> the submission task is a kernel thread, so there would have to be some
> difference in the API.

Yea, just like what I said: in sqpoll mode we shouldn't handle the tw
in io_uring_enter.

> 
> As an idea for a later patch set - perhaps the semantics should be to
> keep task work local but only run it once submissions have been
> processed for a ctx. I suspect that will require some care to ensure
> the wakeup flag is set correctly and that it cleans up properly.
> 

Yea, it should be a separate follow-up patchset. Currently there is an
io_run_task_work() call after sqes have been submitted for all ctxs and
before going to sleep; that may be a good place for it. I haven't
thought about it in detail, but there should be a viable way.
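
Something along these lines is what I have in mind (just a rough sketch
of the spot in io_sq_thread(), not meant to compile or be correct as
is; io_run_local_work() is the helper added in patch 4):

        /* in the sqpoll loop: after submitting for each ctx, drain that
         * ctx's local list directly instead of waiting for a later
         * io_uring_enter() from a user task */
        list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
                ret = __io_sq_thread(ctx, cap_entries);
                if (ret > 0)
                        sqt_spin = true;
                if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
                        io_run_local_work(ctx, false);
        }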

> Dylan
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-22 11:34   ` Pavel Begunkov
  2022-08-29  6:32     ` Hao Xu
@ 2022-08-30  9:54     ` Dylan Yudaken
  2022-08-30 10:29       ` Pavel Begunkov
  1 sibling, 1 reply; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-30  9:54 UTC (permalink / raw)
  To: [email protected], [email protected], [email protected]
  Cc: Kernel Team

On Mon, 2022-08-22 at 12:34 +0100, Pavel Begunkov wrote:
> On 8/19/22 13:19, Dylan Yudaken wrote:
> > Allow deferring async tasks until the user calls io_uring_enter(2)
> > with
> > the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
> > io_uring_setup time. This functionality requires that the later
> > io_uring_enter will be called from the same submission task, and
> > therefore
> > restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is
> > also
> > set.
> 
> Looks ok, a couple of small comments below, but I don't see anything
> blocking it.
> 
> > Being able to hand pick when tasks are run prevents the problem
> > where
> > there is current work to be done, however task work runs anyway.
> > 
> > For example, a common workload would obtain a batch of CQEs, and
> > process
> > each one. Interrupting this to additional taskwork would add
> > latency but
> > not gain anything. If instead task work is deferred to just before
> > more
> > CQEs are obtained then no additional latency is added.
> > 
> > The way this is implemented is by trying to keep task work local to
> > a
> > io_ring_ctx, rather than to the submission task. This is required,
> > as the
> > application will want to wake up only a single io_ring_ctx at a
> > time to
> > process work, and so the lists of work have to be kept separate.
> > 
> > This has some other benefits like not having to check the task
> > continually
> > in handle_tw_list (and potentially unlocking/locking those), and
> > reducing
> > locks in the submit & process completions path.
> > 
> > There are networking cases where using this option can reduce
> > request
> > latency by 50%. For example a contrived example using [1] where the
> > client
> > sends 2k data and receives the same data back while doing some
> > system
> > calls (to trigger task work) shows this reduction. The reason ends
> > up
> > being that if sending responses is delayed by processing task work,
> > then
> > the client side sits idle. Whereas reordering the sends first means
> > that
> > the client runs it's workload in parallel with the local task work.
> 
> Quite contrived, for some it may cut latency in half but for others
> as easily increate it twofold. In any case, it's not a critique of
> the
> feature as it's optional, but rather raises a question whether we
> need to add some fairness / scheduling here.
> 
> > [1]:
> > Using https://github.com/DylanZA/netbench/tree/defer_run
> > Client:
> > ./netbench  --client_only 1 --control_port 10000 --host <host> --tx
> > "epoll --threads 16 --per_thread 1 --size 2048 --resp 2048 --
> > workload 1000"
> > Server:
> > ./netbench  --server_only 1 --control_port 10000  --rx "io_uring --
> > defer_taskrun 0 --workload 100"   --rx "io_uring  --defer_taskrun 1
> > --workload 100"
> > 
> > Signed-off-by: Dylan Yudaken <[email protected]>
> > ---
> 
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index 53696dd90626..6572d2276750 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> [...]
> 
> > +int io_run_local_work(struct io_ring_ctx *ctx, bool locked)
> > +{
> > +       struct llist_node *node;
> > +       struct llist_node fake;
> > +       struct llist_node *current_final = NULL;
> > +       int ret;
> > +
> > +       if (unlikely(ctx->submitter_task != current)) {
> > +               if (locked)
> > +                       mutex_unlock(&ctx->uring_lock);
> > +
> > +               /* maybe this is before any submissions */
> > +               if (!ctx->submitter_task)
> > +                       return 0;
> > +
> > +               return -EEXIST;
> > +       }
> > +
> > +       if (!locked)
> > +               locked = mutex_trylock(&ctx->uring_lock);
> > +
> > +       node = io_llist_xchg(&ctx->work_llist, &fake);
> > +       ret = 0;
> > +again:
> > +       while (node != current_final) {
> > +               struct llist_node *next = node->next;
> > +               struct io_kiocb *req = container_of(node, struct
> > io_kiocb,
> > +                                                  
> > io_task_work.node);
> > +               prefetch(container_of(next, struct io_kiocb,
> > io_task_work.node));
> > +               req->io_task_work.func(req, &locked);
> > +               ret++;
> > +               node = next;
> > +       }
> > +
> > +       if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
> > +               atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings-
> > >sq_flags);
> > +
> > +       node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
> > +       if (node != &fake) {
> > +               current_final = &fake;
> > +               node = io_llist_xchg(&ctx->work_llist, &fake);
> > +               goto again;
> > +       }
> > +
> > +       if (locked) {
> > +               io_submit_flush_completions(ctx);
> > +               mutex_unlock(&ctx->uring_lock);
> > +       }
> > +       return ret;
> > +}
> 
> I was thinking about:
> 
> int io_run_local_work(struct io_ring_ctx *ctx, bool *locked)
> {
>         locked = try_lock();
> }
> 
> bool locked = false;
> io_run_local_work(ctx, *locked);
> if (locked)
>         unlock();
> 
> // or just as below when already holding it
> bool locked = true;
> io_run_local_work(ctx, *locked);
> 
> Which would replace
> 
> if (DEFER) {
>         // we're assuming that it'll unlock
>         io_run_local_work(true);
> } else {
>         unlock();
> }
> 
> with
> 
> if (DEFER) {
>         bool locked = true;
>         io_run_local_work(&locked);
> }
> unlock();
> 
> But anyway, it can be mulled later.

I think there is an easier way to clean it up if we allow an extra
unlock/lock in io_uring_enter (see below). Will do that in v4

> 
> 
> > -int io_run_task_work_sig(void)
> > +int io_run_task_work_sig(struct io_ring_ctx *ctx)
> >   {
> > -       if (io_run_task_work())
> > +       if (io_run_task_work_ctx(ctx))
> >                 return 1;
> >         if (task_sigpending(current))
> >                 return -EINTR;
> > @@ -2196,7 +2294,7 @@ static inline int
> > io_cqring_wait_schedule(struct io_ring_ctx *ctx,
> >         unsigned long check_cq;
> >   
> >         /* make sure we run task_work before checking for signals
> > */
> > -       ret = io_run_task_work_sig();
> > +       ret = io_run_task_work_sig(ctx);
> >         if (ret || io_should_wake(iowq))
> >                 return ret;
> >   
> > @@ -2230,7 +2328,7 @@ static int io_cqring_wait(struct io_ring_ctx
> > *ctx, int min_events,
> >                 io_cqring_overflow_flush(ctx);
> >                 if (io_cqring_events(ctx) >= min_events)
> >                         return 0;
> > -               if (!io_run_task_work())
> > +               if (!io_run_task_work_ctx(ctx))
> >                         break;
> >         } while (1);
> >   
> > @@ -2573,6 +2671,9 @@ static __cold void io_ring_exit_work(struct
> > work_struct *work)
> >          * as nobody else will be looking for them.
> >          */
> >         do {
> > +               if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> > +                       io_move_task_work_from_local(ctx);
> > +
> >                 while (io_uring_try_cancel_requests(ctx, NULL,
> > true))
> >                         cond_resched();
> >   
> > @@ -2768,6 +2869,8 @@ static __cold bool
> > io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
> >                 }
> >         }
> >   
> > +       if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
> > +               ret |= io_run_local_work(ctx, false) > 0;
> >         ret |= io_cancel_defer_files(ctx, task, cancel_all);
> >         mutex_lock(&ctx->uring_lock);
> >         ret |= io_poll_remove_all(ctx, task, cancel_all);
> > @@ -3057,10 +3160,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned
> > int, fd, u32, to_submit,
> >                 }
> >                 if ((flags & IORING_ENTER_GETEVENTS) && ctx-
> > >syscall_iopoll)
> >                         goto iopoll_locked;
> > +               if ((flags & IORING_ENTER_GETEVENTS) &&
> > +                       (ctx->flags & IORING_SETUP_DEFER_TASKRUN))
> > {
> > +                       int ret2 = io_run_local_work(ctx, true);
> > +
> > +                       if (unlikely(ret2 < 0))
> > +                               goto out;
> 
> It's an optimisation and we don't have to handle errors here,
> let's ignore them and make it looking a bit better.

I'm not convinced about that - as then there is no way the application
will know it is trying to complete events on the wrong thread. Work
will just silently pile up instead.
That being said - with the changes below I can just get rid of this
code I think.
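
To be concrete, what I want the application to be able to observe is
roughly this (a sketch only - raw syscall for illustration, the error
value is the -EEXIST proposed in this series, and ring_fd is assumed to
be a ring set up with DEFER_TASKRUN):

        #include <errno.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <linux/io_uring.h>

        /* waiting on the ring from a thread that is not the submitter:
         * the caller should get told, rather than the deferred work
         * silently piling up */
        static int wait_cqes_wrong_task(int ring_fd)
        {
                int ret = syscall(__NR_io_uring_enter, ring_fd, 0, 1,
                                  IORING_ENTER_GETEVENTS, NULL, 0);

                if (ret < 0 && errno == EEXIST)
                        return -EEXIST; /* not the submitter task */
                return ret;
        }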

> 
> > +                       goto getevents_ran_local;
> > +               }
> >                 mutex_unlock(&ctx->uring_lock);
> >         }
> > +
> >         if (flags & IORING_ENTER_GETEVENTS) {
> >                 int ret2;
> > +
> >                 if (ctx->syscall_iopoll) {
> >                         /*
> >                          * We disallow the app entering
> > submit/complete with
> > @@ -3081,6 +3194,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned
> > int, fd, u32, to_submit,
> >                         const sigset_t __user *sig;
> >                         struct __kernel_timespec __user *ts;
> >   
> > +                       if (ctx->flags &
> > IORING_SETUP_DEFER_TASKRUN) {
> 
> I think it should be in io_cqring_wait(), which calls it anyway
> in the beginning. Instead of
> 
>         do {
>                 io_cqring_overflow_flush(ctx);
>                 if (io_cqring_events(ctx) >= min_events)
>                         return 0;
>                 if (!io_run_task_work())
>                         break;
>         } while (1);
> 
> Let's have
> 
>         do {
>                 ret = io_run_task_work_ctx();
>                 // handle ret
>                 io_cqring_overflow_flush(ctx);
>                 if (io_cqring_events(ctx) >= min_events)
>                         return 0;
>         } while (1);

I think that is ok.
The downside is that it adds an extra lock/unlock of the ctx in some
cases. I assume that will be negligible?

> 
> > +                               ret2 = io_run_local_work(ctx,
> > false);
> > +                               if (unlikely(ret2 < 0))
> > +                                       goto getevents_out;
> > +                       }
> > +getevents_ran_local:
> >                         ret2 = io_get_ext_arg(flags, argp, &argsz,
> > &ts, &sig);
> >                         if (likely(!ret2)) {
> >                                 min_complete = min(min_complete,
> > @@ -3090,6 +3209,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int,
> > fd, u32, to_submit,
> >                         }
> >                 }
> >   
> > +getevents_out:
> >                 if (!ret) {
> >                         ret = ret2;
> >   
> > @@ -3289,17 +3409,29 @@ static __cold int io_uring_create(unsigned
> > entries, struct io_uring_params *p,
> >         if (ctx->flags & IORING_SETUP_SQPOLL) {
> >                 /* IPI related flags don't make sense with SQPOLL
> > */
> >                 if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
> > -                                 IORING_SETUP_TASKRUN_FLAG))
> > +                                 IORING_SETUP_TASKRUN_FLAG |
> > +                                 IORING_SETUP_DEFER_TASKRUN))
> 
> Sounds like we should also fail if SQPOLL is set, especially with
> the task check on the waiting side.
> 

That is what this code is doing I think? Did I miss something?


> >                         goto err;
> >                 ctx->notify_method = TWA_SIGNAL_NO_IPI;
> >         } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
> >                 ctx->notify_method = TWA_SIGNAL_NO_IPI;
> [...]
> >         mutex_lock(&ctx->uring_lock);
> >         ret = __io_uring_register(ctx, opcode, arg, nr_args);
> > diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> > index 2f73f83af960..a9fb115234af 100644
> > --- a/io_uring/io_uring.h
> > +++ b/io_uring/io_uring.h
> > @@ -26,7 +26,8 @@ enum {
> [...]
> > +static inline int io_run_task_work_unlock_ctx(struct io_ring_ctx
> > *ctx)
> > +{
> > +       int ret;
> > +
> > +       if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
> > +               ret = io_run_local_work(ctx, true);
> > +       } else {
> > +               mutex_unlock(&ctx->uring_lock);
> > +               ret = (int)io_run_task_work();
> 
> Why do we need a cast? let's keep the return type same

Ok I'll update the return types here


Dylan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-30  9:54     ` Dylan Yudaken
@ 2022-08-30 10:29       ` Pavel Begunkov
  0 siblings, 0 replies; 18+ messages in thread
From: Pavel Begunkov @ 2022-08-30 10:29 UTC (permalink / raw)
  To: Dylan Yudaken, [email protected], [email protected]; +Cc: Kernel Team

On 8/30/22 10:54, Dylan Yudaken wrote:
> On Mon, 2022-08-22 at 12:34 +0100, Pavel Begunkov wrote:
[...]
>>> +
>>> +       node = io_llist_cmpxchg(&ctx->work_llist, &fake, NULL);
>>> +       if (node != &fake) {
>>> +               current_final = &fake;
>>> +               node = io_llist_xchg(&ctx->work_llist, &fake);
>>> +               goto again;
>>> +       }
>>> +
>>> +       if (locked) {
>>> +               io_submit_flush_completions(ctx);
>>> +               mutex_unlock(&ctx->uring_lock);
>>> +       }
>>> +       return ret;
>>> +}
>>
>> I was thinking about:
>>
>> int io_run_local_work(struct io_ring_ctx *ctx, bool *locked)
>> {
>>          locked = try_lock();
>> }
>>
>> bool locked = false;
>> io_run_local_work(ctx, *locked);
>> if (locked)
>>          unlock();
>>
>> // or just as below when already holding it
>> bool locked = true;
>> io_run_local_work(ctx, *locked);
>>
>> Which would replace
>>
>> if (DEFER) {
>>          // we're assuming that it'll unlock
>>          io_run_local_work(true);
>> } else {
>>          unlock();
>> }
>>
>> with
>>
>> if (DEFER) {
>>          bool locked = true;
>>          io_run_local_work(&locked);
>> }
>> unlock();
>>
>> But anyway, it can be mulled later.
> 
> I think there is an easier way to clean it up if we allow an extra
> unlock/lock in io_uring_enter (see below). Will do that in v4

fwiw, I'm fine with the current code, the rest can
be cleaned up later if you'd prefer so.

[...]
>>> @@ -3057,10 +3160,20 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned
>>> int, fd, u32, to_submit,
>>>                  }
>>>                  if ((flags & IORING_ENTER_GETEVENTS) && ctx-
>>>> syscall_iopoll)
>>>                          goto iopoll_locked;
>>> +               if ((flags & IORING_ENTER_GETEVENTS) &&
>>> +                       (ctx->flags & IORING_SETUP_DEFER_TASKRUN))
>>> {
>>> +                       int ret2 = io_run_local_work(ctx, true);
>>> +
>>> +                       if (unlikely(ret2 < 0))
>>> +                               goto out;
>>
>> It's an optimisation and we don't have to handle errors here,
>> let's ignore them and make it looking a bit better.
> 
> I'm not convinced about that - as then there is no way the application
> will know it is trying to complete events on the wrong thread. Work
> will just silently pile up instead.

by optimisation I mean exactly this chunk right after submission.
If it's the wrong thread this will be ignored, control flow will then
fall into cq_wait and fail there, returning an error. So userspace
should still get an error in the end, but the handling would be
consolidated in cq_wait.

> That being said - with the changes below I can just get rid of this
> code I think.
> 
>>
>>> +                       goto getevents_ran_local;
>>> +               }
>>>                  mutex_unlock(&ctx->uring_lock);
>>>          }
>>> +
>>>          if (flags & IORING_ENTER_GETEVENTS) {
>>>                  int ret2;
>>> +
>>>                  if (ctx->syscall_iopoll) {
>>>                          /*
>>>                           * We disallow the app entering
>>> submit/complete with
>>> @@ -3081,6 +3194,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned
>>> int, fd, u32, to_submit,
>>>                          const sigset_t __user *sig;
>>>                          struct __kernel_timespec __user *ts;
>>>    
>>> +                       if (ctx->flags &
>>> IORING_SETUP_DEFER_TASKRUN) {
>>
>> I think it should be in io_cqring_wait(), which calls it anyway
>> in the beginning. Instead of
>>
>>          do {
>>                  io_cqring_overflow_flush(ctx);
>>                  if (io_cqring_events(ctx) >= min_events)
>>                          return 0;
>>                  if (!io_run_task_work())
>>                          break;
>>          } while (1);
>>
>> Let's have
>>
>>          do {
>>                  ret = io_run_task_work_ctx();
>>                  // handle ret
>>                  io_cqring_overflow_flush(ctx);
>>                  if (io_cqring_events(ctx) >= min_events)
>>                          return 0;
>>          } while (1);
> 
> I think that is ok.
> The downside is that it adds an extra lock/unlock of the ctx in some
> cases. I assume that will be negligible?

Not sure there will be any extra locking. IIRC, it was about replacing

// io_uring_enter() -> GETEVENTS path
run_tw();
// io_cqring_wait()
while (cqes_ready() < needed)
	run_tw();

With:

// io_uring_enter()
do {
	run_tw();
} while(cqes_ready() < needed);


>>> +                               ret2 = io_run_local_work(ctx,
>>> false);
>>> +                               if (unlikely(ret2 < 0))
>>> +                                       goto getevents_out;
>>> +                       }
>>> +getevents_ran_local:
>>>                          ret2 = io_get_ext_arg(flags, argp, &argsz,
>>> &ts, &sig);
>>>                          if (likely(!ret2)) {
>>>                                  min_complete = min(min_complete,
>>> @@ -3090,6 +3209,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int,
>>> fd, u32, to_submit,
>>>                          }
>>>                  }
>>>    
>>> +getevents_out:
>>>                  if (!ret) {
>>>                          ret = ret2;
>>>    
>>> @@ -3289,17 +3409,29 @@ static __cold int io_uring_create(unsigned
>>> entries, struct io_uring_params *p,
>>>          if (ctx->flags & IORING_SETUP_SQPOLL) {
>>>                  /* IPI related flags don't make sense with SQPOLL
>>> */
>>>                  if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
>>> -                                 IORING_SETUP_TASKRUN_FLAG))
>>> +                                 IORING_SETUP_TASKRUN_FLAG |
>>> +                                 IORING_SETUP_DEFER_TASKRUN))
>>
>> Sounds like we should also fail if SQPOLL is set, especially with
>> the task check on the waiting side.
>>
> 
> That is what this code is doing I think? Did I miss something?

Ok, great then

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-19 12:19 ` [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN Dylan Yudaken
  2022-08-22 11:34   ` Pavel Begunkov
@ 2022-08-30 13:19   ` Hao Xu
  2022-08-30 13:34     ` Dylan Yudaken
  1 sibling, 1 reply; 18+ messages in thread
From: Hao Xu @ 2022-08-30 13:19 UTC (permalink / raw)
  To: Dylan Yudaken, Jens Axboe, Pavel Begunkov, io-uring; +Cc: Kernel-team

On 8/19/22 20:19, Dylan Yudaken wrote:
> Allow deferring async tasks until the user calls io_uring_enter(2) with
> the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
> io_uring_setup time. This functionality requires that the later
> io_uring_enter will be called from the same submission task, and therefore
> restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is also
> set.
> 
> Being able to hand pick when tasks are run prevents the problem where
> there is current work to be done, however task work runs anyway.
> 
> For example, a common workload would obtain a batch of CQEs, and process
> each one. Interrupting this to additional taskwork would add latency but
> not gain anything. If instead task work is deferred to just before more
> CQEs are obtained then no additional latency is added.
> 
> The way this is implemented is by trying to keep task work local to a
> io_ring_ctx, rather than to the submission task. This is required, as the
> application will want to wake up only a single io_ring_ctx at a time to
> process work, and so the lists of work have to be kept separate.
> 
> This has some other benefits like not having to check the task continually
> in handle_tw_list (and potentially unlocking/locking those), and reducing
> locks in the submit & process completions path.
> 
> There are networking cases where using this option can reduce request
> latency by 50%. For example a contrived example using [1] where the client
> sends 2k data and receives the same data back while doing some system
> calls (to trigger task work) shows this reduction. The reason ends up
> being that if sending responses is delayed by processing task work, then
> the client side sits idle. Whereas reordering the sends first means that
> the client runs it's workload in parallel with the local task work.
> 

Sorry, it seems I misunderstood the purpose of this patchset. Allow me
to ask a question: we always submit sqes first and then handle task
work (in IORING_SETUP_COOP_TASKRUN mode), so how could the sending be
interrupted by task work?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-30 13:19   ` Hao Xu
@ 2022-08-30 13:34     ` Dylan Yudaken
  2022-08-30 14:04       ` Hao Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Dylan Yudaken @ 2022-08-30 13:34 UTC (permalink / raw)
  To: [email protected], [email protected], [email protected],
	[email protected]
  Cc: Kernel Team

On Tue, 2022-08-30 at 21:19 +0800, Hao Xu wrote:
> On 8/19/22 20:19, Dylan Yudaken wrote:
> > Allow deferring async tasks until the user calls io_uring_enter(2)
> > with
> > the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
> > io_uring_setup time. This functionality requires that the later
> > io_uring_enter will be called from the same submission task, and
> > therefore
> > restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is
> > also
> > set.
> > 
> > Being able to hand pick when tasks are run prevents the problem
> > where
> > there is current work to be done, however task work runs anyway.
> > 
> > For example, a common workload would obtain a batch of CQEs, and
> > process
> > each one. Interrupting this to additional taskwork would add
> > latency but
> > not gain anything. If instead task work is deferred to just before
> > more
> > CQEs are obtained then no additional latency is added.
> > 
> > The way this is implemented is by trying to keep task work local to
> > a
> > io_ring_ctx, rather than to the submission task. This is required,
> > as the
> > application will want to wake up only a single io_ring_ctx at a
> > time to
> > process work, and so the lists of work have to be kept separate.
> > 
> > This has some other benefits like not having to check the task
> > continually
> > in handle_tw_list (and potentially unlocking/locking those), and
> > reducing
> > locks in the submit & process completions path.
> > 
> > There are networking cases where using this option can reduce
> > request
> > latency by 50%. For example a contrived example using [1] where the
> > client
> > sends 2k data and receives the same data back while doing some
> > system
> > calls (to trigger task work) shows this reduction. The reason ends
> > up
> > being that if sending responses is delayed by processing task work,
> > then
> > the client side sits idle. Whereas reordering the sends first means
> > that
> > the client runs it's workload in parallel with the local task work.
> > 
> 
> Sorry, it seems I misunderstood the purpose of this patchset. Allow
> me to ask a question: we always submit sqes first and then handle
> task work (in IORING_SETUP_COOP_TASKRUN mode), so how could the
> sending be interrupted by task work?

IORING_SETUP_COOP_TASKRUN causes the task to not be interrupted just to
run task work; however, task work will still be run on every system
call even if completions are not about to be processed.

io_uring task work (unlike, say, epoll wakeups) can take a non-trivial
amount of time, and so running it closer to when it is actually needed
can reduce the latency of other unrelated operations by not
unnecessarily stalling them.
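
To make the intended usage concrete, roughly what an application would
do (a sketch with liburing-style calls; the DEFER_TASKRUN flag is the
one added in patch 4, so the headers/liburing would need to know about
it, and the queue depth here is arbitrary):

        #include <liburing.h>

        /* opt in at setup time; this series requires SINGLE_ISSUER
         * together with DEFER_TASKRUN */
        static int setup_deferred_ring(struct io_uring *ring)
        {
                struct io_uring_params p = { };

                p.flags = IORING_SETUP_SINGLE_ISSUER |
                          IORING_SETUP_DEFER_TASKRUN;
                return io_uring_queue_init_params(64, ring, &p);
        }

        /* from the same task that submits: deferred task work only
         * runs here, when we enter the kernel asking for completions */
        static int submit_and_reap(struct io_uring *ring)
        {
                return io_uring_submit_and_wait(ring, 1);
        }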


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN
  2022-08-30 13:34     ` Dylan Yudaken
@ 2022-08-30 14:04       ` Hao Xu
  0 siblings, 0 replies; 18+ messages in thread
From: Hao Xu @ 2022-08-30 14:04 UTC (permalink / raw)
  To: Dylan Yudaken, [email protected], [email protected],
	[email protected]
  Cc: Kernel Team

On 8/30/22 21:34, Dylan Yudaken wrote:
> On Tue, 2022-08-30 at 21:19 +0800, Hao Xu wrote:
>> On 8/19/22 20:19, Dylan Yudaken wrote:
>>> Allow deferring async tasks until the user calls io_uring_enter(2)
>>> with
>>> the IORING_ENTER_GETEVENTS flag. Enable this mode with a flag at
>>> io_uring_setup time. This functionality requires that the later
>>> io_uring_enter will be called from the same submission task, and
>>> therefore
>>> restrict this flag to work only when IORING_SETUP_SINGLE_ISSUER is
>>> also
>>> set.
>>>
>>> Being able to hand pick when tasks are run prevents the problem
>>> where
>>> there is current work to be done, however task work runs anyway.
>>>
>>> For example, a common workload would obtain a batch of CQEs, and
>>> process
>>> each one. Interrupting this to additional taskwork would add
>>> latency but
>>> not gain anything. If instead task work is deferred to just before
>>> more
>>> CQEs are obtained then no additional latency is added.
>>>
>>> The way this is implemented is by trying to keep task work local to
>>> a
>>> io_ring_ctx, rather than to the submission task. This is required,
>>> as the
>>> application will want to wake up only a single io_ring_ctx at a
>>> time to
>>> process work, and so the lists of work have to be kept separate.
>>>
>>> This has some other benefits like not having to check the task
>>> continually
>>> in handle_tw_list (and potentially unlocking/locking those), and
>>> reducing
>>> locks in the submit & process completions path.
>>>
>>> There are networking cases where using this option can reduce
>>> request
>>> latency by 50%. For example a contrived example using [1] where the
>>> client
>>> sends 2k data and receives the same data back while doing some
>>> system
>>> calls (to trigger task work) shows this reduction. The reason ends
>>> up
>>> being that if sending responses is delayed by processing task work,
>>> then
>>> the client side sits idle. Whereas reordering the sends first means
>>> that
>>> the client runs it's workload in parallel with the local task work.
>>>
>>
>> Sorry, it seems I misunderstood the purpose of this patchset. Allow
>> me to ask a question: we always submit sqes first and then handle
>> task work (in IORING_SETUP_COOP_TASKRUN mode), so how could the
>> sending be interrupted by task work?
> 
> IORING_SETUP_COOP_TASKRUN causes the task to not be interrupted just
> to run task work; however, task work will still be run on every
> system call even if completions are not about to be processed.
> 

gotcha, then sqpoll may not be a user of this feature for now.


> io_uring task work (unlike, say, epoll wakeups) can take a
> non-trivial amount of time, and so running it closer to when it is
> actually needed can reduce the latency of other unrelated operations
> by not unnecessarily stalling them.
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread


Thread overview: 18+ messages:
2022-08-19 12:19 [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 1/7] io_uring: remove unnecessary variable Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 2/7] io_uring: introduce io_has_work Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 3/7] io_uring: do not run task work at the start of io_uring_enter Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 4/7] io_uring: add IORING_SETUP_DEFER_TASKRUN Dylan Yudaken
2022-08-22 11:34   ` Pavel Begunkov
2022-08-29  6:32     ` Hao Xu
2022-08-30  7:23       ` Dylan Yudaken
2022-08-30  7:54         ` Hao Xu
2022-08-30  9:54     ` Dylan Yudaken
2022-08-30 10:29       ` Pavel Begunkov
2022-08-30 13:19   ` Hao Xu
2022-08-30 13:34     ` Dylan Yudaken
2022-08-30 14:04       ` Hao Xu
2022-08-19 12:19 ` [PATCH for-next v3 5/7] io_uring: move io_eventfd_put Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 6/7] io_uring: signal registered eventfd to process deferred task work Dylan Yudaken
2022-08-19 12:19 ` [PATCH for-next v3 7/7] io_uring: trace local task work run Dylan Yudaken
2022-08-29  7:01 ` [PATCH for-next v3 0/7] io_uring: defer task work to when it is needed Hao Xu
