public inbox for [email protected]
* [PATCH for-next 00/10] io_uring: batch multishot completions
@ 2022-11-21 10:03 Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 01/10] io_uring: merge io_req_tw_post and io_req_task_complete Dylan Yudaken
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Multishot completions currently all go through io_post_aux_cqe, which takes and
releases the completion spinlock for each CQE and may also signal a registered
eventfd. This can slow down applications that use these features.
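
(For illustration, the kind of workload affected is a multishot request that
keeps posting auxiliary CQEs - e.g. a multishot accept loop. The sketch below
only shows that workload from userspace with liburing; the handle_conn()
helper is hypothetical and not part of this series.)

/* Sketch: multishot accept. Each incoming connection posts an auxiliary CQE
 * (flagged IORING_CQE_F_MORE), and each of those currently goes through
 * io_post_aux_cqe() individually in the kernel. */
#include <liburing.h>

extern void handle_conn(int fd);	/* hypothetical application handler */

static void accept_loop(int listen_fd)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	io_uring_queue_init(64, &ring, 0);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_multishot_accept(sqe, listen_fd, NULL, NULL, 0);
	io_uring_submit(&ring);

	for (;;) {
		if (io_uring_wait_cqe(&ring, &cqe))
			break;
		/* res is the accepted fd; IORING_CQE_F_MORE means the request
		 * stays armed and will post more CQEs without resubmission. */
		int fd = cqe->res;
		unsigned more = cqe->flags & IORING_CQE_F_MORE;

		io_uring_cqe_seen(&ring, cqe);
		if (fd >= 0)
			handle_conn(fd);
		if (!more)
			break;		/* multishot terminated */
	}
	io_uring_queue_exit(&ring);
}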

This series allows the posted completions to be batched using the same
IO_URING_F_COMPLETE_DEFER mechanism that already exists for non-multishot
completions. A critical property is that all multishot completions must be
flushed to the CQ ring before the final non-multishot completion (say an
error), or else ordering will break. This implies that if some completions
were deferred, then the rest must also be deferred to keep that ordering. To
make that possible, the first few patches move all the completion code into a
simpler path that defers completions when possible.

The batching is done by keeping an array of 16 CQEs in the ring and appending
to it rather than posting immediately; when the array fills up, the pending
entries are flushed to the CQ ring.
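
(Conceptually the deferred path looks roughly like the sketch below - a
simplified illustration of what patches 8 and 9 implement, with a hypothetical
function name, not the literal kernel code:)

/* Simplified sketch of the batching idea; requires ctx->uring_lock held.
 * See patches 8 and 9 for the real implementation. */
static void post_aux_cqe_deferred(struct io_ring_ctx *ctx,
				  u64 user_data, s32 res, u32 cflags)
{
	struct io_submit_state *state = &ctx->submit_state;
	struct io_uring_cqe *cqe;

	if (state->cqes_count == ARRAY_SIZE(state->cqes)) {
		/* Batch is full: flush the pending aux CQEs to the CQ ring.
		 * They must hit the ring before any deferred request
		 * completions so that CQE ordering is preserved. */
		io_cq_lock(ctx);
		__io_flush_post_cqes(ctx);
		spin_unlock(&ctx->completion_lock);
	}

	cqe = &state->cqes[state->cqes_count++];
	cqe->user_data = user_data;
	cqe->res = res;
	cqe->flags = cflags;
}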

A microbenchmark was run ([1]) to test this and showed a 2.3x rps improvement
(8.3 M/s vs 19.3 M/s).

Patches 1-7 clean up the completion paths
Patch 8 introduces the cqe array
Patch 9 allows io_post_aux_cqe to use the cqe array to defer completions
Patch 10 enables deferred completions for multishot polled requests

[1]: https://github.com/DylanZA/liburing/commit/9ac66b36bcf4477bfafeff1c5f107896b7ae31cf
Run with $ make -j && ./benchmark/reg.b -s 1 -t 2000 -r 10

Note: I expect this will have a merge conflict with the recent
"io_uring: inline __io_req_complete_post()" commit. I can respin once that
is in for-next.

Dylan Yudaken (10):
  io_uring: merge io_req_tw_post and io_req_task_complete
  io_uring: __io_req_complete should defer if available
  io_uring: split io_req_complete_failed into post/defer
  io_uring: lock on remove in io_apoll_task_func
  io_uring: timeout should use io_req_task_complete
  io_uring: simplify io_issue_sqe
  io_uring: make io_req_complete_post static
  io_uring: allow defer completion for aux posted cqes
  io_uring: allow io_post_aux_cqe to defer completion
  io_uring: allow multishot polled reqs to defer completion

 include/linux/io_uring_types.h |   2 +
 io_uring/io_uring.c            | 133 +++++++++++++++++++++++++--------
 io_uring/io_uring.h            |   5 +-
 io_uring/msg_ring.c            |  10 ++-
 io_uring/net.c                 |  15 ++--
 io_uring/poll.c                |   7 +-
 io_uring/rsrc.c                |   4 +-
 io_uring/timeout.c             |   3 +-
 8 files changed, 126 insertions(+), 53 deletions(-)


base-commit: 40fa774af7fd04d06014ac74947c351649b6f64f
-- 
2.30.2



* [PATCH for-next 01/10] io_uring: merge io_req_tw_post and io_req_task_complete
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 02/10] io_uring: __io_req_complete should defer if available Dylan Yudaken
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Merge these two functions, as they have the same logic.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9e868a83e472..f15aca039db6 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1227,15 +1227,18 @@ int io_run_local_work(struct io_ring_ctx *ctx)
 	return ret;
 }
 
-static void io_req_tw_post(struct io_kiocb *req, bool *locked)
+void io_req_task_complete(struct io_kiocb *req, bool *locked)
 {
-	io_req_complete_post(req);
+	if (*locked)
+		io_req_complete_defer(req);
+	else
+		io_req_complete_post(req);
 }
 
 void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
 {
 	io_req_set_res(req, res, cflags);
-	req->io_task_work.func = io_req_tw_post;
+	req->io_task_work.func = io_req_task_complete;
 	io_req_task_work_add(req);
 }
 
@@ -1464,14 +1467,6 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 	return ret;
 }
 
-void io_req_task_complete(struct io_kiocb *req, bool *locked)
-{
-	if (*locked)
-		io_req_complete_defer(req);
-	else
-		io_req_complete_post(req);
-}
-
 /*
  * After the iocb has been issued, it's safe to be found on the poll list.
  * Adding the kiocb to the list AFTER submission ensures that we don't
-- 
2.30.2



* [PATCH for-next 02/10] io_uring: __io_req_complete should defer if available
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 01/10] io_uring: merge io_req_tw_post and io_req_task_complete Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 03/10] io_uring: split io_req_complete_failed into post/defer Dylan Yudaken
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

For consistency, always defer the completion if IO_URING_F_COMPLETE_DEFER is
set in the issue flags.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index f15aca039db6..208afb944b0c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -862,7 +862,10 @@ void io_req_complete_post(struct io_kiocb *req)
 
 inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
 {
-	io_req_complete_post(req);
+	if (issue_flags & IO_URING_F_COMPLETE_DEFER)
+		io_req_complete_defer(req);
+	else
+		io_req_complete_post(req);
 }
 
 void io_req_complete_failed(struct io_kiocb *req, s32 res)
-- 
2.30.2



* [PATCH for-next 03/10] io_uring: split io_req_complete_failed into post/defer
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 01/10] io_uring: merge io_req_tw_post and io_req_task_complete Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 02/10] io_uring: __io_req_complete should defer if available Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 04/10] io_uring: lock on remove in io_apoll_task_func Dylan Yudaken
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Different call sites want to either defer the failure completion when deferral
is available, or post the completion immediately when the lock is not
necessarily held.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 28 ++++++++++++++++++++--------
 io_uring/io_uring.h |  2 +-
 io_uring/poll.c     |  2 +-
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 208afb944b0c..d9bd18e3a603 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -868,7 +868,7 @@ inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
 		io_req_complete_post(req);
 }
 
-void io_req_complete_failed(struct io_kiocb *req, s32 res)
+static inline void io_req_prep_failed(struct io_kiocb *req, s32 res)
 {
 	const struct io_op_def *def = &io_op_defs[req->opcode];
 
@@ -876,6 +876,18 @@ void io_req_complete_failed(struct io_kiocb *req, s32 res)
 	io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
 	if (def->fail)
 		def->fail(req);
+}
+
+static void io_req_defer_failed(struct io_kiocb *req, s32 res)
+	__must_hold(&ctx->uring_lock)
+{
+	io_req_prep_failed(req, res);
+	io_req_complete_defer(req);
+}
+
+void io_req_post_failed(struct io_kiocb *req, s32 res)
+{
+	io_req_prep_failed(req, res);
 	io_req_complete_post(req);
 }
 
@@ -1249,7 +1261,7 @@ static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
 {
 	/* not needed for normal modes, but SQPOLL depends on it */
 	io_tw_lock(req->ctx, locked);
-	io_req_complete_failed(req, req->cqe.res);
+	io_req_defer_failed(req, req->cqe.res);
 }
 
 void io_req_task_submit(struct io_kiocb *req, bool *locked)
@@ -1259,7 +1271,7 @@ void io_req_task_submit(struct io_kiocb *req, bool *locked)
 	if (likely(!(req->task->flags & PF_EXITING)))
 		io_queue_sqe(req);
 	else
-		io_req_complete_failed(req, -EFAULT);
+		io_req_defer_failed(req, -EFAULT);
 }
 
 void io_req_task_queue_fail(struct io_kiocb *req, int ret)
@@ -1637,7 +1649,7 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	ret = io_req_prep_async(req);
 	if (ret) {
 fail:
-		io_req_complete_failed(req, ret);
+		io_req_defer_failed(req, ret);
 		return;
 	}
 	io_prep_async_link(req);
@@ -1867,7 +1879,7 @@ static void io_queue_async(struct io_kiocb *req, int ret)
 	struct io_kiocb *linked_timeout;
 
 	if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
-		io_req_complete_failed(req, ret);
+		io_req_defer_failed(req, ret);
 		return;
 	}
 
@@ -1917,14 +1929,14 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		 */
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
-		io_req_complete_failed(req, req->cqe.res);
+		io_req_defer_failed(req, req->cqe.res);
 	} else if (unlikely(req->ctx->drain_active)) {
 		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
 		if (unlikely(ret))
-			io_req_complete_failed(req, ret);
+			io_req_defer_failed(req, ret);
 		else
 			io_queue_iowq(req, NULL);
 	}
@@ -2851,7 +2863,7 @@ static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
 	while (!list_empty(&list)) {
 		de = list_first_entry(&list, struct io_defer_entry, list);
 		list_del_init(&de->list);
-		io_req_complete_failed(de->req, -ECANCELED);
+		io_req_post_failed(de->req, -ECANCELED);
 		kfree(de);
 	}
 	return true;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index af3f82bd4017..ee3139947fcc 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -29,7 +29,7 @@ bool io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(struct io_ring_ctx *ctx);
 int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
 int io_run_local_work(struct io_ring_ctx *ctx);
-void io_req_complete_failed(struct io_kiocb *req, s32 res);
+void io_req_post_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 989b72a47331..e0a4faa010b3 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -304,7 +304,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 	else if (ret == IOU_POLL_DONE)
 		io_req_task_submit(req, locked);
 	else
-		io_req_complete_failed(req, ret);
+		io_req_post_failed(req, ret);
 }
 
 static void __io_poll_execute(struct io_kiocb *req, int mask)
-- 
2.30.2



* [PATCH for-next 04/10] io_uring: lock on remove in io_apoll_task_func
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (2 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 03/10] io_uring: split io_req_complete_failed into post/defer Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 05/10] io_uring: timeout should use io_req_task_complete Dylan Yudaken
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

This allows using io_req_defer_failed rather than posting in all cases. The
alternative would be to branch on *locked and decide whether to post or defer
the completion. However, all of the non-error paths in io_poll_check_events
that do not return IOU_POLL_NO_ACTION end up taking the lock anyway, and
locking here reduces the logic complexity, so it seems reasonable to always
lock and then always defer the completion on failure.

This also means that only io_req_defer_failed needs exporting from
io_uring.h.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 4 ++--
 io_uring/io_uring.h | 2 +-
 io_uring/poll.c     | 5 +++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d9bd18e3a603..03946f46dadc 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -878,14 +878,14 @@ static inline void io_req_prep_failed(struct io_kiocb *req, s32 res)
 		def->fail(req);
 }
 
-static void io_req_defer_failed(struct io_kiocb *req, s32 res)
+void io_req_defer_failed(struct io_kiocb *req, s32 res)
 	__must_hold(&ctx->uring_lock)
 {
 	io_req_prep_failed(req, res);
 	io_req_complete_defer(req);
 }
 
-void io_req_post_failed(struct io_kiocb *req, s32 res)
+static void io_req_post_failed(struct io_kiocb *req, s32 res)
 {
 	io_req_prep_failed(req, res);
 	io_req_complete_post(req);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index ee3139947fcc..1daf236513cc 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -29,7 +29,7 @@ bool io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(struct io_ring_ctx *ctx);
 int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
 int io_run_local_work(struct io_ring_ctx *ctx);
-void io_req_post_failed(struct io_kiocb *req, s32 res);
+void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
diff --git a/io_uring/poll.c b/io_uring/poll.c
index e0a4faa010b3..2b77d18a67a7 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -296,15 +296,16 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 	if (ret == IOU_POLL_NO_ACTION)
 		return;
 
+	io_tw_lock(req->ctx, locked);
 	io_poll_remove_entries(req);
 	io_poll_tw_hash_eject(req, locked);
 
 	if (ret == IOU_POLL_REMOVE_POLL_USE_RES)
-		io_req_complete_post(req);
+		io_req_task_complete(req, locked);
 	else if (ret == IOU_POLL_DONE)
 		io_req_task_submit(req, locked);
 	else
-		io_req_post_failed(req, ret);
+		io_req_defer_failed(req, ret);
 }
 
 static void __io_poll_execute(struct io_kiocb *req, int mask)
-- 
2.30.2



* [PATCH for-next 05/10] io_uring: timeout should use io_req_task_complete
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (3 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 04/10] io_uring: lock on remove in io_apoll_task_func Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 06/10] io_uring: simplify io_issue_sqe Dylan Yudaken
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Allow timeouts to defer completions if the ring is locked.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/timeout.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index e8a8c2099480..26b61e62aa9a 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -282,12 +282,11 @@ static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
 			ret = io_try_cancel(req->task->io_uring, &cd, issue_flags);
 		}
 		io_req_set_res(req, ret ?: -ETIME, 0);
-		io_req_complete_post(req);
 		io_put_req(prev);
 	} else {
 		io_req_set_res(req, -ETIME, 0);
-		io_req_complete_post(req);
 	}
+	io_req_task_complete(req, locked);
 }
 
 static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
-- 
2.30.2



* [PATCH for-next 06/10] io_uring: simplify io_issue_sqe
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (4 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 05/10] io_uring: timeout should use io_req_task_complete Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 07/10] io_uring: make io_req_complete_post static Dylan Yudaken
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

io_issue_sqe can reuse __io_req_complete for its completion logic.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 03946f46dadc..2177b3ef094a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1742,12 +1742,9 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 	if (creds)
 		revert_creds(creds);
 
-	if (ret == IOU_OK) {
-		if (issue_flags & IO_URING_F_COMPLETE_DEFER)
-			io_req_complete_defer(req);
-		else
-			io_req_complete_post(req);
-	} else if (ret != IOU_ISSUE_SKIP_COMPLETE)
+	if (ret == IOU_OK)
+		__io_req_complete(req, issue_flags);
+	else if (ret != IOU_ISSUE_SKIP_COMPLETE)
 		return ret;
 
 	/* If the op doesn't have a file, we're not polling for it */
-- 
2.30.2



* [PATCH for-next 07/10] io_uring: make io_req_complete_post static
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (5 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 06/10] io_uring: simplify io_issue_sqe Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 08/10] io_uring: allow defer completion for aux posted cqes Dylan Yudaken
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

This is now only called from two functions in io_uring.c, so remove the
header export.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 2 +-
 io_uring/io_uring.h | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2177b3ef094a..715ded749110 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -849,7 +849,7 @@ static void __io_req_complete_put(struct io_kiocb *req)
 	}
 }
 
-void io_req_complete_post(struct io_kiocb *req)
+static void io_req_complete_post(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 1daf236513cc..bfe1b5488c25 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -31,7 +31,6 @@ int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
 int io_run_local_work(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
-void io_req_complete_post(struct io_kiocb *req);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
-- 
2.30.2



* [PATCH for-next 08/10] io_uring: allow defer completion for aux posted cqes
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (6 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 07/10] io_uring: make io_req_complete_post static Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion Dylan Yudaken
  2022-11-21 10:03 ` [PATCH for-next 10/10] io_uring: allow multishot polled reqs " Dylan Yudaken
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Multishot ops cannot use the compl_reqs list, as the request must stay in the
poll list; that means they currently run each completion individually, without
benefiting from batching.

Introduce batching infrastructure for small (i.e. 16 byte) CQEs only. This
restriction is fine because there are no use cases that post 32 byte CQEs
this way.

Keep a batch of up to 16 posted results in the ring, and flush it in the same
way as compl_reqs.

16 was chosen through experimentation on a microbenchmark ([1]), balanced
against not increasing the size of the ring too much. This increases the size
from 1216 to 1472 bytes.

[1]: https://github.com/DylanZA/liburing/commit/9ac66b36bcf4477bfafeff1c5f107896b7ae31cf
Run with $ make -j && ./benchmark/reg.b -s 1 -t 2000 -r 10
Gives results (batch size vs rate):
baseline	 8309 k/s
8		18807 k/s
16		19338 k/s
32		20134 k/s

Signed-off-by: Dylan Yudaken <[email protected]>
---
 include/linux/io_uring_types.h |  2 ++
 io_uring/io_uring.c            | 49 +++++++++++++++++++++++++++++++---
 2 files changed, 48 insertions(+), 3 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index f5b687a787a3..accdfecee953 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -174,7 +174,9 @@ struct io_submit_state {
 	bool			plug_started;
 	bool			need_plug;
 	unsigned short		submit_nr;
+	unsigned int		cqes_count;
 	struct blk_plug		plug;
+	struct io_uring_cqe	cqes[16];
 };
 
 struct io_ev_fd {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 715ded749110..c797f9a75dfe 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -167,7 +167,8 @@ EXPORT_SYMBOL(io_uring_get_socket);
 
 static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
 {
-	if (!wq_list_empty(&ctx->submit_state.compl_reqs))
+	if (!wq_list_empty(&ctx->submit_state.compl_reqs) ||
+	    ctx->submit_state.cqes_count)
 		__io_submit_flush_completions(ctx);
 }
 
@@ -807,6 +808,43 @@ bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 	return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
 }
 
+static bool __io_fill_cqe_small(struct io_ring_ctx *ctx,
+				 struct io_uring_cqe *cqe)
+{
+	struct io_uring_cqe *cqe_out;
+
+	cqe_out = io_get_cqe(ctx);
+	if (unlikely(!cqe_out)) {
+		return io_cqring_event_overflow(ctx, cqe->user_data,
+						cqe->res, cqe->flags,
+						0, 0);
+	}
+
+	trace_io_uring_complete(ctx, NULL, cqe->user_data,
+				cqe->res, cqe->flags,
+				0, 0);
+
+	memcpy(cqe_out, cqe, sizeof(*cqe_out));
+
+	if (ctx->flags & IORING_SETUP_CQE32) {
+		WRITE_ONCE(cqe_out->big_cqe[0], 0);
+		WRITE_ONCE(cqe_out->big_cqe[1], 0);
+	}
+	return true;
+}
+
+static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
+	__must_hold(&ctx->uring_lock)
+{
+	struct io_submit_state *state = &ctx->submit_state;
+	unsigned int i;
+
+	lockdep_assert_held(&ctx->uring_lock);
+	for (i = 0; i < state->cqes_count; i++)
+		__io_fill_cqe_small(ctx, state->cqes + i);
+	state->cqes_count = 0;
+}
+
 bool io_post_aux_cqe(struct io_ring_ctx *ctx,
 		     u64 user_data, s32 res, u32 cflags)
 {
@@ -1352,6 +1390,9 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 	struct io_submit_state *state = &ctx->submit_state;
 
 	io_cq_lock(ctx);
+	/* post must come first to preserve CQE ordering */
+	if (state->cqes_count)
+		__io_flush_post_cqes(ctx);
 	wq_list_for_each(node, prev, &state->compl_reqs) {
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
@@ -1361,8 +1402,10 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 	}
 	__io_cq_unlock_post(ctx);
 
-	io_free_batch_list(ctx, state->compl_reqs.first);
-	INIT_WQ_LIST(&state->compl_reqs);
+	if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
+		io_free_batch_list(ctx, state->compl_reqs.first);
+		INIT_WQ_LIST(&state->compl_reqs);
+	}
 }
 
 /*
-- 
2.30.2



* [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (7 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 08/10] io_uring: allow defer completion for aux posted cqes Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  2022-11-21 16:55   ` Jens Axboe
  2022-11-21 17:31   ` Jens Axboe
  2022-11-21 10:03 ` [PATCH for-next 10/10] io_uring: allow multishot polled reqs " Dylan Yudaken
  9 siblings, 2 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Use the just introduced deferred post cqe completion state when possible
in io_post_aux_cqe.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 21 ++++++++++++++++++++-
 io_uring/io_uring.h |  2 +-
 io_uring/msg_ring.c | 10 ++++++----
 io_uring/net.c      | 15 ++++++++-------
 io_uring/poll.c     |  2 +-
 io_uring/rsrc.c     |  4 ++--
 6 files changed, 38 insertions(+), 16 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c797f9a75dfe..5c240d01278a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -845,11 +845,30 @@ static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
 	state->cqes_count = 0;
 }
 
-bool io_post_aux_cqe(struct io_ring_ctx *ctx,
+bool io_post_aux_cqe(struct io_ring_ctx *ctx, bool defer,
 		     u64 user_data, s32 res, u32 cflags)
 {
 	bool filled;
 
+	if (defer) {
+		unsigned int length = ARRAY_SIZE(ctx->submit_state.cqes);
+		struct io_uring_cqe *cqe;
+
+		lockdep_assert_held(&ctx->uring_lock);
+
+		if (ctx->submit_state.cqes_count == length) {
+			io_cq_lock(ctx);
+			__io_flush_post_cqes(ctx);
+			/* no need to flush - flush is deferred */
+			spin_unlock(&ctx->completion_lock);
+		}
+
+		cqe  = ctx->submit_state.cqes + ctx->submit_state.cqes_count++;
+		cqe->user_data = user_data;
+		cqe->res = res;
+		cqe->flags = cflags;
+		return true;
+	}
 	io_cq_lock(ctx);
 	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
 	io_cq_unlock_post(ctx);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index bfe1b5488c25..979a223286bd 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -31,7 +31,7 @@ int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked);
 int io_run_local_work(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
-bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
+bool io_post_aux_cqe(struct io_ring_ctx *ctx, bool defer, u64 user_data, s32 res, u32 cflags);
 bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index afb543aab9f6..c5e831e3dcfc 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -23,7 +23,7 @@ struct io_msg {
 	u32 flags;
 };
 
-static int io_msg_ring_data(struct io_kiocb *req)
+static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_ring_ctx *target_ctx = req->file->private_data;
 	struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
@@ -31,7 +31,8 @@ static int io_msg_ring_data(struct io_kiocb *req)
 	if (msg->src_fd || msg->dst_fd || msg->flags)
 		return -EINVAL;
 
-	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
+	if (io_post_aux_cqe(target_ctx, false,
+			    msg->user_data, msg->len, 0))
 		return 0;
 
 	return -EOVERFLOW;
@@ -116,7 +117,8 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
 	 * completes with -EOVERFLOW, then the sender must ensure that a
 	 * later IORING_OP_MSG_RING delivers the message.
 	 */
-	if (!io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
+	if (!io_post_aux_cqe(target_ctx, false,
+			     msg->user_data, msg->len, 0))
 		ret = -EOVERFLOW;
 out_unlock:
 	io_double_unlock_ctx(ctx, target_ctx, issue_flags);
@@ -153,7 +155,7 @@ int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
 
 	switch (msg->cmd) {
 	case IORING_MSG_DATA:
-		ret = io_msg_ring_data(req);
+		ret = io_msg_ring_data(req, issue_flags);
 		break;
 	case IORING_MSG_SEND_FD:
 		ret = io_msg_send_fd(req, issue_flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index a1a0b8f223e0..8c5154b05344 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -592,8 +592,8 @@ static inline void io_recv_prep_retry(struct io_kiocb *req)
  * Returns true if it is actually finished, or false if it should run
  * again (for multishot).
  */
-static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
-				  unsigned int cflags, bool mshot_finished)
+static inline bool io_recv_finish(struct io_kiocb *req, unsigned int issue_flags,
+				  int *ret, unsigned int cflags, bool mshot_finished)
 {
 	if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
 		io_req_set_res(req, *ret, cflags);
@@ -602,8 +602,8 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 	}
 
 	if (!mshot_finished) {
-		if (io_post_aux_cqe(req->ctx, req->cqe.user_data, *ret,
-				    cflags | IORING_CQE_F_MORE)) {
+		if (io_post_aux_cqe(req->ctx, issue_flags & IO_URING_F_COMPLETE_DEFER,
+				    req->cqe.user_data, *ret, cflags | IORING_CQE_F_MORE)) {
 			io_recv_prep_retry(req);
 			return false;
 		}
@@ -801,7 +801,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
 	if (kmsg->msg.msg_inq)
 		cflags |= IORING_CQE_F_SOCK_NONEMPTY;
 
-	if (!io_recv_finish(req, &ret, cflags, mshot_finished))
+	if (!io_recv_finish(req, issue_flags, &ret, cflags, mshot_finished))
 		goto retry_multishot;
 
 	if (mshot_finished) {
@@ -900,7 +900,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 	if (msg.msg_inq)
 		cflags |= IORING_CQE_F_SOCK_NONEMPTY;
 
-	if (!io_recv_finish(req, &ret, cflags, ret <= 0))
+	if (!io_recv_finish(req, issue_flags, &ret, cflags, ret <= 0))
 		goto retry_multishot;
 
 	return ret;
@@ -1323,7 +1323,8 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret < 0)
 		return ret;
-	if (io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE))
+	if (io_post_aux_cqe(ctx, issue_flags & IO_URING_F_COMPLETE_DEFER,
+			    req->cqe.user_data, ret, IORING_CQE_F_MORE))
 		goto retry;
 	return -ECANCELED;
 }
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 2b77d18a67a7..c4865dd58862 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -245,7 +245,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
 			__poll_t mask = mangle_poll(req->cqe.res &
 						    req->apoll_events);
 
-			if (!io_post_aux_cqe(ctx, req->cqe.user_data,
+			if (!io_post_aux_cqe(ctx, *locked, req->cqe.user_data,
 					     mask, IORING_CQE_F_MORE))
 				return -ECANCELED;
 		} else {
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 133608200769..f37cdd8cfc95 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -170,10 +170,10 @@ static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
 		if (prsrc->tag) {
 			if (ctx->flags & IORING_SETUP_IOPOLL) {
 				mutex_lock(&ctx->uring_lock);
-				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
+				io_post_aux_cqe(ctx, false, prsrc->tag, 0, 0);
 				mutex_unlock(&ctx->uring_lock);
 			} else {
-				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
+				io_post_aux_cqe(ctx, false, prsrc->tag, 0, 0);
 			}
 		}
 
-- 
2.30.2



* [PATCH for-next 10/10] io_uring: allow multishot polled reqs to defer completion
  2022-11-21 10:03 [PATCH for-next 00/10] io_uring: batch multishot completions Dylan Yudaken
                   ` (8 preceding siblings ...)
  2022-11-21 10:03 ` [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion Dylan Yudaken
@ 2022-11-21 10:03 ` Dylan Yudaken
  9 siblings, 0 replies; 13+ messages in thread
From: Dylan Yudaken @ 2022-11-21 10:03 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Until now there was no reason for multishot polled requests to defer
completions, as there was no functional difference. Now that deferral is
supported, this actually defers the completions, for a performance win.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5c240d01278a..2e12bddcfb2c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1821,7 +1821,7 @@ int io_poll_issue(struct io_kiocb *req, bool *locked)
 	io_tw_lock(req->ctx, locked);
 	if (unlikely(req->task->flags & PF_EXITING))
 		return -EFAULT;
-	return io_issue_sqe(req, IO_URING_F_NONBLOCK);
+	return io_issue_sqe(req, IO_URING_F_NONBLOCK | IO_URING_F_COMPLETE_DEFER);
 }
 
 struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
-- 
2.30.2



* Re: [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion
  2022-11-21 10:03 ` [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion Dylan Yudaken
@ 2022-11-21 16:55   ` Jens Axboe
  2022-11-21 17:31   ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-11-21 16:55 UTC (permalink / raw)
  To: Dylan Yudaken, Pavel Begunkov; +Cc: io-uring, kernel-team

On 11/21/22 3:03 AM, Dylan Yudaken wrote:
> Use the just introduced deferred post cqe completion state when possible
> in io_post_aux_cqe.
> 
> Signed-off-by: Dylan Yudaken <[email protected]>
> ---
>  io_uring/io_uring.c | 21 ++++++++++++++++++++-
>  io_uring/io_uring.h |  2 +-
>  io_uring/msg_ring.c | 10 ++++++----
>  io_uring/net.c      | 15 ++++++++-------
>  io_uring/poll.c     |  2 +-
>  io_uring/rsrc.c     |  4 ++--
>  6 files changed, 38 insertions(+), 16 deletions(-)
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index c797f9a75dfe..5c240d01278a 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -845,11 +845,30 @@ static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
>  	state->cqes_count = 0;
>  }
>  
> -bool io_post_aux_cqe(struct io_ring_ctx *ctx,
> +bool io_post_aux_cqe(struct io_ring_ctx *ctx, bool defer,
>  		     u64 user_data, s32 res, u32 cflags)
>  {
>  	bool filled;
>  
> +	if (defer) {
> +		unsigned int length = ARRAY_SIZE(ctx->submit_state.cqes);
> +		struct io_uring_cqe *cqe;
> +
> +		lockdep_assert_held(&ctx->uring_lock);
> +
> +		if (ctx->submit_state.cqes_count == length) {
> +			io_cq_lock(ctx);
> +			__io_flush_post_cqes(ctx);
> +			/* no need to flush - flush is deferred */
> +			spin_unlock(&ctx->completion_lock);
> +		}
> +
> +		cqe  = ctx->submit_state.cqes + ctx->submit_state.cqes_count++;
> +		cqe->user_data = user_data;
> +		cqe->res = res;
> +		cqe->flags = cflags;
> +		return true;
> +	}
>  	io_cq_lock(ctx);
>  	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
>  	io_cq_unlock_post(ctx);

Seems like this would be cleaner with a separate helper, making that decision
in the caller. For the ones that just pass false that is trivial of course;
in the other spots, just gate it on whether the ring is locked?
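
(One possible shape of that - just a sketch of the suggestion with hypothetical
helper names, not a tested patch:)

/* Sketch: keep io_post_aux_cqe() as the immediate-post path and add a wrapper
 * that picks deferral in one place; callers pass whether the ring is locked.
 * The names io_aux_cqe() and io_defer_aux_cqe() are hypothetical here. */
static inline bool io_aux_cqe(struct io_ring_ctx *ctx, bool defer,
			      u64 user_data, s32 res, u32 cflags)
{
	if (defer)
		/* batched path, requires ctx->uring_lock to be held */
		return io_defer_aux_cqe(ctx, user_data, res, cflags);
	return io_post_aux_cqe(ctx, user_data, res, cflags);
}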

-- 
Jens Axboe


* Re: [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion
  2022-11-21 10:03 ` [PATCH for-next 09/10] io_uring: allow io_post_aux_cqe to defer completion Dylan Yudaken
  2022-11-21 16:55   ` Jens Axboe
@ 2022-11-21 17:31   ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-11-21 17:31 UTC (permalink / raw)
  To: Dylan Yudaken, Pavel Begunkov; +Cc: io-uring, kernel-team

On 11/21/22 3:03 AM, Dylan Yudaken wrote:
> diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
> index afb543aab9f6..c5e831e3dcfc 100644
> --- a/io_uring/msg_ring.c
> +++ b/io_uring/msg_ring.c
> @@ -23,7 +23,7 @@ struct io_msg {
>  	u32 flags;
>  };
>  
> -static int io_msg_ring_data(struct io_kiocb *req)
> +static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
>  {
>  	struct io_ring_ctx *target_ctx = req->file->private_data;
>  	struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
> @@ -31,7 +31,8 @@ static int io_msg_ring_data(struct io_kiocb *req)
>  	if (msg->src_fd || msg->dst_fd || msg->flags)
>  		return -EINVAL;
>  
> -	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
> +	if (io_post_aux_cqe(target_ctx, false,
> +			    msg->user_data, msg->len, 0))
>  		return 0;
>  
>  	return -EOVERFLOW;
> @@ -116,7 +117,8 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
>  	 * completes with -EOVERFLOW, then the sender must ensure that a
>  	 * later IORING_OP_MSG_RING delivers the message.
>  	 */
> -	if (!io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
> +	if (!io_post_aux_cqe(target_ctx, false,
> +			     msg->user_data, msg->len, 0))
>  		ret = -EOVERFLOW;
>  out_unlock:
>  	io_double_unlock_ctx(ctx, target_ctx, issue_flags);
> @@ -153,7 +155,7 @@ int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
>  
>  	switch (msg->cmd) {
>  	case IORING_MSG_DATA:
> -		ret = io_msg_ring_data(req);
> +		ret = io_msg_ring_data(req, issue_flags);
>  		break;
>  	case IORING_MSG_SEND_FD:
>  		ret = io_msg_send_fd(req, issue_flags);

This is a bit odd - either we can drop this, or it should be wired up for
defer?

-- 
Jens Axboe

