public inbox for [email protected]
 help / color / mirror / Atom feed
* [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
@ 2024-10-25 12:22 Ming Lei
  2024-10-25 12:22 ` [PATCH V8 1/7] io_uring: add io_link_req() helper Ming Lei
                   ` (7 more replies)
  0 siblings, 8 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

The 1st 3 patches are cleanup, and prepare for adding sqe group.

The 4th patch supports generic sqe group which is like link chain, but
allows each sqe in group to be issued in parallel and the group shares
same IO_LINK & IO_DRAIN boundary, so N:M dependency can be supported with
sqe group & io link together.

The 5th & 6th patches supports to lease other subsystem's kbuf to
io_uring for use in sqe group wide.

The 7th patch supports ublk zero copy based on io_uring sqe group &
leased kbuf.

Tests:

1) pass liburing test
- make runtests

2) write/pass sqe group test case and sqe provide buffer case:

https://github.com/ming1/liburing/tree/uring_group

- covers related sqe flags combination and linking groups, both nop and
one multi-destination file copy.

- cover failure handling test: fail leader IO or member IO in both single
  group and linked groups, which is done in each sqe flags combination
  test

- cover io_uring with leased group kbuf by adding ublk-loop-zc

V8:
	- simplify & clean up group request completion, don't reuse 
	SQE_GROUP as state; meantime improve document; now group
	implementation is quite clean
	- handle short read/recv correctly by zeroing out the remained
	  part(Pavel)
	- fix one group leader reference(Uday Shankar)
	- only allow ublk provide buffer command in case of zc(Uday Shankar)

V7:
	- remove dead code in sqe group support(Pavel)
	- fail single group request(Pavel)
	- remove IORING_PROVIDE_GROUP_KBUF(Pavel)
	- remove REQ_F_SQE_GROUP_DEP(Pavel)
	- rename as leasing buffer
	- improve commit log
	- map group member's IOSQE_IO_DRAIN to GROUP_KBUF, which
	aligns with buffer select use, and it means that io_uring starts
	to support leased kbuf from other subsystem for group member
	requests only

V6:
	- follow Pavel's suggestion to disallow IOSQE_CQE_SKIP_SUCCESS &
	  LINK_TIMEOUT
	- kill __io_complete_group_member() (Pavel)
	- simplify link failure handling (Pavel)
	- move members' queuing out of completion lock (Pavel)
	- cleanup group io complete handler
	- add more comment
	- add ublk zc into liburing test for covering
	  IOSQE_SQE_GROUP & IORING_PROVIDE_GROUP_KBUF 

V5:
	- follow Pavel's suggestion to minimize change on io_uring fast code
	  path: sqe group code is called in by single 'if (unlikely())' from
	  both issue & completion code path

	- simplify & re-write group request completion
		avoid to touch io-wq code by completing group leader via tw
		directly, just like ->task_complete

		re-write group member & leader completion handling, one
		simplification is always to free leader via the last member

		simplify queueing group members, not support issuing leader
		and members in parallel

	- fail the whole group if IO_*LINK & IO_DRAIN is set on group
	  members, and test code to cover this change

	- misc cleanup

V4:
	- address most comments from Pavel
	- fix request double free
	- don't use io_req_commit_cqe() in io_req_complete_defer()
	- make members' REQ_F_INFLIGHT discoverable
	- use common assembling check in submission code path
	- drop patch 3 and don't move REQ_F_CQE_SKIP out of io_free_req()
	- don't set .accept_group_kbuf for net send zc, in which members
	  need to be queued after buffer notification is got, and can be
	  enabled in future
	- add .grp_leader field via union, and share storage with .grp_link
	- move .grp_refs into one hole of io_kiocb, so that one extra
	cacheline isn't needed for io_kiocb
	- cleanup & document improvement

V3:
	- add IORING_FEAT_SQE_GROUP
	- simplify group completion, and minimize change on io_req_complete_defer()
	- simplify & cleanup io_queue_group_members()
	- fix many failure handling issues
	- cover failure handling code in added liburing tests
	- remove RFC

V2:
	- add generic sqe group, suggested by Kevin Wolf
	- add REQ_F_SQE_GROUP_DEP which is based on IOSQE_SQE_GROUP, for sharing
	  kernel resource in group wide, suggested by Kevin Wolf
	- remove sqe ext flag, and use the last bit for IOSQE_SQE_GROUP(Pavel),
	in future we still can extend sqe flags with one uring context flag
	- initialize group requests via submit state pattern, suggested by Pavel
	- all kinds of cleanup & bug fixes

Ming Lei (7):
  io_uring: add io_link_req() helper
  io_uring: add io_submit_fail_link() helper
  io_uring: add helper of io_req_commit_cqe()
  io_uring: support SQE group
  io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  io_uring/uring_cmd: support leasing device kernel buffer to io_uring
  ublk: support leasing io buffer to io_uring

 drivers/block/ublk_drv.c       | 159 +++++++++++++-
 include/linux/io_uring/cmd.h   |   7 +
 include/linux/io_uring_types.h |  58 +++++
 include/uapi/linux/io_uring.h  |   4 +
 include/uapi/linux/ublk_cmd.h  |  11 +-
 io_uring/io_uring.c            | 389 ++++++++++++++++++++++++++++++---
 io_uring/io_uring.h            |  11 +
 io_uring/kbuf.c                |  58 +++++
 io_uring/kbuf.h                |  31 +++
 io_uring/net.c                 |  25 ++-
 io_uring/rw.c                  |  26 ++-
 io_uring/timeout.c             |   6 +
 io_uring/uring_cmd.c           |  13 ++
 13 files changed, 750 insertions(+), 48 deletions(-)

-- 
2.46.0


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH V8 1/7] io_uring: add io_link_req() helper
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-25 12:22 ` [PATCH V8 2/7] io_uring: add io_submit_fail_link() helper Ming Lei
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

Add io_link_req() helper, so that io_submit_sqe() can become more
readable.

Signed-off-by: Ming Lei <[email protected]>
---
 io_uring/io_uring.c | 41 +++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 58b401900b41..02f7dd58d44d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2155,19 +2155,11 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 	return 0;
 }
 
-static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			 const struct io_uring_sqe *sqe)
-	__must_hold(&ctx->uring_lock)
+/*
+ * Return NULL if nothing to be queued, otherwise return request for queueing */
+static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
+				    struct io_kiocb *req)
 {
-	struct io_submit_link *link = &ctx->submit_state.link;
-	int ret;
-
-	ret = io_init_req(ctx, req, sqe);
-	if (unlikely(ret))
-		return io_submit_fail_init(sqe, req, ret);
-
-	trace_io_uring_submit_req(req);
-
 	/*
 	 * If we already have a head request, queue this one for async
 	 * submittal once the head completes. If we don't have a head but
@@ -2181,7 +2173,7 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		link->last = req;
 
 		if (req->flags & IO_REQ_LINK_FLAGS)
-			return 0;
+			return NULL;
 		/* last request of the link, flush it */
 		req = link->head;
 		link->head = NULL;
@@ -2197,9 +2189,30 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 fallback:
 			io_queue_sqe_fallback(req);
 		}
-		return 0;
+		return NULL;
 	}
+	return req;
+}
+
+static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			 const struct io_uring_sqe *sqe)
+	__must_hold(&ctx->uring_lock)
+{
+	struct io_submit_link *link = &ctx->submit_state.link;
+	int ret;
 
+	ret = io_init_req(ctx, req, sqe);
+	if (unlikely(ret))
+		return io_submit_fail_init(sqe, req, ret);
+
+	trace_io_uring_submit_req(req);
+
+	if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
+				    REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
+		req = io_link_sqe(link, req);
+		if (!req)
+			return 0;
+	}
 	io_queue_sqe(req);
 	return 0;
 }
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 2/7] io_uring: add io_submit_fail_link() helper
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
  2024-10-25 12:22 ` [PATCH V8 1/7] io_uring: add io_link_req() helper Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-25 12:22 ` [PATCH V8 3/7] io_uring: add helper of io_req_commit_cqe() Ming Lei
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

Add io_submit_fail_link() helper and put linking fail logic into this
helper.

This way simplifies io_submit_fail_init(), and becomes easier to add
sqe group failing logic.

Signed-off-by: Ming Lei <[email protected]>
---
 io_uring/io_uring.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 02f7dd58d44d..749ecc18049d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2118,22 +2118,17 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return def->prep(req, sqe);
 }
 
-static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+static __cold int io_submit_fail_link(struct io_submit_link *link,
 				      struct io_kiocb *req, int ret)
 {
-	struct io_ring_ctx *ctx = req->ctx;
-	struct io_submit_link *link = &ctx->submit_state.link;
 	struct io_kiocb *head = link->head;
 
-	trace_io_uring_req_failed(sqe, req, ret);
-
 	/*
 	 * Avoid breaking links in the middle as it renders links with SQPOLL
 	 * unusable. Instead of failing eagerly, continue assembling the link if
 	 * applicable and mark the head with REQ_F_FAIL. The link flushing code
 	 * should find the flag and handle the rest.
 	 */
-	req_fail_link_node(req, ret);
 	if (head && !(head->flags & REQ_F_FAIL))
 		req_fail_link_node(head, -ECANCELED);
 
@@ -2152,9 +2147,24 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 	else
 		link->head = req;
 	link->last = req;
+
 	return 0;
 }
 
+static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
+				      struct io_kiocb *req, int ret)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_submit_link *link = &ctx->submit_state.link;
+
+	trace_io_uring_req_failed(sqe, req, ret);
+
+	req_fail_link_node(req, ret);
+
+	/* cover both linked and non-linked request */
+	return io_submit_fail_link(link, req, ret);
+}
+
 /*
  * Return NULL if nothing to be queued, otherwise return request for queueing */
 static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 3/7] io_uring: add helper of io_req_commit_cqe()
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
  2024-10-25 12:22 ` [PATCH V8 1/7] io_uring: add io_link_req() helper Ming Lei
  2024-10-25 12:22 ` [PATCH V8 2/7] io_uring: add io_submit_fail_link() helper Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-25 12:22 ` [PATCH V8 4/7] io_uring: support SQE group Ming Lei
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

Add helper of io_req_commit_cqe() for simplifying
__io_submit_flush_completions() a bit.

No functional change, and the added helper will be reused in sqe group
code with same lock rule.

Signed-off-by: Ming Lei <[email protected]>
---
 io_uring/io_uring.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 749ecc18049d..33856560ff87 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -898,6 +898,20 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
 	return posted;
 }
 
+static __always_inline void io_req_commit_cqe(struct io_ring_ctx *ctx,
+		struct io_kiocb *req)
+{
+	if (unlikely(!io_fill_cqe_req(ctx, req))) {
+		if (ctx->lockless_cq) {
+			spin_lock(&ctx->completion_lock);
+			io_req_cqe_overflow(req);
+			spin_unlock(&ctx->completion_lock);
+		} else {
+			io_req_cqe_overflow(req);
+		}
+	}
+}
+
 static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -1436,16 +1450,8 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
 
-		if (!(req->flags & REQ_F_CQE_SKIP) &&
-		    unlikely(!io_fill_cqe_req(ctx, req))) {
-			if (ctx->lockless_cq) {
-				spin_lock(&ctx->completion_lock);
-				io_req_cqe_overflow(req);
-				spin_unlock(&ctx->completion_lock);
-			} else {
-				io_req_cqe_overflow(req);
-			}
-		}
+		if (!(req->flags & REQ_F_CQE_SKIP))
+			io_req_commit_cqe(ctx, req);
 	}
 	__io_cq_unlock_post(ctx);
 
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 4/7] io_uring: support SQE group
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
                   ` (2 preceding siblings ...)
  2024-10-25 12:22 ` [PATCH V8 3/7] io_uring: add helper of io_req_commit_cqe() Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-29  0:12   ` Jens Axboe
  2024-10-31 21:24   ` Jens Axboe
  2024-10-25 12:22 ` [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF Ming Lei
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei, Kevin Wolf

SQE group is defined as one chain of SQEs starting with the first SQE that
has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
doesn't have it set, and it is similar with chain of linked SQEs.

Not like linked SQEs, each sqe is issued after the previous one is
completed. All SQEs in one group can be submitted in parallel. To simplify
the implementation from beginning, all members are queued after the leader
is completed, however, this way may be changed and leader and members may
be issued concurrently in future.

The 1st SQE is group leader, and the other SQEs are group member. The whole
group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
the two flags can't be set for group members. For the sake of
simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.

When the group is in one link chain, this group isn't submitted until the
previous SQE or group is completed. And the following SQE or group can't
be started if this group isn't completed. Failure from any group member will
fail the group leader, then the link chain can be terminated.

When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
group leader only, we respect IO_DRAIN by always completing group leader as
the last one in the group. Meantime it is natural to post leader's CQE
as the last one from application viewpoint.

Working together with IOSQE_IO_LINK, SQE group provides flexible way to
support N:M dependency, such as:

- group A is chained with group B together
- group A has N SQEs
- group B has M SQEs

then M SQEs in group B depend on N SQEs in group A.

N:M dependency can support some interesting use cases in efficient way:

1) read from multiple files, then write the read data into single file

2) read from single file, and write the read data into multiple files

3) write same data into multiple files, and read data from multiple files and
compare if correct data is written

Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
extend sqe->flags with io_uring context flag, such as use __pad3 for
non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.

Suggested-by: Kevin Wolf <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
---
 include/linux/io_uring_types.h |  18 ++
 include/uapi/linux/io_uring.h  |   4 +
 io_uring/io_uring.c            | 306 +++++++++++++++++++++++++++++++--
 io_uring/io_uring.h            |   6 +
 io_uring/timeout.c             |   6 +
 5 files changed, 324 insertions(+), 16 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 6d3ee71bd832..d524be7f6b35 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -201,6 +201,8 @@ struct io_submit_state {
 	/* batch completion logic */
 	struct io_wq_work_list	compl_reqs;
 	struct io_submit_link	link;
+	/* points to current group */
+	struct io_submit_link	group;
 
 	bool			plug_started;
 	bool			need_plug;
@@ -436,6 +438,7 @@ enum {
 	REQ_F_FORCE_ASYNC_BIT	= IOSQE_ASYNC_BIT,
 	REQ_F_BUFFER_SELECT_BIT	= IOSQE_BUFFER_SELECT_BIT,
 	REQ_F_CQE_SKIP_BIT	= IOSQE_CQE_SKIP_SUCCESS_BIT,
+	REQ_F_SQE_GROUP_BIT	= IOSQE_SQE_GROUP_BIT,
 
 	/* first byte is taken by user flags, shift it to not overlap */
 	REQ_F_FAIL_BIT		= 8,
@@ -465,6 +468,7 @@ enum {
 	REQ_F_BL_EMPTY_BIT,
 	REQ_F_BL_NO_RECYCLE_BIT,
 	REQ_F_BUFFERS_COMMIT_BIT,
+	REQ_F_SQE_GROUP_LEADER_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
@@ -488,6 +492,8 @@ enum {
 	REQ_F_BUFFER_SELECT	= IO_REQ_FLAG(REQ_F_BUFFER_SELECT_BIT),
 	/* IOSQE_CQE_SKIP_SUCCESS */
 	REQ_F_CQE_SKIP		= IO_REQ_FLAG(REQ_F_CQE_SKIP_BIT),
+	/* IOSQE_SQE_GROUP */
+	REQ_F_SQE_GROUP		= IO_REQ_FLAG(REQ_F_SQE_GROUP_BIT),
 
 	/* fail rest of links */
 	REQ_F_FAIL		= IO_REQ_FLAG(REQ_F_FAIL_BIT),
@@ -541,6 +547,8 @@ enum {
 	REQ_F_BL_NO_RECYCLE	= IO_REQ_FLAG(REQ_F_BL_NO_RECYCLE_BIT),
 	/* buffer ring head needs incrementing on put */
 	REQ_F_BUFFERS_COMMIT	= IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
+	/* sqe group lead */
+	REQ_F_SQE_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -643,6 +651,8 @@ struct io_kiocb {
 	void				*async_data;
 	/* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
 	atomic_t			poll_refs;
+	/* reference for group leader request */
+	int				grp_refs;
 	struct io_kiocb			*link;
 	/* custom credentials, valid IFF REQ_F_CREDS is set */
 	const struct cred		*creds;
@@ -652,6 +662,14 @@ struct io_kiocb {
 		u64			extra1;
 		u64			extra2;
 	} big_cqe;
+
+	union {
+		/* links all group members for leader */
+		struct io_kiocb			*grp_link;
+
+		/* points to group leader for member */
+		struct io_kiocb			*grp_leader;
+	};
 };
 
 struct io_overflow_cqe {
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 86cb385fe0b5..f298dd5fbaa9 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -124,6 +124,7 @@ enum io_uring_sqe_flags_bit {
 	IOSQE_ASYNC_BIT,
 	IOSQE_BUFFER_SELECT_BIT,
 	IOSQE_CQE_SKIP_SUCCESS_BIT,
+	IOSQE_SQE_GROUP_BIT,
 };
 
 /*
@@ -143,6 +144,8 @@ enum io_uring_sqe_flags_bit {
 #define IOSQE_BUFFER_SELECT	(1U << IOSQE_BUFFER_SELECT_BIT)
 /* don't post CQE if request succeeded */
 #define IOSQE_CQE_SKIP_SUCCESS	(1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
+/* defines sqe group */
+#define IOSQE_SQE_GROUP		(1U << IOSQE_SQE_GROUP_BIT)
 
 /*
  * io_uring_setup() flags
@@ -554,6 +557,7 @@ struct io_uring_params {
 #define IORING_FEAT_REG_REG_RING	(1U << 13)
 #define IORING_FEAT_RECVSEND_BUNDLE	(1U << 14)
 #define IORING_FEAT_MIN_TIMEOUT		(1U << 15)
+#define IORING_FEAT_SQE_GROUP		(1U << 16)
 
 /*
  * io_uring_register(2) opcodes and arguments
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 33856560ff87..59e9a01319de 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -112,14 +112,15 @@
 			  IOSQE_IO_HARDLINK | IOSQE_ASYNC)
 
 #define SQE_VALID_FLAGS	(SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
-			IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+			IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS | \
+			IOSQE_SQE_GROUP)
 
 #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
 				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
 				REQ_F_ASYNC_DATA)
 
 #define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
-				 IO_REQ_CLEAN_FLAGS)
+				 REQ_F_SQE_GROUP | IO_REQ_CLEAN_FLAGS)
 
 #define IO_TCTX_REFS_CACHE_NR	(1U << 10)
 
@@ -912,6 +913,129 @@ static __always_inline void io_req_commit_cqe(struct io_ring_ctx *ctx,
 	}
 }
 
+/* Can only be called after this request is issued */
+static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
+{
+	if (req->flags & REQ_F_SQE_GROUP) {
+		if (req_is_group_leader(req))
+			return req;
+		return req->grp_leader;
+	}
+	return NULL;
+}
+
+void io_fail_group_members(struct io_kiocb *req)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		if (!(member->flags & REQ_F_FAIL)) {
+			req_set_fail(member);
+			io_req_set_res(member, -ECANCELED, 0);
+		}
+		member = next;
+	}
+}
+
+static void io_queue_group_members(struct io_kiocb *req)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	if (!member)
+		return;
+
+	req->grp_link = NULL;
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		member->grp_leader = req;
+		if (unlikely(member->flags & REQ_F_FAIL)) {
+			io_req_task_queue_fail(member, member->cqe.res);
+		} else if (unlikely(req->flags & REQ_F_FAIL)) {
+			io_req_task_queue_fail(member, -ECANCELED);
+		} else {
+			io_req_task_queue(member);
+		}
+		member = next;
+	}
+}
+
+/* called only after the request is completed */
+static bool req_is_last_group_member(struct io_kiocb *req)
+{
+	return req->grp_leader != NULL;
+}
+
+static void io_complete_group_req(struct io_kiocb *req)
+{
+	struct io_kiocb *lead;
+
+	if (req_is_group_leader(req)) {
+		req->grp_refs -= 1;
+		return;
+	}
+
+	lead = get_group_leader(req);
+
+	/* member CQE needs to be posted first */
+	if (!(req->flags & REQ_F_CQE_SKIP))
+		io_req_commit_cqe(req->ctx, req);
+
+	/* Set leader as failed in case of any member failed */
+	if (unlikely((req->flags & REQ_F_FAIL)))
+		req_set_fail(lead);
+
+	WARN_ON_ONCE(lead->grp_refs <= 0);
+	if (!--lead->grp_refs) {
+		/*
+		 * We are the last member, and ->grp_leader isn't cleared,
+		 * so our leader can be found & freed with the last member
+		 */
+		if (!(lead->flags & REQ_F_CQE_SKIP))
+			io_req_commit_cqe(lead->ctx, lead);
+	} else {
+		/* we are done with the group now */
+		req->grp_leader = NULL;
+	}
+}
+
+enum group_mem {
+	GROUP_LEADER,
+	GROUP_LAST_MEMBER,
+	GROUP_OTHER_MEMBER,
+};
+
+static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
+					     struct io_kiocb **leader)
+{
+	/*
+	 * Group completion is done, so clear the flag for avoiding double
+	 * handling in case of io-wq
+	 */
+	req->flags &= ~REQ_F_SQE_GROUP;
+
+	if (req_is_group_leader(req)) {
+		/* Queue members now */
+		if (req->grp_link)
+			io_queue_group_members(req);
+		return GROUP_LEADER;
+	} else {
+		if (!req_is_last_group_member(req))
+			return GROUP_OTHER_MEMBER;
+
+		/*
+		 * Prepare for freeing leader which can only be found from
+		 * the last member
+		 */
+		*leader = req->grp_leader;
+		(*leader)->flags &= ~REQ_F_SQE_GROUP_LEADER;
+		req->grp_leader = NULL;
+		return GROUP_LAST_MEMBER;
+	}
+}
+
 static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -927,7 +1051,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 	 * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
 	 * the submitter task context, IOPOLL protects with uring_lock.
 	 */
-	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
+	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL) ||
+	    (req->flags & REQ_F_SQE_GROUP)) {
 		req->io_task_work.func = io_req_task_complete;
 		io_req_task_work_add(req);
 		return;
@@ -1411,6 +1536,27 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
 						    comp_list);
 
 		if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
+			if (req->flags & REQ_F_SQE_GROUP) {
+				struct io_kiocb *leader = NULL;
+				enum group_mem mem = io_prep_free_group_req(req, &leader);
+
+				if (mem == GROUP_LEADER) {
+					node = req->comp_list.next;
+					continue;
+				} else if (mem == GROUP_LAST_MEMBER) {
+					/*
+					 * Link leader to current request's next,
+					 * this way works because the iterator
+					 * always check the next node only.
+					 *
+					 * Be careful when you change the iterator
+					 * in future
+					 */
+					wq_stack_add_head(&leader->comp_list,
+							  &req->comp_list);
+				}
+			}
+
 			if (req->flags & REQ_F_REFCOUNT) {
 				node = req->comp_list.next;
 				if (!req_ref_put_and_test(req))
@@ -1450,8 +1596,16 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
 
-		if (!(req->flags & REQ_F_CQE_SKIP))
-			io_req_commit_cqe(ctx, req);
+		if (unlikely(req->flags & (REQ_F_CQE_SKIP | REQ_F_SQE_GROUP))) {
+			if (req->flags & REQ_F_SQE_GROUP) {
+				io_complete_group_req(req);
+				continue;
+			}
+
+			if (req->flags & REQ_F_CQE_SKIP)
+				continue;
+		}
+		io_req_commit_cqe(ctx, req);
 	}
 	__io_cq_unlock_post(ctx);
 
@@ -1661,8 +1815,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
 	struct io_kiocb *cur;
 
 	/* need original cached_sq_head, but it was increased for each req */
-	io_for_each_link(cur, req)
-		seq--;
+	io_for_each_link(cur, req) {
+		if (req_is_group_leader(cur))
+			seq -= cur->grp_refs;
+		else
+			seq--;
+	}
 	return seq;
 }
 
@@ -2124,6 +2282,67 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return def->prep(req, sqe);
 }
 
+static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
+				     struct io_kiocb *req)
+{
+	/*
+	 * Group chain is similar with link chain: starts with 1st sqe with
+	 * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
+	 */
+	if (group->head) {
+		struct io_kiocb *lead = group->head;
+
+		/*
+		 * Members can't be in link chain, can't be drained, but
+		 * the whole group can be linked or drained by setting
+		 * flags on group leader.
+		 *
+		 * IOSQE_CQE_SKIP_SUCCESS can't be set for member
+		 * for the sake of simplicity
+		 */
+		if (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN |
+				REQ_F_CQE_SKIP))
+			req_fail_link_node(lead, -EINVAL);
+
+		lead->grp_refs += 1;
+		group->last->grp_link = req;
+		group->last = req;
+
+		if (req->flags & REQ_F_SQE_GROUP)
+			return NULL;
+
+		req->grp_link = NULL;
+		req->flags |= REQ_F_SQE_GROUP;
+		group->head = NULL;
+
+		return lead;
+	} else {
+		if (WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP)))
+			return req;
+		group->head = req;
+		group->last = req;
+		req->grp_refs = 1;
+		req->flags |= REQ_F_SQE_GROUP_LEADER;
+		return NULL;
+	}
+}
+
+static __cold struct io_kiocb *io_submit_fail_group(
+		struct io_submit_link *link, struct io_kiocb *req)
+{
+	struct io_kiocb *lead = link->head;
+
+	/*
+	 * Instead of failing eagerly, continue assembling the group link
+	 * if applicable and mark the leader with REQ_F_FAIL. The group
+	 * flushing code should find the flag and handle the rest
+	 */
+	if (lead && !(lead->flags & REQ_F_FAIL))
+		req_fail_link_node(lead, -ECANCELED);
+
+	return io_group_sqe(link, req);
+}
+
 static __cold int io_submit_fail_link(struct io_submit_link *link,
 				      struct io_kiocb *req, int ret)
 {
@@ -2162,11 +2381,18 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_link *group = &ctx->submit_state.group;
 
 	trace_io_uring_req_failed(sqe, req, ret);
 
 	req_fail_link_node(req, ret);
 
+	if (group->head || (req->flags & REQ_F_SQE_GROUP)) {
+		req = io_submit_fail_group(group, req);
+		if (!req)
+			return 0;
+	}
+
 	/* cover both linked and non-linked request */
 	return io_submit_fail_link(link, req, ret);
 }
@@ -2210,11 +2436,29 @@ static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
 	return req;
 }
 
+static inline bool io_group_assembling(const struct io_submit_state *state,
+				       const struct io_kiocb *req)
+{
+	if (state->group.head || req->flags & REQ_F_SQE_GROUP)
+		return true;
+	return false;
+}
+
+/* Failed request is covered too */
+static inline bool io_link_assembling(const struct io_submit_state *state,
+				      const struct io_kiocb *req)
+{
+	if (state->link.head || (req->flags & (IO_REQ_LINK_FLAGS |
+				 REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
+		return true;
+	return false;
+}
+
 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			 const struct io_uring_sqe *sqe)
 	__must_hold(&ctx->uring_lock)
 {
-	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_state *state = &ctx->submit_state;
 	int ret;
 
 	ret = io_init_req(ctx, req, sqe);
@@ -2223,11 +2467,20 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 	trace_io_uring_submit_req(req);
 
-	if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
-				    REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
-		req = io_link_sqe(link, req);
-		if (!req)
-			return 0;
+	if (unlikely(io_link_assembling(state, req) ||
+		     io_group_assembling(state, req))) {
+		if (io_group_assembling(state, req)) {
+			req = io_group_sqe(&state->group, req);
+			if (!req)
+				return 0;
+		}
+
+		/* covers non-linked failed request too */
+		if (io_link_assembling(state, req)) {
+			req = io_link_sqe(&state->link, req);
+			if (!req)
+				return 0;
+		}
 	}
 	io_queue_sqe(req);
 	return 0;
@@ -2240,8 +2493,27 @@ static void io_submit_state_end(struct io_ring_ctx *ctx)
 {
 	struct io_submit_state *state = &ctx->submit_state;
 
-	if (unlikely(state->link.head))
-		io_queue_sqe_fallback(state->link.head);
+	if (unlikely(state->group.head || state->link.head)) {
+		/* the last member must set REQ_F_SQE_GROUP */
+		if (state->group.head) {
+			struct io_kiocb *lead = state->group.head;
+			struct io_kiocb *last = state->group.last;
+
+			/* fail group with single leader */
+			if (unlikely(last == lead))
+				req_fail_link_node(lead, -EINVAL);
+
+			last->grp_link = NULL;
+			if (state->link.head)
+				io_link_sqe(&state->link, lead);
+			else
+				io_queue_sqe_fallback(lead);
+		}
+
+		if (unlikely(state->link.head))
+			io_queue_sqe_fallback(state->link.head);
+	}
+
 	/* flush only after queuing links as they can generate completions */
 	io_submit_flush_completions(ctx);
 	if (state->plug_started)
@@ -2259,6 +2531,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 	state->submit_nr = max_ios;
 	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
+	state->group.head = NULL;
 }
 
 static void io_commit_sqring(struct io_ring_ctx *ctx)
@@ -3699,7 +3972,8 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
 			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
 			IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
 			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
-			IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_MIN_TIMEOUT;
+			IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_MIN_TIMEOUT |
+			IORING_FEAT_SQE_GROUP;
 
 	if (copy_to_user(params, p, sizeof(*p))) {
 		ret = -EFAULT;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 12e8fca73891..ab84b09505fe 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -72,6 +72,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+void io_fail_group_members(struct io_kiocb *req);
 
 struct file *io_file_get_normal(struct io_kiocb *req, int fd);
 struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
@@ -343,6 +344,11 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
 	lockdep_assert_held(&ctx->uring_lock);
 }
 
+static inline bool req_is_group_leader(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_SQE_GROUP_LEADER;
+}
+
 /*
  * Don't complete immediately but use deferred completion infrastructure.
  * Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 9973876d91b0..ed6c74f1a475 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -149,6 +149,8 @@ static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts)
 			res = link->cqe.res;
 		link->link = NULL;
 		io_req_set_res(link, res, 0);
+		if (req_is_group_leader(link))
+			io_fail_group_members(link);
 		io_req_task_complete(link, ts);
 		link = nxt;
 	}
@@ -543,6 +545,10 @@ static int __io_timeout_prep(struct io_kiocb *req,
 	if (is_timeout_link) {
 		struct io_submit_link *link = &req->ctx->submit_state.link;
 
+		/* so far disallow IO group link timeout */
+		if (req->ctx->submit_state.group.head)
+			return -EINVAL;
+
 		if (!link->head)
 			return -EINVAL;
 		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
                   ` (3 preceding siblings ...)
  2024-10-25 12:22 ` [PATCH V8 4/7] io_uring: support SQE group Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-29 16:47   ` Pavel Begunkov
  2024-10-25 12:22 ` [PATCH V8 6/7] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

SQE group introduces one new mechanism to share resource among one
group of requests, and all member requests can consume the resource
leased by group leader efficiently in parallel.

This patch uses the added SQE group to lease kernel buffer from group
leader(driver) to members(io_uring) in sqe group:

- this kernel buffer is owned by kernel device(driver), and has very
  short lifetime, such as, it is often aligned with block IO lifetime

- group leader leases the kernel buffer from driver to member requests
  of io_uring subsystem

- member requests uses the leased buffer to do FS or network IO, or
  more operations in future; IOSQE_IO_DRAIN bit isn't used for group
  member IO, so it is mapped to GROUP_KBUF; the actual use becomes
  very similar with buffer select.

- this kernel buffer is returned back after all member requests consume it

io_uring builtin provide/register buffer isn't one good match for this
use case:

- complicated dependency on add/remove buffer
  this buffer has to be added/removed to one global table by add/remove OPs,
  and all consumer OPs have to sync with the add/remove OPs; either
  consumer OPs have to by issued one by one with IO_LINK; or two extra
  syscall are added for one time of buffer consumption, this way slows
  down ublk io handling, and may lose zero copy value

- application becomes more complicated

- application may panic and the kernel buffer is left in io_uring, which
  complicates io_uring shutdown handling since returning back buffer
  needs to cowork with buffer owner

- big change is needed in io_uring provide/register buffer

- the requirement is just to lease the kernel buffer to io_uring subsystem for
  very short time, not necessary to move it into io_uring and make it global

This way looks a bit similar with kernel's pipe/splice, but there are some
important differences:

- splice is for transferring data between two FDs via pipe, and fd_out can
only read data from pipe, but data can't be written to; this feature can
lease buffer from group leader(driver subsystem) to members(io_uring subsystem),
so member request can write data to this buffer if the buffer direction is
allowed to write to.

- splice implements data transfer by moving pages between subsystem and
pipe, that means page ownership is transferred, and this way is one of the
most complicated thing of splice; this patch supports scenarios in which
the buffer can't be transferred, and buffer is only borrowed to member
requests for consumption, and is returned back after member requests
consume the leased buffer, so buffer lifetime is aligned with group leader
lifetime, and buffer lifetime is simplified a lot. Especially the buffer
is guaranteed to be returned back.

- splice can't run in async way basically

It can help to implement generic zero copy between device and related
operations, such as ublk, fuse, vdpa.

Signed-off-by: Ming Lei <[email protected]>
---
 include/linux/io_uring_types.h | 40 +++++++++++++++++++++++
 io_uring/io_uring.c            | 22 ++++++++++---
 io_uring/io_uring.h            |  5 +++
 io_uring/kbuf.c                | 58 ++++++++++++++++++++++++++++++++++
 io_uring/kbuf.h                | 31 ++++++++++++++++++
 io_uring/net.c                 | 25 ++++++++++++++-
 io_uring/rw.c                  | 26 ++++++++++++++-
 7 files changed, 201 insertions(+), 6 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index d524be7f6b35..890bcb5d0c26 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -6,6 +6,7 @@
 #include <linux/task_work.h>
 #include <linux/bitmap.h>
 #include <linux/llist.h>
+#include <linux/bvec.h>
 #include <uapi/linux/io_uring.h>
 
 enum {
@@ -39,6 +40,26 @@ enum io_uring_cmd_flags {
 	IO_URING_F_COMPAT		= (1 << 12),
 };
 
+struct io_uring_kernel_buf;
+typedef void (io_uring_buf_giveback_t) (const struct io_uring_kernel_buf *);
+
+/* kernel owned buffer, leased to io_uring OPs */
+struct io_uring_kernel_buf {
+	unsigned long		len;
+	unsigned short		nr_bvecs;
+	unsigned char		dir;	/* ITER_SOURCE or ITER_DEST */
+
+	/* offset in the 1st bvec */
+	unsigned int		offset;
+	const struct bio_vec	*bvec;
+
+	/* called when we are done with this buffer */
+	io_uring_buf_giveback_t	*grp_kbuf_ack;
+
+	/* private field, user don't touch it */
+	struct bio_vec		__bvec[];
+};
+
 struct io_wq_work_node {
 	struct io_wq_work_node *next;
 };
@@ -469,6 +490,7 @@ enum {
 	REQ_F_BL_NO_RECYCLE_BIT,
 	REQ_F_BUFFERS_COMMIT_BIT,
 	REQ_F_SQE_GROUP_LEADER_BIT,
+	REQ_F_GROUP_KBUF_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
@@ -549,6 +571,15 @@ enum {
 	REQ_F_BUFFERS_COMMIT	= IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
 	/* sqe group lead */
 	REQ_F_SQE_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
+	/*
+	 * Group leader leases kbuf to io_uring. Set for leader when the
+	 * leader starts to lease kbuf, and set for member in case that
+	 * the member needs to consume the group kbuf
+	 *
+	 * For group member, this flag is mapped from IOSQE_IO_DRAIN which
+	 * isn't used for group members
+	 */
+	REQ_F_GROUP_KBUF	= IO_REQ_FLAG(REQ_F_GROUP_KBUF_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -629,6 +660,15 @@ struct io_kiocb {
 		 * REQ_F_BUFFER_RING is set.
 		 */
 		struct io_buffer_list	*buf_list;
+
+		/*
+		 * store kernel buffer leased from sqe group lead, valid
+		 * IFF REQ_F_GROUP_KBUF is set
+		 *
+		 * The buffer meta is immutable since it is shared by
+		 * all member requests
+		 */
+		const struct io_uring_kernel_buf *grp_kbuf;
 	};
 
 	union {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 59e9a01319de..fcf58d5e698a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -117,7 +117,7 @@
 
 #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
 				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
-				REQ_F_ASYNC_DATA)
+				REQ_F_ASYNC_DATA | REQ_F_GROUP_KBUF)
 
 #define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
 				 REQ_F_SQE_GROUP | IO_REQ_CLEAN_FLAGS)
@@ -397,6 +397,8 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 
 static void io_clean_op(struct io_kiocb *req)
 {
+	if (req->flags & REQ_F_GROUP_KBUF)
+		io_drop_leased_grp_kbuf(req);
 	if (req->flags & REQ_F_BUFFER_SELECTED) {
 		spin_lock(&req->ctx->completion_lock);
 		io_kbuf_drop(req);
@@ -1022,6 +1024,12 @@ static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
 			io_queue_group_members(req);
 		return GROUP_LEADER;
 	} else {
+		/*
+		 * Clear GROUP_KBUF since we are done with leased group
+		 * buffer
+		 */
+		if (req->flags & REQ_F_GROUP_KBUF)
+			req->flags &= ~REQ_F_GROUP_KBUF;
 		if (!req_is_last_group_member(req))
 			return GROUP_OTHER_MEMBER;
 
@@ -2223,9 +2231,15 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
 			ctx->drain_disabled = true;
 		if (sqe_flags & IOSQE_IO_DRAIN) {
-			if (ctx->drain_disabled)
-				return io_init_fail_req(req, -EOPNOTSUPP);
-			io_init_req_drain(req);
+			/* IO_DRAIN is mapped to GROUP_KBUF for group members */
+			if (ctx->submit_state.group.head) {
+				req->flags &= ~REQ_F_IO_DRAIN;
+				req->flags |= REQ_F_GROUP_KBUF;
+			} else {
+				if (ctx->drain_disabled)
+					return io_init_fail_req(req, -EOPNOTSUPP);
+				io_init_req_drain(req);
+			}
 		}
 	}
 	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index ab84b09505fe..f83e7c13e679 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -349,6 +349,11 @@ static inline bool req_is_group_leader(struct io_kiocb *req)
 	return req->flags & REQ_F_SQE_GROUP_LEADER;
 }
 
+static inline bool req_is_group_member(struct io_kiocb *req)
+{
+	return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
+}
+
 /*
  * Don't complete immediately but use deferred completion infrastructure.
  * Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index d407576ddfb7..e86dacc7a822 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -838,3 +838,61 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
 	io_put_bl(ctx, bl);
 	return ret;
 }
+
+int io_lease_group_kbuf(struct io_kiocb *req,
+		const struct io_uring_kernel_buf *grp_kbuf)
+{
+	if (!(req->flags & REQ_F_SQE_GROUP_LEADER))
+		return -EINVAL;
+
+	if (req->flags & REQ_F_BUFFER_SELECT)
+		return -EINVAL;
+
+	if (!grp_kbuf->grp_kbuf_ack || !grp_kbuf->bvec)
+		return -EINVAL;
+
+	/*
+	 * Allow io_uring OPs to borrow this leased kbuf, which is returned
+	 * back by calling `grp_kbuf_ack` when the group leader is freed.
+	 *
+	 * Not like pipe/splice, this kernel buffer is always owned by the
+	 * provider, and has to be returned back.
+	 */
+	req->grp_kbuf = grp_kbuf;
+	req->flags |= REQ_F_GROUP_KBUF;
+	return 0;
+}
+
+int io_import_group_kbuf(struct io_kiocb *req, unsigned long buf_off,
+		unsigned int len, int dir, struct iov_iter *iter)
+{
+	struct io_kiocb *lead = req->grp_leader;
+	const struct io_uring_kernel_buf *kbuf;
+	unsigned long offset;
+
+	if (!req_is_group_member(req))
+		return -EINVAL;
+
+	if (!lead || !(lead->flags & REQ_F_GROUP_KBUF))
+		return -EINVAL;
+
+	kbuf = lead->grp_kbuf;
+	offset = kbuf->offset;
+
+	if (dir != kbuf->dir)
+		return -EINVAL;
+
+	if (unlikely(buf_off > kbuf->len))
+		return -EFAULT;
+
+	if (unlikely(len > kbuf->len - buf_off))
+		return -EFAULT;
+
+	offset += buf_off;
+	iov_iter_bvec(iter, dir, kbuf->bvec, kbuf->nr_bvecs, offset + len);
+
+	if (offset)
+		iov_iter_advance(iter, offset);
+
+	return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 36aadfe5ac00..d54cd4312db9 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -89,6 +89,11 @@ struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
 				      unsigned long bgid);
 int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
 
+int io_lease_group_kbuf(struct io_kiocb *req,
+		const struct io_uring_kernel_buf *grp_kbuf);
+int io_import_group_kbuf(struct io_kiocb *req, unsigned long buf_off,
+		unsigned int len, int dir, struct iov_iter *iter);
+
 static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
 {
 	/*
@@ -220,4 +225,30 @@ static inline unsigned int io_put_kbufs(struct io_kiocb *req, int len,
 {
 	return __io_put_kbufs(req, len, nbufs, issue_flags);
 }
+
+static inline bool io_use_leased_grp_kbuf(struct io_kiocb *req)
+{
+	/* can't use group kbuf in case of buffer select or fixed buffer */
+	if (req->flags & REQ_F_BUFFER_SELECT)
+		return false;
+
+	return req->flags & REQ_F_GROUP_KBUF;
+}
+
+static inline void io_drop_leased_grp_kbuf(struct io_kiocb *req)
+{
+	const struct io_uring_kernel_buf *gbuf = req->grp_kbuf;
+
+	if (gbuf)
+		gbuf->grp_kbuf_ack(gbuf);
+}
+
+/* zero remained bytes of kernel buffer for avoiding to leak daata */
+static inline void io_req_zero_remained(struct io_kiocb *req, struct iov_iter *iter)
+{
+	size_t left = iov_iter_count(iter);
+
+	if (iov_iter_rw(iter) == READ && left > 0)
+		iov_iter_zero(left, iter);
+}
 #endif
diff --git a/io_uring/net.c b/io_uring/net.c
index 2040195e33ab..c7d58a0c38c3 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -88,6 +88,13 @@ struct io_sr_msg {
  */
 #define MULTISHOT_MAX_RETRY	32
 
+#define user_ptr_to_u64(x) (		\
+{					\
+	typecheck(void __user *, (x));	\
+	(u64)(unsigned long)(x);	\
+}					\
+)
+
 int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown);
@@ -384,7 +391,7 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kmsg->msg.msg_name = &kmsg->addr;
 		kmsg->msg.msg_namelen = addr_len;
 	}
-	if (!io_do_buffer_select(req)) {
+	if (!io_do_buffer_select(req) && !io_use_leased_grp_kbuf(req)) {
 		ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
 				  &kmsg->msg.msg_iter);
 		if (unlikely(ret < 0))
@@ -599,6 +606,15 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		flags |= MSG_DONTWAIT;
 
+	if (io_use_leased_grp_kbuf(req)) {
+		ret = io_import_group_kbuf(req,
+					user_ptr_to_u64(sr->buf),
+					sr->len, ITER_SOURCE,
+					&kmsg->msg.msg_iter);
+		if (unlikely(ret))
+			return ret;
+	}
+
 retry_bundle:
 	if (io_do_buffer_select(req)) {
 		struct buf_sel_arg arg = {
@@ -889,6 +905,8 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 		*ret = IOU_STOP_MULTISHOT;
 	else
 		*ret = IOU_OK;
+	if (io_use_leased_grp_kbuf(req))
+		io_req_zero_remained(req, &kmsg->msg.msg_iter);
 	io_req_msg_cleanup(req, issue_flags);
 	return true;
 }
@@ -1161,6 +1179,11 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 			goto out_free;
 		}
 		sr->buf = NULL;
+	} else if (io_use_leased_grp_kbuf(req)) {
+		ret = io_import_group_kbuf(req, user_ptr_to_u64(sr->buf),
+				sr->len, ITER_DEST, &kmsg->msg.msg_iter);
+		if (unlikely(ret))
+			goto out_free;
 	}
 
 	kmsg->msg.msg_flags = 0;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 4bc0d762627d..5a2025d48804 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
 	if (io_rw_alloc_async(req))
 		return -ENOMEM;
 
-	if (!do_import || io_do_buffer_select(req))
+	if (!do_import || io_do_buffer_select(req) ||
+	    io_use_leased_grp_kbuf(req))
 		return 0;
 
 	rw = req->async_data;
@@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
 		}
 		req_set_fail(req);
 		req->cqe.res = res;
+		if (io_use_leased_grp_kbuf(req)) {
+			struct io_async_rw *io = req->async_data;
+
+			io_req_zero_remained(req, &io->iter);
+		}
 	}
 	return false;
 }
@@ -630,11 +636,16 @@ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
  */
 static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
 {
+	struct io_kiocb *req = cmd_to_io_kiocb(rw);
 	struct kiocb *kiocb = &rw->kiocb;
 	struct file *file = kiocb->ki_filp;
 	ssize_t ret = 0;
 	loff_t *ppos;
 
+	/* group buffer is kernel buffer and doesn't have userspace addr */
+	if (io_use_leased_grp_kbuf(req))
+		return -EOPNOTSUPP;
+
 	/*
 	 * Don't support polled IO through this interface, and we can't
 	 * support non-blocking either. For the latter, this just causes
@@ -841,6 +852,12 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
 		ret = io_import_iovec(ITER_DEST, req, io, issue_flags);
 		if (unlikely(ret < 0))
 			return ret;
+	} else if (io_use_leased_grp_kbuf(req)) {
+		ret = io_import_group_kbuf(req, rw->addr, rw->len, ITER_DEST,
+				&io->iter);
+		if (unlikely(ret))
+			return ret;
+		iov_iter_save_state(&io->iter, &io->iter_state);
 	}
 	ret = io_rw_init_file(req, FMODE_READ, READ);
 	if (unlikely(ret))
@@ -1024,6 +1041,13 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 	ssize_t ret, ret2;
 	loff_t *ppos;
 
+	if (io_use_leased_grp_kbuf(req)) {
+		ret = io_import_group_kbuf(req, rw->addr, rw->len, ITER_SOURCE,
+				&io->iter);
+		if (unlikely(ret))
+			return ret;
+	}
+
 	ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
 	if (unlikely(ret))
 		return ret;
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 6/7] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
                   ` (4 preceding siblings ...)
  2024-10-25 12:22 ` [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-25 12:22 ` [PATCH V8 7/7] ublk: support leasing io " Ming Lei
  2024-10-29 17:01 ` [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Pavel Begunkov
  7 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

Add API of io_uring_cmd_lease_kbuf() for driver to lease its kernel
buffer to io_uring.

The leased buffer can only be consumed by io_uring OPs in group wide,
and the uring_cmd has to be one group leader.

This way can support generic device zero copy over device buffer in
userspace:

- create one sqe group
- lease one device buffer to io_uring by the group leader of uring_cmd
- io_uring member OPs consume this kernel buffer by passing IOSQE_IO_DRAIN
  which isn't used for group member, and mapped to GROUP_KBUF.
- the kernel buffer is returned back after all member OPs are completed

Signed-off-by: Ming Lei <[email protected]>
---
 include/linux/io_uring/cmd.h |  7 +++++++
 io_uring/uring_cmd.c         | 13 +++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index c189d36ad55e..bdf7abfa0d8a 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -60,6 +60,8 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 /* Execute the request from a blocking context */
 void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
 
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_uring_kernel_buf *grp_kbuf);
 #else
 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			      struct iov_iter *iter, void *ioucmd)
@@ -82,6 +84,11 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
 {
 }
+static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_uring_kernel_buf *grp_kbuf)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 /*
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 6994f60d7ec7..2c9c2c60c6cd 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -15,6 +15,7 @@
 #include "alloc_cache.h"
 #include "rsrc.h"
 #include "uring_cmd.h"
+#include "kbuf.h"
 
 static struct uring_cache *io_uring_async_get(struct io_kiocb *req)
 {
@@ -175,6 +176,18 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_done);
 
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_uring_kernel_buf *grp_kbuf)
+{
+	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+	if (unlikely(ioucmd->flags & IORING_URING_CMD_FIXED))
+		return -EINVAL;
+
+	return io_lease_group_kbuf(req, grp_kbuf);
+}
+EXPORT_SYMBOL_GPL(io_uring_cmd_lease_kbuf);
+
 static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 				   const struct io_uring_sqe *sqe)
 {
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH V8 7/7] ublk: support leasing io buffer to io_uring
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
                   ` (5 preceding siblings ...)
  2024-10-25 12:22 ` [PATCH V8 6/7] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
@ 2024-10-25 12:22 ` Ming Lei
  2024-10-29 17:01 ` [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Pavel Begunkov
  7 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-25 12:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei

Suopport to lease block IO buffer for userpace to run io_uring operations(FS,
network IO), then ublk zero copy can be supported.

userspace code:

	git clone https://github.com/ublk-org/ublksrv.git -b uring_group

And both loop and nbd zero copy(io_uring send and send zc) are covered.

Performance improvement is quite obvious in big block size test, such as
'loop --buffered_io' perf is doubled in 64KB block test("loop/007 vs
loop/009").

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c      | 159 ++++++++++++++++++++++++++++++++--
 include/uapi/linux/ublk_cmd.h |  11 ++-
 2 files changed, 160 insertions(+), 10 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index a6c8e5cc6051..91d32ebcad0c 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -51,6 +51,8 @@
 /* private ioctl command mirror */
 #define UBLK_CMD_DEL_DEV_ASYNC	_IOC_NR(UBLK_U_CMD_DEL_DEV_ASYNC)
 
+#define UBLK_IO_PROVIDE_IO_BUF _IOC_NR(UBLK_U_IO_PROVIDE_IO_BUF)
+
 /* All UBLK_F_* have to be included into UBLK_F_ALL */
 #define UBLK_F_ALL (UBLK_F_SUPPORT_ZERO_COPY \
 		| UBLK_F_URING_CMD_COMP_IN_TASK \
@@ -71,6 +73,9 @@ struct ublk_rq_data {
 	struct llist_node node;
 
 	struct kref ref;
+
+	bool allocated_bvec;
+	struct io_uring_kernel_buf buf[0];
 };
 
 struct ublk_uring_cmd_pdu {
@@ -189,11 +194,15 @@ struct ublk_params_header {
 	__u32	types;
 };
 
+static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+		struct ublk_queue *ubq, int tag, size_t offset);
 static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
 
 static inline unsigned int ublk_req_build_flags(struct request *req);
 static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
 						   int tag);
+static void ublk_io_buf_giveback_cb(const struct io_uring_kernel_buf *buf);
+
 static inline bool ublk_dev_is_user_copy(const struct ublk_device *ub)
 {
 	return ub->dev_info.flags & UBLK_F_USER_COPY;
@@ -588,6 +597,11 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
 	return ublk_support_user_copy(ubq);
 }
 
+static inline bool ublk_support_zc(const struct ublk_queue *ubq)
+{
+	return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
+}
+
 static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
 		struct request *req)
 {
@@ -851,6 +865,71 @@ static size_t ublk_copy_user_pages(const struct request *req,
 	return done;
 }
 
+/*
+ * The built command buffer is immutable, so it is fine to feed it to
+ * concurrent io_uring provide buf commands
+ */
+static int ublk_init_zero_copy_buffer(struct request *req)
+{
+	struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+	struct io_uring_kernel_buf *imu = data->buf;
+	struct req_iterator rq_iter;
+	unsigned int nr_bvecs = 0;
+	struct bio_vec *bvec;
+	unsigned int offset;
+	struct bio_vec bv;
+
+	if (!ublk_rq_has_data(req))
+		goto exit;
+
+	rq_for_each_bvec(bv, req, rq_iter)
+		nr_bvecs++;
+
+	if (!nr_bvecs)
+		goto exit;
+
+	if (req->bio != req->biotail) {
+		int idx = 0;
+
+		bvec = kvmalloc_array(nr_bvecs, sizeof(struct bio_vec),
+				GFP_NOIO);
+		if (!bvec)
+			return -ENOMEM;
+
+		offset = 0;
+		rq_for_each_bvec(bv, req, rq_iter)
+			bvec[idx++] = bv;
+		data->allocated_bvec = true;
+	} else {
+		struct bio *bio = req->bio;
+
+		offset = bio->bi_iter.bi_bvec_done;
+		bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+	}
+	imu->bvec = bvec;
+	imu->nr_bvecs = nr_bvecs;
+	imu->offset = offset;
+	imu->len = blk_rq_bytes(req);
+	imu->dir = req_op(req) == REQ_OP_READ ? ITER_DEST : ITER_SOURCE;
+	imu->grp_kbuf_ack = ublk_io_buf_giveback_cb;
+
+	return 0;
+exit:
+	imu->bvec = NULL;
+	return 0;
+}
+
+static void ublk_deinit_zero_copy_buffer(struct request *req)
+{
+	struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+	struct io_uring_kernel_buf *imu = data->buf;
+
+	if (data->allocated_bvec) {
+		kvfree(imu->bvec);
+		data->allocated_bvec = false;
+	}
+}
+
 static inline bool ublk_need_map_req(const struct request *req)
 {
 	return ublk_rq_has_data(req) && req_op(req) == REQ_OP_WRITE;
@@ -862,13 +941,25 @@ static inline bool ublk_need_unmap_req(const struct request *req)
 	       (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_DRV_IN);
 }
 
-static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
+static int ublk_map_io(const struct ublk_queue *ubq, struct request *req,
 		struct ublk_io *io)
 {
 	const unsigned int rq_bytes = blk_rq_bytes(req);
 
-	if (ublk_support_user_copy(ubq))
+	if (ublk_support_user_copy(ubq)) {
+		if (ublk_support_zc(ubq)) {
+			int ret = ublk_init_zero_copy_buffer(req);
+
+			/*
+			 * The only failure is -ENOMEM for allocating providing
+			 * buffer command, return zero so that we can requeue
+			 * this req.
+			 */
+			if (unlikely(ret))
+				return 0;
+		}
 		return rq_bytes;
+	}
 
 	/*
 	 * no zero copy, we delay copy WRITE request data into ublksrv
@@ -886,13 +977,16 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
 }
 
 static int ublk_unmap_io(const struct ublk_queue *ubq,
-		const struct request *req,
+		struct request *req,
 		struct ublk_io *io)
 {
 	const unsigned int rq_bytes = blk_rq_bytes(req);
 
-	if (ublk_support_user_copy(ubq))
+	if (ublk_support_user_copy(ubq)) {
+		if (ublk_support_zc(ubq))
+			ublk_deinit_zero_copy_buffer(req);
 		return rq_bytes;
+	}
 
 	if (ublk_need_unmap_req(req)) {
 		struct iov_iter iter;
@@ -1038,6 +1132,7 @@ static inline void __ublk_complete_rq(struct request *req)
 
 	return;
 exit:
+	ublk_deinit_zero_copy_buffer(req);
 	blk_mq_end_request(req, res);
 }
 
@@ -1680,6 +1775,45 @@ static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
 	io_uring_cmd_mark_cancelable(cmd, issue_flags);
 }
 
+static void ublk_io_buf_giveback_cb(const struct io_uring_kernel_buf *buf)
+{
+	struct ublk_rq_data *data = container_of(buf, struct ublk_rq_data, buf[0]);
+	struct request *req = blk_mq_rq_from_pdu(data);
+	struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+	ublk_put_req_ref(ubq, req);
+}
+
+static int ublk_provide_io_buf(struct io_uring_cmd *cmd,
+		struct ublk_queue *ubq, int tag)
+{
+	struct ublk_device *ub = cmd->file->private_data;
+	struct ublk_rq_data *data;
+	struct request *req;
+
+	if (!ub)
+		return -EPERM;
+
+	req = __ublk_check_and_get_req(ub, ubq, tag, 0);
+	if (!req)
+		return -EINVAL;
+
+	pr_devel("%s: qid %d tag %u request bytes %u\n",
+			__func__, tag, ubq->q_id, blk_rq_bytes(req));
+
+	data = blk_mq_rq_to_pdu(req);
+
+	/*
+	 * io_uring guarantees that the callback will be called after
+	 * the provided buffer is consumed, and it is automatic removal
+	 * before this uring command is freed.
+	 *
+	 * This request won't be completed unless the callback is called,
+	 * so ublk module won't be unloaded too.
+	 */
+	return io_uring_cmd_lease_kbuf(cmd, data->buf);
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 			       unsigned int issue_flags,
 			       const struct ublksrv_io_cmd *ub_cmd)
@@ -1731,6 +1865,10 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 
 	ret = -EINVAL;
 	switch (_IOC_NR(cmd_op)) {
+	case UBLK_IO_PROVIDE_IO_BUF:
+		if (unlikely(!ublk_support_zc(ubq)))
+			goto out;
+		return ublk_provide_io_buf(cmd, ubq, tag);
 	case UBLK_IO_FETCH_REQ:
 		/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
 		if (ublk_queue_ready(ubq)) {
@@ -2149,11 +2287,14 @@ static void ublk_align_max_io_size(struct ublk_device *ub)
 
 static int ublk_add_tag_set(struct ublk_device *ub)
 {
+	int zc = !!(ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY);
+	struct ublk_rq_data *data;
+
 	ub->tag_set.ops = &ublk_mq_ops;
 	ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
 	ub->tag_set.queue_depth = ub->dev_info.queue_depth;
 	ub->tag_set.numa_node = NUMA_NO_NODE;
-	ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
+	ub->tag_set.cmd_size = struct_size(data, buf, zc);
 	ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	ub->tag_set.driver_data = ub;
 	return blk_mq_alloc_tag_set(&ub->tag_set);
@@ -2449,8 +2590,12 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 		goto out_free_dev_number;
 	}
 
-	/* We are not ready to support zero copy */
-	ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
+	/* zero copy depends on user copy */
+	if ((ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY) &&
+			!ublk_dev_is_user_copy(ub)) {
+		ret = -EINVAL;
+		goto out_free_dev_number;
+	}
 
 	ub->dev_info.nr_hw_queues = min_t(unsigned int,
 			ub->dev_info.nr_hw_queues, nr_cpu_ids);
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index c8dc5f8ea699..dc720a979186 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -94,6 +94,8 @@
 	_IOWR('u', UBLK_IO_COMMIT_AND_FETCH_REQ, struct ublksrv_io_cmd)
 #define	UBLK_U_IO_NEED_GET_DATA		\
 	_IOWR('u', UBLK_IO_NEED_GET_DATA, struct ublksrv_io_cmd)
+#define	UBLK_U_IO_PROVIDE_IO_BUF	\
+	_IOWR('u', 0x23, struct ublksrv_io_cmd)
 
 /* only ABORT means that no re-fetch */
 #define UBLK_IO_RES_OK			0
@@ -127,9 +129,12 @@
 #define UBLKSRV_IO_BUF_TOTAL_SIZE	(1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)
 
 /*
- * zero copy requires 4k block size, and can remap ublk driver's io
- * request into ublksrv's vm space
- */
+ * io_uring provide kbuf command based zero copy
+ *
+ * Not available for UBLK_F_UNPRIVILEGED_DEV, because we rely on ublk
+ * server to fill up request buffer for READ IO, and ublk server can't
+ * be trusted in case of UBLK_F_UNPRIVILEGED_DEV.
+*/
 #define UBLK_F_SUPPORT_ZERO_COPY	(1ULL << 0)
 
 /*
-- 
2.46.0


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-25 12:22 ` [PATCH V8 4/7] io_uring: support SQE group Ming Lei
@ 2024-10-29  0:12   ` Jens Axboe
  2024-10-29  1:50     ` Ming Lei
  2024-10-31 21:24   ` Jens Axboe
  1 sibling, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-29  0:12 UTC (permalink / raw)
  To: Ming Lei, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Kevin Wolf

On 10/25/24 6:22 AM, Ming Lei wrote:
> SQE group is defined as one chain of SQEs starting with the first SQE that
> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> doesn't have it set, and it is similar with chain of linked SQEs.
> 
> Not like linked SQEs, each sqe is issued after the previous one is
> completed. All SQEs in one group can be submitted in parallel. To simplify
> the implementation from beginning, all members are queued after the leader
> is completed, however, this way may be changed and leader and members may
> be issued concurrently in future.
> 
> The 1st SQE is group leader, and the other SQEs are group member. The whole
> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> the two flags can't be set for group members. For the sake of
> simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
> 
> When the group is in one link chain, this group isn't submitted until the
> previous SQE or group is completed. And the following SQE or group can't
> be started if this group isn't completed. Failure from any group member will
> fail the group leader, then the link chain can be terminated.
> 
> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> group leader only, we respect IO_DRAIN by always completing group leader as
> the last one in the group. Meantime it is natural to post leader's CQE
> as the last one from application viewpoint.
> 
> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> support N:M dependency, such as:
> 
> - group A is chained with group B together
> - group A has N SQEs
> - group B has M SQEs
> 
> then M SQEs in group B depend on N SQEs in group A.
> 
> N:M dependency can support some interesting use cases in efficient way:
> 
> 1) read from multiple files, then write the read data into single file
> 
> 2) read from single file, and write the read data into multiple files
> 
> 3) write same data into multiple files, and read data from multiple files and
> compare if correct data is written
> 
> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> extend sqe->flags with io_uring context flag, such as use __pad3 for
> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.

Since it's taking the last flag, maybe a better idea to have the last
flag mean "more flags in (for example) __pad3" and put the new flag
there? Not sure you mean in terms of "io_uring context flag", would it
be an enter flag? Ring required to be setup with a certain flag? Neither
of those seem super encouraging, imho.

Apart from that, just a few minor nits below.

> +void io_fail_group_members(struct io_kiocb *req)
> +{
> +	struct io_kiocb *member = req->grp_link;
> +
> +	while (member) {
> +		struct io_kiocb *next = member->grp_link;
> +
> +		if (!(member->flags & REQ_F_FAIL)) {
> +			req_set_fail(member);
> +			io_req_set_res(member, -ECANCELED, 0);
> +		}
> +		member = next;
> +	}
> +}
> +
> +static void io_queue_group_members(struct io_kiocb *req)
> +{
> +	struct io_kiocb *member = req->grp_link;
> +
> +	if (!member)
> +		return;
> +
> +	req->grp_link = NULL;
> +	while (member) {
> +		struct io_kiocb *next = member->grp_link;
> +
> +		member->grp_leader = req;
> +		if (unlikely(member->flags & REQ_F_FAIL)) {
> +			io_req_task_queue_fail(member, member->cqe.res);
> +		} else if (unlikely(req->flags & REQ_F_FAIL)) {
> +			io_req_task_queue_fail(member, -ECANCELED);
> +		} else {
> +			io_req_task_queue(member);
> +		}
> +		member = next;
> +	}
> +}

Was going to say don't check for !member, you have the while loop. Which
is what you do in the helper above. You can also drop the parens in this
one.

> +static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
> +					     struct io_kiocb **leader)
> +{
> +	/*
> +	 * Group completion is done, so clear the flag for avoiding double
> +	 * handling in case of io-wq
> +	 */
> +	req->flags &= ~REQ_F_SQE_GROUP;
> +
> +	if (req_is_group_leader(req)) {
> +		/* Queue members now */
> +		if (req->grp_link)
> +			io_queue_group_members(req);
> +		return GROUP_LEADER;
> +	} else {
> +		if (!req_is_last_group_member(req))
> +			return GROUP_OTHER_MEMBER;
> +
> +		/*
> +		 * Prepare for freeing leader which can only be found from
> +		 * the last member
> +		 */
> +		*leader = req->grp_leader;
> +		(*leader)->flags &= ~REQ_F_SQE_GROUP_LEADER;
> +		req->grp_leader = NULL;
> +		return GROUP_LAST_MEMBER;
> +	}
> +}

Just drop the second indentation here.

> @@ -927,7 +1051,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
>  	 * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
>  	 * the submitter task context, IOPOLL protects with uring_lock.
>  	 */
> -	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
> +	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL) ||
> +	    (req->flags & REQ_F_SQE_GROUP)) {
>  		req->io_task_work.func = io_req_task_complete;
>  		io_req_task_work_add(req);
>  		return;

Minor detail, but might be nice with a REQ_F_* flag for this in the
future.

> @@ -1450,8 +1596,16 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>  		struct io_kiocb *req = container_of(node, struct io_kiocb,
>  					    comp_list);
>  
> -		if (!(req->flags & REQ_F_CQE_SKIP))
> -			io_req_commit_cqe(ctx, req);
> +		if (unlikely(req->flags & (REQ_F_CQE_SKIP | REQ_F_SQE_GROUP))) {
> +			if (req->flags & REQ_F_SQE_GROUP) {
> +				io_complete_group_req(req);
> +				continue;
> +			}
> +
> +			if (req->flags & REQ_F_CQE_SKIP)
> +				continue;
> +		}
> +		io_req_commit_cqe(ctx, req);
>  	}
>  	__io_cq_unlock_post(ctx);
>  
> @@ -1661,8 +1815,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
>  	struct io_kiocb *cur;
>  
>  	/* need original cached_sq_head, but it was increased for each req */
> -	io_for_each_link(cur, req)
> -		seq--;
> +	io_for_each_link(cur, req) {
> +		if (req_is_group_leader(cur))
> +			seq -= cur->grp_refs;
> +		else
> +			seq--;
> +	}
>  	return seq;
>  }
>  
> @@ -2124,6 +2282,67 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
>  	return def->prep(req, sqe);
>  }
>  
> +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> +				     struct io_kiocb *req)
> +{
> +	/*
> +	 * Group chain is similar with link chain: starts with 1st sqe with
> +	 * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> +	 */
> +	if (group->head) {
> +		struct io_kiocb *lead = group->head;
> +
> +		/*
> +		 * Members can't be in link chain, can't be drained, but
> +		 * the whole group can be linked or drained by setting
> +		 * flags on group leader.
> +		 *
> +		 * IOSQE_CQE_SKIP_SUCCESS can't be set for member
> +		 * for the sake of simplicity
> +		 */
> +		if (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN |
> +				REQ_F_CQE_SKIP))
> +			req_fail_link_node(lead, -EINVAL);
> +
> +		lead->grp_refs += 1;
> +		group->last->grp_link = req;
> +		group->last = req;
> +
> +		if (req->flags & REQ_F_SQE_GROUP)
> +			return NULL;
> +
> +		req->grp_link = NULL;
> +		req->flags |= REQ_F_SQE_GROUP;
> +		group->head = NULL;
> +
> +		return lead;
> +	} else {
> +		if (WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP)))
> +			return req;
> +		group->head = req;
> +		group->last = req;
> +		req->grp_refs = 1;
> +		req->flags |= REQ_F_SQE_GROUP_LEADER;
> +		return NULL;
> +	}
> +}

Same here, drop the 2nd indentation.

> diff --git a/io_uring/timeout.c b/io_uring/timeout.c
> index 9973876d91b0..ed6c74f1a475 100644
> --- a/io_uring/timeout.c
> +++ b/io_uring/timeout.c
> @@ -149,6 +149,8 @@ static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts)
>  			res = link->cqe.res;
>  		link->link = NULL;
>  		io_req_set_res(link, res, 0);
> +		if (req_is_group_leader(link))
> +			io_fail_group_members(link);
>  		io_req_task_complete(link, ts);
>  		link = nxt;
>  	}
> @@ -543,6 +545,10 @@ static int __io_timeout_prep(struct io_kiocb *req,
>  	if (is_timeout_link) {
>  		struct io_submit_link *link = &req->ctx->submit_state.link;
>  
> +		/* so far disallow IO group link timeout */
> +		if (req->ctx->submit_state.group.head)
> +			return -EINVAL;
> +

For now, disallow IO group linked timeout

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-29  0:12   ` Jens Axboe
@ 2024-10-29  1:50     ` Ming Lei
  2024-10-29 16:38       ` Pavel Begunkov
  0 siblings, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-29  1:50 UTC (permalink / raw)
  To: Jens Axboe
  Cc: io-uring, Pavel Begunkov, linux-block, Uday Shankar,
	Akilesh Kailash, Kevin Wolf

On Mon, Oct 28, 2024 at 06:12:34PM -0600, Jens Axboe wrote:
> On 10/25/24 6:22 AM, Ming Lei wrote:
> > SQE group is defined as one chain of SQEs starting with the first SQE that
> > has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> > doesn't have it set, and it is similar with chain of linked SQEs.
> > 
> > Not like linked SQEs, each sqe is issued after the previous one is
> > completed. All SQEs in one group can be submitted in parallel. To simplify
> > the implementation from beginning, all members are queued after the leader
> > is completed, however, this way may be changed and leader and members may
> > be issued concurrently in future.
> > 
> > The 1st SQE is group leader, and the other SQEs are group member. The whole
> > group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> > the two flags can't be set for group members. For the sake of
> > simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
> > 
> > When the group is in one link chain, this group isn't submitted until the
> > previous SQE or group is completed. And the following SQE or group can't
> > be started if this group isn't completed. Failure from any group member will
> > fail the group leader, then the link chain can be terminated.
> > 
> > When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> > previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> > group leader only, we respect IO_DRAIN by always completing group leader as
> > the last one in the group. Meantime it is natural to post leader's CQE
> > as the last one from application viewpoint.
> > 
> > Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> > support N:M dependency, such as:
> > 
> > - group A is chained with group B together
> > - group A has N SQEs
> > - group B has M SQEs
> > 
> > then M SQEs in group B depend on N SQEs in group A.
> > 
> > N:M dependency can support some interesting use cases in efficient way:
> > 
> > 1) read from multiple files, then write the read data into single file
> > 
> > 2) read from single file, and write the read data into multiple files
> > 
> > 3) write same data into multiple files, and read data from multiple files and
> > compare if correct data is written
> > 
> > Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> > extend sqe->flags with io_uring context flag, such as use __pad3 for
> > non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
> 
> Since it's taking the last flag, maybe a better idea to have the last
> flag mean "more flags in (for example) __pad3" and put the new flag
> there? Not sure you mean in terms of "io_uring context flag", would it
> be an enter flag? Ring required to be setup with a certain flag? Neither
> of those seem super encouraging, imho.

I meant:

If "more flags in __pad3" is enabled in future we may claim it as one 
feature to userspace, such as IORING_FEAT_EXT_FLAG.

Will improve the above commit log.

> 
> Apart from that, just a few minor nits below.
> 
> > +void io_fail_group_members(struct io_kiocb *req)
> > +{
> > +	struct io_kiocb *member = req->grp_link;
> > +
> > +	while (member) {
> > +		struct io_kiocb *next = member->grp_link;
> > +
> > +		if (!(member->flags & REQ_F_FAIL)) {
> > +			req_set_fail(member);
> > +			io_req_set_res(member, -ECANCELED, 0);
> > +		}
> > +		member = next;
> > +	}
> > +}
> > +
> > +static void io_queue_group_members(struct io_kiocb *req)
> > +{
> > +	struct io_kiocb *member = req->grp_link;
> > +
> > +	if (!member)
> > +		return;
> > +
> > +	req->grp_link = NULL;
> > +	while (member) {
> > +		struct io_kiocb *next = member->grp_link;
> > +
> > +		member->grp_leader = req;
> > +		if (unlikely(member->flags & REQ_F_FAIL)) {
> > +			io_req_task_queue_fail(member, member->cqe.res);
> > +		} else if (unlikely(req->flags & REQ_F_FAIL)) {
> > +			io_req_task_queue_fail(member, -ECANCELED);
> > +		} else {
> > +			io_req_task_queue(member);
> > +		}
> > +		member = next;
> > +	}
> > +}
> 
> Was going to say don't check for !member, you have the while loop. Which
> is what you do in the helper above. You can also drop the parens in this
> one.

OK, will remove the check `!member` and all parens.

> 
> > +static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
> > +					     struct io_kiocb **leader)
> > +{
> > +	/*
> > +	 * Group completion is done, so clear the flag for avoiding double
> > +	 * handling in case of io-wq
> > +	 */
> > +	req->flags &= ~REQ_F_SQE_GROUP;
> > +
> > +	if (req_is_group_leader(req)) {
> > +		/* Queue members now */
> > +		if (req->grp_link)
> > +			io_queue_group_members(req);
> > +		return GROUP_LEADER;
> > +	} else {
> > +		if (!req_is_last_group_member(req))
> > +			return GROUP_OTHER_MEMBER;
> > +
> > +		/*
> > +		 * Prepare for freeing leader which can only be found from
> > +		 * the last member
> > +		 */
> > +		*leader = req->grp_leader;
> > +		(*leader)->flags &= ~REQ_F_SQE_GROUP_LEADER;
> > +		req->grp_leader = NULL;
> > +		return GROUP_LAST_MEMBER;
> > +	}
> > +}
> 
> Just drop the second indentation here.

OK.

> 
> > @@ -927,7 +1051,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> >  	 * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
> >  	 * the submitter task context, IOPOLL protects with uring_lock.
> >  	 */
> > -	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
> > +	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL) ||
> > +	    (req->flags & REQ_F_SQE_GROUP)) {
> >  		req->io_task_work.func = io_req_task_complete;
> >  		io_req_task_work_add(req);
> >  		return;
> 
> Minor detail, but might be nice with a REQ_F_* flag for this in the
> future.
> 
> > @@ -1450,8 +1596,16 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> >  		struct io_kiocb *req = container_of(node, struct io_kiocb,
> >  					    comp_list);
> >  
> > -		if (!(req->flags & REQ_F_CQE_SKIP))
> > -			io_req_commit_cqe(ctx, req);
> > +		if (unlikely(req->flags & (REQ_F_CQE_SKIP | REQ_F_SQE_GROUP))) {
> > +			if (req->flags & REQ_F_SQE_GROUP) {
> > +				io_complete_group_req(req);
> > +				continue;
> > +			}
> > +
> > +			if (req->flags & REQ_F_CQE_SKIP)
> > +				continue;
> > +		}
> > +		io_req_commit_cqe(ctx, req);
> >  	}
> >  	__io_cq_unlock_post(ctx);
> >  
> > @@ -1661,8 +1815,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
> >  	struct io_kiocb *cur;
> >  
> >  	/* need original cached_sq_head, but it was increased for each req */
> > -	io_for_each_link(cur, req)
> > -		seq--;
> > +	io_for_each_link(cur, req) {
> > +		if (req_is_group_leader(cur))
> > +			seq -= cur->grp_refs;
> > +		else
> > +			seq--;
> > +	}
> >  	return seq;
> >  }
> >  
> > @@ -2124,6 +2282,67 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> >  	return def->prep(req, sqe);
> >  }
> >  
> > +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> > +				     struct io_kiocb *req)
> > +{
> > +	/*
> > +	 * Group chain is similar with link chain: starts with 1st sqe with
> > +	 * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> > +	 */
> > +	if (group->head) {
> > +		struct io_kiocb *lead = group->head;
> > +
> > +		/*
> > +		 * Members can't be in link chain, can't be drained, but
> > +		 * the whole group can be linked or drained by setting
> > +		 * flags on group leader.
> > +		 *
> > +		 * IOSQE_CQE_SKIP_SUCCESS can't be set for member
> > +		 * for the sake of simplicity
> > +		 */
> > +		if (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN |
> > +				REQ_F_CQE_SKIP))
> > +			req_fail_link_node(lead, -EINVAL);
> > +
> > +		lead->grp_refs += 1;
> > +		group->last->grp_link = req;
> > +		group->last = req;
> > +
> > +		if (req->flags & REQ_F_SQE_GROUP)
> > +			return NULL;
> > +
> > +		req->grp_link = NULL;
> > +		req->flags |= REQ_F_SQE_GROUP;
> > +		group->head = NULL;
> > +
> > +		return lead;
> > +	} else {
> > +		if (WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP)))
> > +			return req;
> > +		group->head = req;
> > +		group->last = req;
> > +		req->grp_refs = 1;
> > +		req->flags |= REQ_F_SQE_GROUP_LEADER;
> > +		return NULL;
> > +	}
> > +}
> 
> Same here, drop the 2nd indentation.

OK.

> 
> > diff --git a/io_uring/timeout.c b/io_uring/timeout.c
> > index 9973876d91b0..ed6c74f1a475 100644
> > --- a/io_uring/timeout.c
> > +++ b/io_uring/timeout.c
> > @@ -149,6 +149,8 @@ static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts)
> >  			res = link->cqe.res;
> >  		link->link = NULL;
> >  		io_req_set_res(link, res, 0);
> > +		if (req_is_group_leader(link))
> > +			io_fail_group_members(link);
> >  		io_req_task_complete(link, ts);
> >  		link = nxt;
> >  	}
> > @@ -543,6 +545,10 @@ static int __io_timeout_prep(struct io_kiocb *req,
> >  	if (is_timeout_link) {
> >  		struct io_submit_link *link = &req->ctx->submit_state.link;
> >  
> > +		/* so far disallow IO group link timeout */
> > +		if (req->ctx->submit_state.group.head)
> > +			return -EINVAL;
> > +
> 
> For now, disallow IO group linked timeout

OK.

thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-29  1:50     ` Ming Lei
@ 2024-10-29 16:38       ` Pavel Begunkov
  0 siblings, 0 replies; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-29 16:38 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash, Kevin Wolf

On 10/29/24 01:50, Ming Lei wrote:
> On Mon, Oct 28, 2024 at 06:12:34PM -0600, Jens Axboe wrote:
>> On 10/25/24 6:22 AM, Ming Lei wrote:
>>> SQE group is defined as one chain of SQEs starting with the first SQE that
>>> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
>>> doesn't have it set, and it is similar with chain of linked SQEs.
>>>
>>> Not like linked SQEs, each sqe is issued after the previous one is
>>> completed. All SQEs in one group can be submitted in parallel. To simplify
>>> the implementation from beginning, all members are queued after the leader
>>> is completed, however, this way may be changed and leader and members may
>>> be issued concurrently in future.
>>>
>>> The 1st SQE is group leader, and the other SQEs are group member. The whole
>>> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
>>> the two flags can't be set for group members. For the sake of
>>> simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
>>>
>>> When the group is in one link chain, this group isn't submitted until the
>>> previous SQE or group is completed. And the following SQE or group can't
>>> be started if this group isn't completed. Failure from any group member will
>>> fail the group leader, then the link chain can be terminated.
>>>
>>> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
>>> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
>>> group leader only, we respect IO_DRAIN by always completing group leader as
>>> the last one in the group. Meantime it is natural to post leader's CQE
>>> as the last one from application viewpoint.
>>>
>>> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
>>> support N:M dependency, such as:
>>>
>>> - group A is chained with group B together
>>> - group A has N SQEs
>>> - group B has M SQEs
>>>
>>> then M SQEs in group B depend on N SQEs in group A.
>>>
>>> N:M dependency can support some interesting use cases in efficient way:
>>>
>>> 1) read from multiple files, then write the read data into single file
>>>
>>> 2) read from single file, and write the read data into multiple files
>>>
>>> 3) write same data into multiple files, and read data from multiple files and
>>> compare if correct data is written
>>>
>>> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
>>> extend sqe->flags with io_uring context flag, such as use __pad3 for
>>> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
>>
>> Since it's taking the last flag, maybe a better idea to have the last
>> flag mean "more flags in (for example) __pad3" and put the new flag
>> there? Not sure you mean in terms of "io_uring context flag", would it
>> be an enter flag? Ring required to be setup with a certain flag? Neither
>> of those seem super encouraging, imho.
> 
> I meant:
> 
> If "more flags in __pad3" is enabled in future we may claim it as one
> feature to userspace, such as IORING_FEAT_EXT_FLAG.
> 
> Will improve the above commit log.

And we can't take it in either case. The field is in a union, and
other opcodes use that part of the SQE. Enabling a generic feature
for a subset of requests only is not a good idea.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-25 12:22 ` [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF Ming Lei
@ 2024-10-29 16:47   ` Pavel Begunkov
  2024-10-30  0:45     ` Ming Lei
  0 siblings, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-29 16:47 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Uday Shankar, Akilesh Kailash

On 10/25/24 13:22, Ming Lei wrote:
...
> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index 4bc0d762627d..5a2025d48804 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
>   	if (io_rw_alloc_async(req))
>   		return -ENOMEM;
>   
> -	if (!do_import || io_do_buffer_select(req))
> +	if (!do_import || io_do_buffer_select(req) ||
> +	    io_use_leased_grp_kbuf(req))
>   		return 0;
>   
>   	rw = req->async_data;
> @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
>   		}
>   		req_set_fail(req);
>   		req->cqe.res = res;
> +		if (io_use_leased_grp_kbuf(req)) {

That's what I'm talking about, we're pushing more and
into the generic paths (or patching every single hot opcode
there is). You said it's fine for ublk the way it was, i.e.
without tracking, so let's then pretend it's a ublk specific
feature, kill that addition and settle at that if that's the
way to go.

> +			struct io_async_rw *io = req->async_data;
> +
> +			io_req_zero_remained(req, &io->iter);
> +		}
>   	}
>   	return false;

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
                   ` (6 preceding siblings ...)
  2024-10-25 12:22 ` [PATCH V8 7/7] ublk: support leasing io " Ming Lei
@ 2024-10-29 17:01 ` Pavel Begunkov
  2024-10-29 17:04   ` Jens Axboe
  7 siblings, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-29 17:01 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, io-uring; +Cc: linux-block, Uday Shankar, Akilesh Kailash

On 10/25/24 13:22, Ming Lei wrote:
> The 1st 3 patches are cleanup, and prepare for adding sqe group.
> 
> The 4th patch supports generic sqe group which is like link chain, but
> allows each sqe in group to be issued in parallel and the group shares
> same IO_LINK & IO_DRAIN boundary, so N:M dependency can be supported with
> sqe group & io link together.
> 
> The 5th & 6th patches supports to lease other subsystem's kbuf to
> io_uring for use in sqe group wide.
> 
> The 7th patch supports ublk zero copy based on io_uring sqe group &
> leased kbuf.
> 
> Tests:
> 
> 1) pass liburing test
> - make runtests
> 
> 2) write/pass sqe group test case and sqe provide buffer case:
> 
> https://github.com/ming1/liburing/tree/uring_group
> 
> - covers related sqe flags combination and linking groups, both nop and
> one multi-destination file copy.
> 
> - cover failure handling test: fail leader IO or member IO in both single
>    group and linked groups, which is done in each sqe flags combination
>    test
> 
> - cover io_uring with leased group kbuf by adding ublk-loop-zc

To make my position clear, I think the table approach will turn
much better API-wise if the performance suffices, and we can only know
that experimentally. I tried that idea with sockets back then, and it
was looking well. It'd be great if someone tries to implement and
compare it, though I don't believe I should be trying it, so maybe Ming
or Jens can, especially since Jens already posted a couple series for
problems standing in the way, i.e global rsrc nodes and late buffer
binding. In any case, I'm not opposing to the series if Jens decides to
merge it.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-29 17:01 ` [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Pavel Begunkov
@ 2024-10-29 17:04   ` Jens Axboe
  2024-10-29 19:18     ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-29 17:04 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei, io-uring
  Cc: linux-block, Uday Shankar, Akilesh Kailash

On 10/29/24 11:01 AM, Pavel Begunkov wrote:
> On 10/25/24 13:22, Ming Lei wrote:
>> The 1st 3 patches are cleanup, and prepare for adding sqe group.
>>
>> The 4th patch supports generic sqe group which is like link chain, but
>> allows each sqe in group to be issued in parallel and the group shares
>> same IO_LINK & IO_DRAIN boundary, so N:M dependency can be supported with
>> sqe group & io link together.
>>
>> The 5th & 6th patches supports to lease other subsystem's kbuf to
>> io_uring for use in sqe group wide.
>>
>> The 7th patch supports ublk zero copy based on io_uring sqe group &
>> leased kbuf.
>>
>> Tests:
>>
>> 1) pass liburing test
>> - make runtests
>>
>> 2) write/pass sqe group test case and sqe provide buffer case:
>>
>> https://github.com/ming1/liburing/tree/uring_group
>>
>> - covers related sqe flags combination and linking groups, both nop and
>> one multi-destination file copy.
>>
>> - cover failure handling test: fail leader IO or member IO in both single
>>    group and linked groups, which is done in each sqe flags combination
>>    test
>>
>> - cover io_uring with leased group kbuf by adding ublk-loop-zc
> 
> To make my position clear, I think the table approach will turn
> much better API-wise if the performance suffices, and we can only know
> that experimentally. I tried that idea with sockets back then, and it
> was looking well. It'd be great if someone tries to implement and
> compare it, though I don't believe I should be trying it, so maybe Ming
> or Jens can, especially since Jens already posted a couple series for
> problems standing in the way, i.e global rsrc nodes and late buffer
> binding. In any case, I'm not opposing to the series if Jens decides to
> merge it.

With the rsrc node stuff sorted out, I was thinking last night that I
should take another look at this. While that work was (mostly) done
because of the lingering closes, it does nicely enable ephemeral buffers
too.

I'll take a stab at it... While I would love to make progress on this
feature proposed in this series, it's arguably more important to do it
in such a way that we can live with it, long term.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-29 17:04   ` Jens Axboe
@ 2024-10-29 19:18     ` Jens Axboe
  2024-10-29 20:06       ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-29 19:18 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei, io-uring
  Cc: linux-block, Uday Shankar, Akilesh Kailash

[-- Attachment #1: Type: text/plain, Size: 6001 bytes --]

On 10/29/24 11:04 AM, Jens Axboe wrote:
>> To make my position clear, I think the table approach will turn
>> much better API-wise if the performance suffices, and we can only know
>> that experimentally. I tried that idea with sockets back then, and it
>> was looking well. It'd be great if someone tries to implement and
>> compare it, though I don't believe I should be trying it, so maybe Ming
>> or Jens can, especially since Jens already posted a couple series for
>> problems standing in the way, i.e global rsrc nodes and late buffer
>> binding. In any case, I'm not opposing to the series if Jens decides to
>> merge it.
> 
> With the rsrc node stuff sorted out, I was thinking last night that I
> should take another look at this. While that work was (mostly) done
> because of the lingering closes, it does nicely enable ephemeral buffers
> too.
> 
> I'll take a stab at it... While I would love to make progress on this
> feature proposed in this series, it's arguably more important to do it
> in such a way that we can live with it, long term.

Ming, here's another stab at this, see attached patch. It adds a
LOCAL_BUF opcode, which maps a user provided buffer to a io_rsrc_node
that opcodes can then use. The buffer is visible ONLY within a given
submission - in other words, only within a single io_uring_submit()
call. The buffer provided is done so at prep time, which means you don't
need to serialize with the LOCAL_BUF op itself. You can do:

sqe = io_uring_get_sqe(ring);
io_uring_prep_local_buf(sqe, buffer, length, tag);

sqe = io_uring_get_sqe(ring);
io_uring_prep_whatever_op_fixed(sqe, buffer, length, foo);

and have 'whatever' rely on the buffer either being there to use, or the
import failing with -EFAULT. No IOSQE_IO_LINK or similar is needed.
Obviously if you do:

sqe = io_uring_get_sqe(ring);
io_uring_prep_local_buf(sqe, buffer, length, tag);

sqe = io_uring_get_sqe(ring);
io_uring_prep_read_thing_fixed(sqe, buffer, length, foo);
sqe->flags |= IOSQE_IO_LINK;

sqe = io_uring_get_sqe(ring);
io_uring_prep_write_thing_fixed(sqe, buffer, length, foo);

then the filling of the buffer and whoever uses the filled buffer will
need to be serialized, to ensure the buffer content is valid for the
write.

Any opcode using the ephemeral/local buffer will need to grab a
reference to it, just like what is done for normal registered buffers.
If assigned to req->rsrc_node, then it'll be put as part of normal
completion. Hence no special handling is needed for this. The reference
that submit holds is assigned by LOCAL_BUF, and will be put when
submission ends. Hence no requirement that opcodes finish before submit
ends, they have their own ref.

All of that should make sense, I think. I'm attaching the most basic of
test apps I wrote to test this, as well as using:

diff --git a/io_uring/rw.c b/io_uring/rw.c
index 30448f343c7f..89662f305342 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -338,7 +338,10 @@ static int io_prep_rw_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe
 	if (unlikely(ret))
 		return ret;
 
-	node = io_rsrc_node_lookup(&ctx->buf_table, req->buf_index);
+	if (ctx->submit_state.rsrc_node != rsrc_empty_node)
+		node = ctx->submit_state.rsrc_node;
+	else
+		node = io_rsrc_node_lookup(&ctx->buf_table, req->buf_index);
 	if (!node)
 		return -EFAULT;
 	io_req_assign_rsrc_node(req, node);

just so I could test it with normal read/write fixed and do a zero copy
read/write operation where the write file ends up with the data from the
read file.

When you run the test app, you should see:

axboe@m2max-kvm ~/g/liburing (reg-wait)> test/local-buf.t
buffer 0xaaaada34a000
res=0, ud=0x1
res=4096, ud=0x2
res=4096, ud=0x3
res=0, ud=0xaaaada34a000

which shows LOCAL_BUF completing first, then a 4k read, then a 4k write,
and finally the notification for the buffer being done. The test app
sets up the tag to be the buffer address, could obviously be anything
you want.

Now, this implementation requires a user buffer, and as far as I'm told,
you currently have kernel buffers on the ublk side. There's absolutely
no reason why kernel buffers cannot work, we'd most likely just need to
add a IORING_RSRC_KBUFFER type to handle that. My question here is how
hard is this requirement? Reason I ask is that it's much simpler to work
with userspace buffers. Yes the current implementation maps them
everytime, we could certainly change that, however I don't see this
being an issue. It's really no different than O_DIRECT, and you only
need to map them once for a read + whatever number of writes you'd need
to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
that buffer is unmapped. This is a notification for the application that
it's done using the buffer. For a pure kernel buffer, we'd either need
to be able to reference it (so that we KNOW it's not going away) and/or
have a callback associated with the buffer.

Would it be possible for ublk to require the user side to register a
range of memory that should be used for the write buffers, such that
they could be mapped in the kernel instead? Maybe this memory is already
registered as such? I don't know all the details of the ublk zero copy,
but I would imagine there's some flexibility here in terms of how it
gets setup.

ublk would then need to add opcodes that utilize LOCAL_BUF for this,
obviously. As it stands, with the patch, nobody can access these
buffers, we'd need a READ_LOCAL_FIXED etc to have opcodes be able to
access them. But that should be fine, you need specific opcodes for zero
copy anyway. You can probably even reuse existing opcodes, and just add
something like IORING_URING_CMD_LOCAL as a flag rather than
IORING_URING_CMD_FIXED that is used now for registered buffers.

Let me know what you think. Like I mentioned, this is just a rough
patch. It does work though and it is safe, but obviously only does
userspace memory right now. It sits on top of my io_uring-rsrc branch,
which rewrites the rsrc handling.

-- 
Jens Axboe

[-- Attachment #2: 0001-io_uring-add-support-for-an-ephemeral-per-submit-buf.patch --]
[-- Type: text/x-patch, Size: 4708 bytes --]

From 01591be7d66879618fb8f8965141ac24e9334068 Mon Sep 17 00:00:00 2001
From: Jens Axboe <[email protected]>
Date: Tue, 29 Oct 2024 12:00:48 -0600
Subject: [PATCH] io_uring: add support for an ephemeral per-submit buffer

Use the rewritten rsrc node management to provide a buffer that's local
to this submission only, it'll get put when done. Opcodes will need
special support for utilizing the buffer, rather than grabbing a
registered buffer from the normal ring buffer table.

The buffer is purely submission wide, it only exists within that
submission. It is provided at prep time, so users of the buffer need
not use serializing IOSQE_IO_LINK to rely on being able to use it.
Obviously multiple requests are using the same buffer and need
serialization between them, those dependencies must be expressed.

Signed-off-by: Jens Axboe <[email protected]>
---
 include/linux/io_uring_types.h |  1 +
 include/uapi/linux/io_uring.h  |  1 +
 io_uring/io_uring.c            |  2 ++
 io_uring/opdef.c               |  7 +++++++
 io_uring/rsrc.c                | 30 ++++++++++++++++++++++++++++++
 io_uring/rsrc.h                |  3 +++
 6 files changed, 44 insertions(+)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index c283179b0c89..0ce155374016 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -208,6 +208,7 @@ struct io_submit_state {
 	bool			need_plug;
 	bool			cq_flush;
 	unsigned short		submit_nr;
+	struct io_rsrc_node	*rsrc_node;
 	struct blk_plug		plug;
 };
 
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index ce58c4590de6..a7d0aaf6daf5 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -259,6 +259,7 @@ enum io_uring_op {
 	IORING_OP_FTRUNCATE,
 	IORING_OP_BIND,
 	IORING_OP_LISTEN,
+	IORING_OP_LOCAL_BUF,
 
 	/* this goes last, obviously */
 	IORING_OP_LAST,
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3a535e9e8ac3..d517d6a0fd39 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2205,6 +2205,7 @@ static void io_submit_state_end(struct io_ring_ctx *ctx)
 		io_queue_sqe_fallback(state->link.head);
 	/* flush only after queuing links as they can generate completions */
 	io_submit_flush_completions(ctx);
+	io_put_rsrc_node(state->rsrc_node);
 	if (state->plug_started)
 		blk_finish_plug(&state->plug);
 }
@@ -2220,6 +2221,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 	state->submit_nr = max_ios;
 	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
+	state->rsrc_node = rsrc_empty_node;
 }
 
 static void io_commit_sqring(struct io_ring_ctx *ctx)
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 3de75eca1c92..ae18e403a7bc 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -515,6 +515,10 @@ const struct io_issue_def io_issue_defs[] = {
 		.prep			= io_eopnotsupp_prep,
 #endif
 	},
+	[IORING_OP_LOCAL_BUF] = {
+		.prep			= io_local_buf_prep,
+		.issue			= io_local_buf,
+	},
 };
 
 const struct io_cold_def io_cold_defs[] = {
@@ -744,6 +748,9 @@ const struct io_cold_def io_cold_defs[] = {
 	[IORING_OP_LISTEN] = {
 		.name			= "LISTEN",
 	},
+	[IORING_OP_LOCAL_BUF] = {
+		.name			= "LOCAL_BUF",
+	},
 };
 
 const char *io_uring_get_opcode(u8 opcode)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 6e30679175aa..9621ba533b35 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1069,3 +1069,33 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
 		fput(file);
 	return ret;
 }
+
+int io_local_buf_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_submit_state *state = &ctx->submit_state;
+	struct page *last_hpage = NULL;
+	struct io_rsrc_node *node;
+	struct iovec iov;
+	__u64 tag;
+
+	if (state->rsrc_node != rsrc_empty_node)
+		return -EBUSY;
+
+	iov.iov_base = u64_to_user_ptr(READ_ONCE(sqe->addr));
+	iov.iov_len = READ_ONCE(sqe->len);
+	tag = READ_ONCE(sqe->addr2);
+
+	node = io_sqe_buffer_register(ctx, &iov, &last_hpage);
+	if (IS_ERR(node))
+		return PTR_ERR(node);
+
+	node->tag = tag;
+	state->rsrc_node = node;
+	return 0;
+}
+
+int io_local_buf(struct io_kiocb *req, unsigned int issue_flags)
+{
+	return IOU_OK;
+}
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index c9f42491c747..be9b490c400e 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -141,4 +141,7 @@ static inline void __io_unaccount_mem(struct user_struct *user,
 	atomic_long_sub(nr_pages, &user->locked_vm);
 }
 
+int io_local_buf_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+int io_local_buf(struct io_kiocb *req, unsigned int issue_flags);
+
 #endif
-- 
2.45.2


[-- Attachment #3: local-buf.c --]
[-- Type: text/x-csrc, Size: 1710 bytes --]

/* SPDX-License-Identifier: MIT */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

#include "liburing.h"
#include "helpers.h"

int main(int argc, char *argv[])
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct io_uring ring;
	void *buffer;
	int ret, in_fd, out_fd, i;
	char bufr[64], bufw[64];

	if (posix_memalign(&buffer, 4096, 4096))
		return 1;

	printf("buffer %p\n", buffer);

	sprintf(bufr, ".local-buf-read.%d\n", getpid());
	t_create_file_pattern(bufr, 4096, 0x5a);

	sprintf(bufw, ".local-buf-write.%d\n", getpid());
	t_create_file_pattern(bufw, 4096, 0x00);

	in_fd = open(bufr, O_RDONLY | O_DIRECT);
	if (in_fd < 0) {
		perror("open");
		return 1;
	}

	out_fd = open(bufw, O_WRONLY | O_DIRECT);
	if (out_fd < 0) {
		perror("open");
		return 1;
	}

	io_uring_queue_init(8, &ring, IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN);

	/* add local buf */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_rw(IORING_OP_LOCAL_BUF, sqe, 0, buffer, 4096, (unsigned long) buffer);
	sqe->user_data = 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read_fixed(sqe, in_fd, buffer, 4096, 0, 0);
	sqe->flags |= IOSQE_IO_LINK;
	sqe->user_data = 2;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write_fixed(sqe, out_fd, buffer, 4096, 0, 0);
	sqe->user_data = 3;
	
	ret = io_uring_submit(&ring);
	if (ret != 3) {
		fprintf(stderr, "submit: %d\n", ret);
		return 1;
	}

	for (i = 0; i < 4; i++) {
		ret = io_uring_wait_cqe(&ring, &cqe);
		if (ret) {
			fprintf(stderr, "wait: %d\n", ret);
			return 1;
		}
		printf("res=%d, ud=0x%lx\n", cqe->res, (long) cqe->user_data);
		io_uring_cqe_seen(&ring, cqe);
	}

	return 0;
}

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-29 19:18     ` Jens Axboe
@ 2024-10-29 20:06       ` Jens Axboe
  2024-10-29 21:26         ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-29 20:06 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei, io-uring
  Cc: linux-block, Uday Shankar, Akilesh Kailash

On 10/29/24 1:18 PM, Jens Axboe wrote:
> Now, this implementation requires a user buffer, and as far as I'm told,
> you currently have kernel buffers on the ublk side. There's absolutely
> no reason why kernel buffers cannot work, we'd most likely just need to
> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
> hard is this requirement? Reason I ask is that it's much simpler to work
> with userspace buffers. Yes the current implementation maps them
> everytime, we could certainly change that, however I don't see this
> being an issue. It's really no different than O_DIRECT, and you only
> need to map them once for a read + whatever number of writes you'd need
> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
> that buffer is unmapped. This is a notification for the application that
> it's done using the buffer. For a pure kernel buffer, we'd either need
> to be able to reference it (so that we KNOW it's not going away) and/or
> have a callback associated with the buffer.

Just to expand on this - if a kernel buffer is absolutely required, for
example if you're inheriting pages from the page cache or other
locations you cannot control, we would need to add something ala the
below:

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9621ba533b35..b0258eb37681 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -474,6 +474,10 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
 		if (node->buf)
 			io_buffer_unmap(node->ctx, node);
 		break;
+	case IORING_RSRC_KBUFFER:
+		if (node->buf)
+			node->kbuf_fn(node->buf);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index be9b490c400e..8d00460d47ff 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -11,6 +11,7 @@
 enum {
 	IORING_RSRC_FILE		= 0,
 	IORING_RSRC_BUFFER		= 1,
+	IORING_RSRC_KBUFFER		= 2,
 };
 
 struct io_rsrc_node {
@@ -19,6 +20,7 @@ struct io_rsrc_node {
 	u16				type;
 
 	u64 tag;
+	void (*kbuf_fn)(struct io_mapped_ubuf *);
 	union {
 		unsigned long file_ptr;
 		struct io_mapped_ubuf *buf;

and provide a helper that allocates an io_rsrc_node, sets it to type
IORING_RSRC_KBUFFER, and assigns a ->kbuf_fn() that gets a callback when
the final put of the node happens. Whatever ublk command that wants to
do zero copy would call this helper at prep time and set the
io_submit_state buffer to be used.

Likewise, probably provide an io_rsrc helper that can be called by
kbuf_fn as well to do final cleanup, so that the callback itself is only
tasked with whatever it needs to do once it's received the data.

For this to work, we'll absolutely need the provider to guarantee that
the pages mapped will remain persistent until that callback is received.
Or have a way to reference the data inside rsrc.c. I'm imagining this is
just stacking the IO, so eg you get a read with some data already in
there that you don't control, and you don't complete this read until
some other IO is done. That other IO is what is using the buffer
provided here.

Anyway, just a suggestion if the user provided memory is a no-go, there
are certainly ways we can make this trivially work with memory you
cannot control that is received from inside the kernel, without a lot of
additions.

-- 
Jens Axboe

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-29 20:06       ` Jens Axboe
@ 2024-10-29 21:26         ` Jens Axboe
  2024-10-30  2:03           ` Ming Lei
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-29 21:26 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei, io-uring
  Cc: linux-block, Uday Shankar, Akilesh Kailash

On 10/29/24 2:06 PM, Jens Axboe wrote:
> On 10/29/24 1:18 PM, Jens Axboe wrote:
>> Now, this implementation requires a user buffer, and as far as I'm told,
>> you currently have kernel buffers on the ublk side. There's absolutely
>> no reason why kernel buffers cannot work, we'd most likely just need to
>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
>> hard is this requirement? Reason I ask is that it's much simpler to work
>> with userspace buffers. Yes the current implementation maps them
>> everytime, we could certainly change that, however I don't see this
>> being an issue. It's really no different than O_DIRECT, and you only
>> need to map them once for a read + whatever number of writes you'd need
>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
>> that buffer is unmapped. This is a notification for the application that
>> it's done using the buffer. For a pure kernel buffer, we'd either need
>> to be able to reference it (so that we KNOW it's not going away) and/or
>> have a callback associated with the buffer.
> 
> Just to expand on this - if a kernel buffer is absolutely required, for
> example if you're inheriting pages from the page cache or other
> locations you cannot control, we would need to add something ala the
> below:

Here's a more complete one, but utterly untested. But it does the same
thing, mapping a struct request, but it maps it to an io_rsrc_node which
in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
same type, only the destruction is different. Then the callback provided
needs to do something ala:

struct io_mapped_ubuf *imu = node->buf;

if (imu && refcount_dec_and_test(&imu->refs))
	kvfree(imu);

when it's done with the imu. Probably an rsrc helper should just be done
for that, but those are details.

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9621ba533b35..050868a4c9f1 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -8,6 +8,8 @@
 #include <linux/nospec.h>
 #include <linux/hugetlb.h>
 #include <linux/compat.h>
+#include <linux/bvec.h>
+#include <linux/blk-mq.h>
 #include <linux/io_uring.h>
 
 #include <uapi/linux/io_uring.h>
@@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
 		if (node->buf)
 			io_buffer_unmap(node->ctx, node);
 		break;
+	case IORING_RSRC_KBUFFER:
+		node->kbuf_fn(node);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
 	return ret;
 }
 
+struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
+					 struct request *req,
+					 void (*kbuf_fn)(struct io_rsrc_node *))
+{
+	struct io_mapped_ubuf *imu = NULL;
+	struct io_rsrc_node *node = NULL;
+	struct req_iterator rq_iter;
+	unsigned int offset;
+	struct bio_vec bv;
+	int nr_bvecs;
+
+	if (!bio_has_data(req->bio))
+		goto out;
+
+	nr_bvecs = 0;
+	rq_for_each_bvec(bv, req, rq_iter)
+		nr_bvecs++;
+	if (!nr_bvecs)
+		goto out;
+
+	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
+	if (!node)
+		goto out;
+	node->buf = NULL;
+
+	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
+	if (!imu)
+		goto out;
+
+	imu->ubuf = 0;
+	imu->len = 0;
+	if (req->bio != req->biotail) {
+		int idx = 0;
+
+		offset = 0;
+		rq_for_each_bvec(bv, req, rq_iter) {
+			imu->bvec[idx++] = bv;
+			imu->len += bv.bv_len;
+		}
+	} else {
+		struct bio *bio = req->bio;
+
+		offset = bio->bi_iter.bi_bvec_done;
+		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+		imu->len = imu->bvec[0].bv_len;
+	}
+	imu->nr_bvecs = nr_bvecs;
+	imu->folio_shift = PAGE_SHIFT;
+	refcount_set(&imu->refs, 1);
+	node->buf = imu;
+	node->kbuf_fn = kbuf_fn;
+	return node;
+out:
+	if (node)
+		io_put_rsrc_node(node);
+	kfree(imu);
+	return NULL;
+}
+
 int io_local_buf_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_ring_ctx *ctx = req->ctx;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index be9b490c400e..8d479f765fe0 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -11,6 +11,7 @@
 enum {
 	IORING_RSRC_FILE		= 0,
 	IORING_RSRC_BUFFER		= 1,
+	IORING_RSRC_KBUFFER		= 2,
 };
 
 struct io_rsrc_node {
@@ -19,6 +20,7 @@ struct io_rsrc_node {
 	u16				type;
 
 	u64 tag;
+	void (*kbuf_fn)(struct io_rsrc_node *);
 	union {
 		unsigned long file_ptr;
 		struct io_mapped_ubuf *buf;
@@ -52,6 +54,10 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
 			   struct io_mapped_ubuf *imu,
 			   u64 buf_addr, size_t len);
 
+struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
+					 struct request *req,
+					 void (*kbuf_fn)(struct io_rsrc_node *));
+
 int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
 int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
 int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,

-- 
Jens Axboe

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-29 16:47   ` Pavel Begunkov
@ 2024-10-30  0:45     ` Ming Lei
  2024-10-30  1:25       ` Pavel Begunkov
  0 siblings, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-30  0:45 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Jens Axboe, io-uring, linux-block, Uday Shankar, Akilesh Kailash

On Tue, Oct 29, 2024 at 04:47:59PM +0000, Pavel Begunkov wrote:
> On 10/25/24 13:22, Ming Lei wrote:
> ...
> > diff --git a/io_uring/rw.c b/io_uring/rw.c
> > index 4bc0d762627d..5a2025d48804 100644
> > --- a/io_uring/rw.c
> > +++ b/io_uring/rw.c
> > @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
> >   	if (io_rw_alloc_async(req))
> >   		return -ENOMEM;
> > -	if (!do_import || io_do_buffer_select(req))
> > +	if (!do_import || io_do_buffer_select(req) ||
> > +	    io_use_leased_grp_kbuf(req))
> >   		return 0;
> >   	rw = req->async_data;
> > @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
> >   		}
> >   		req_set_fail(req);
> >   		req->cqe.res = res;
> > +		if (io_use_leased_grp_kbuf(req)) {
> 
> That's what I'm talking about, we're pushing more and
> into the generic paths (or patching every single hot opcode
> there is). You said it's fine for ublk the way it was, i.e.
> without tracking, so let's then pretend it's a ublk specific
> feature, kill that addition and settle at that if that's the
> way to go.

As I mentioned before, it isn't ublk specific, zeroing is required
because the buffer is kernel buffer, that is all. Any other approach
needs this kind of handling too. The coming fuse zc need it.

And it can't be done in driver side, because driver has no idea how
to consume the kernel buffer.

Also it is only required in case of short read/recv, and it isn't
hot path, not mention it is just one check on request flag.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-30  0:45     ` Ming Lei
@ 2024-10-30  1:25       ` Pavel Begunkov
  2024-10-30  2:04         ` Ming Lei
  0 siblings, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-30  1:25 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/30/24 00:45, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 04:47:59PM +0000, Pavel Begunkov wrote:
>> On 10/25/24 13:22, Ming Lei wrote:
>> ...
>>> diff --git a/io_uring/rw.c b/io_uring/rw.c
>>> index 4bc0d762627d..5a2025d48804 100644
>>> --- a/io_uring/rw.c
>>> +++ b/io_uring/rw.c
>>> @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
>>>    	if (io_rw_alloc_async(req))
>>>    		return -ENOMEM;
>>> -	if (!do_import || io_do_buffer_select(req))
>>> +	if (!do_import || io_do_buffer_select(req) ||
>>> +	    io_use_leased_grp_kbuf(req))
>>>    		return 0;
>>>    	rw = req->async_data;
>>> @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
>>>    		}
>>>    		req_set_fail(req);
>>>    		req->cqe.res = res;
>>> +		if (io_use_leased_grp_kbuf(req)) {
>>
>> That's what I'm talking about, we're pushing more and
>> into the generic paths (or patching every single hot opcode
>> there is). You said it's fine for ublk the way it was, i.e.
>> without tracking, so let's then pretend it's a ublk specific
>> feature, kill that addition and settle at that if that's the
>> way to go.
> 
> As I mentioned before, it isn't ublk specific, zeroing is required
> because the buffer is kernel buffer, that is all. Any other approach
> needs this kind of handling too. The coming fuse zc need it.
> 
> And it can't be done in driver side, because driver has no idea how
> to consume the kernel buffer.
> 
> Also it is only required in case of short read/recv, and it isn't
> hot path, not mention it is just one check on request flag.

I agree, it's not hot, it's a failure path, and the recv side
is of medium hotness, but the main concern is that the feature
is too actively leaking into other requests.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-29 21:26         ` Jens Axboe
@ 2024-10-30  2:03           ` Ming Lei
  2024-10-30  2:43             ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-30  2:03 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash, ming.lei

On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
> On 10/29/24 2:06 PM, Jens Axboe wrote:
> > On 10/29/24 1:18 PM, Jens Axboe wrote:
> >> Now, this implementation requires a user buffer, and as far as I'm told,
> >> you currently have kernel buffers on the ublk side. There's absolutely
> >> no reason why kernel buffers cannot work, we'd most likely just need to
> >> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
> >> hard is this requirement? Reason I ask is that it's much simpler to work
> >> with userspace buffers. Yes the current implementation maps them
> >> everytime, we could certainly change that, however I don't see this
> >> being an issue. It's really no different than O_DIRECT, and you only
> >> need to map them once for a read + whatever number of writes you'd need
> >> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
> >> that buffer is unmapped. This is a notification for the application that
> >> it's done using the buffer. For a pure kernel buffer, we'd either need
> >> to be able to reference it (so that we KNOW it's not going away) and/or
> >> have a callback associated with the buffer.
> > 
> > Just to expand on this - if a kernel buffer is absolutely required, for
> > example if you're inheriting pages from the page cache or other
> > locations you cannot control, we would need to add something ala the
> > below:
> 
> Here's a more complete one, but utterly untested. But it does the same
> thing, mapping a struct request, but it maps it to an io_rsrc_node which
> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
> same type, only the destruction is different. Then the callback provided
> needs to do something ala:
> 
> struct io_mapped_ubuf *imu = node->buf;
> 
> if (imu && refcount_dec_and_test(&imu->refs))
> 	kvfree(imu);
> 
> when it's done with the imu. Probably an rsrc helper should just be done
> for that, but those are details.
> 
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index 9621ba533b35..050868a4c9f1 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -8,6 +8,8 @@
>  #include <linux/nospec.h>
>  #include <linux/hugetlb.h>
>  #include <linux/compat.h>
> +#include <linux/bvec.h>
> +#include <linux/blk-mq.h>
>  #include <linux/io_uring.h>
>  
>  #include <uapi/linux/io_uring.h>
> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
>  		if (node->buf)
>  			io_buffer_unmap(node->ctx, node);
>  		break;
> +	case IORING_RSRC_KBUFFER:
> +		node->kbuf_fn(node);
> +		break;

Here 'node' is freed later, and it may not work because ->imu is bound
with node.

>  	default:
>  		WARN_ON_ONCE(1);
>  		break;
> @@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
>  	return ret;
>  }
>  
> +struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
> +					 struct request *req,
> +					 void (*kbuf_fn)(struct io_rsrc_node *))
> +{
> +	struct io_mapped_ubuf *imu = NULL;
> +	struct io_rsrc_node *node = NULL;
> +	struct req_iterator rq_iter;
> +	unsigned int offset;
> +	struct bio_vec bv;
> +	int nr_bvecs;
> +
> +	if (!bio_has_data(req->bio))
> +		goto out;
> +
> +	nr_bvecs = 0;
> +	rq_for_each_bvec(bv, req, rq_iter)
> +		nr_bvecs++;
> +	if (!nr_bvecs)
> +		goto out;
> +
> +	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
> +	if (!node)
> +		goto out;
> +	node->buf = NULL;
> +
> +	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
> +	if (!imu)
> +		goto out;
> +
> +	imu->ubuf = 0;
> +	imu->len = 0;
> +	if (req->bio != req->biotail) {
> +		int idx = 0;
> +
> +		offset = 0;
> +		rq_for_each_bvec(bv, req, rq_iter) {
> +			imu->bvec[idx++] = bv;
> +			imu->len += bv.bv_len;
> +		}
> +	} else {
> +		struct bio *bio = req->bio;
> +
> +		offset = bio->bi_iter.bi_bvec_done;
> +		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
> +		imu->len = imu->bvec[0].bv_len;
> +	}
> +	imu->nr_bvecs = nr_bvecs;
> +	imu->folio_shift = PAGE_SHIFT;
> +	refcount_set(&imu->refs, 1);

One big problem is how to initialize the reference count, because this
buffer need to be used in the following more than one request. Without
one perfect counter, the buffer won't be freed in the exact time without
extra OP.

I think the reference should be in `node` which need to be live if any
consumer OP isn't completed.

> +	node->buf = imu;
> +	node->kbuf_fn = kbuf_fn;
> +	return node;

Also this function needs to register the buffer to table with one
pre-defined buf index, then the following request can use it by
the way of io_prep_rw_fixed().

If OP dependency can be avoided, I think this approach is fine,
otherwise I still suggest sqe group. Not only performance, but
application becomes too complicated.

We also we need to provide ->prep() callback for uring_cmd driver, so
that io_rsrc_map_request() can be called by driver in ->prep(),
meantime `io_ring_ctx` and `io_rsrc_node` need to be visible for driver.
What do you think of these kind of changes?


thanks, 
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-30  1:25       ` Pavel Begunkov
@ 2024-10-30  2:04         ` Ming Lei
  2024-10-31 13:16           ` Pavel Begunkov
  0 siblings, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-30  2:04 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Jens Axboe, io-uring, linux-block, Uday Shankar, Akilesh Kailash

On Wed, Oct 30, 2024 at 01:25:33AM +0000, Pavel Begunkov wrote:
> On 10/30/24 00:45, Ming Lei wrote:
> > On Tue, Oct 29, 2024 at 04:47:59PM +0000, Pavel Begunkov wrote:
> > > On 10/25/24 13:22, Ming Lei wrote:
> > > ...
> > > > diff --git a/io_uring/rw.c b/io_uring/rw.c
> > > > index 4bc0d762627d..5a2025d48804 100644
> > > > --- a/io_uring/rw.c
> > > > +++ b/io_uring/rw.c
> > > > @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
> > > >    	if (io_rw_alloc_async(req))
> > > >    		return -ENOMEM;
> > > > -	if (!do_import || io_do_buffer_select(req))
> > > > +	if (!do_import || io_do_buffer_select(req) ||
> > > > +	    io_use_leased_grp_kbuf(req))
> > > >    		return 0;
> > > >    	rw = req->async_data;
> > > > @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
> > > >    		}
> > > >    		req_set_fail(req);
> > > >    		req->cqe.res = res;
> > > > +		if (io_use_leased_grp_kbuf(req)) {
> > > 
> > > That's what I'm talking about, we're pushing more and
> > > into the generic paths (or patching every single hot opcode
> > > there is). You said it's fine for ublk the way it was, i.e.
> > > without tracking, so let's then pretend it's a ublk specific
> > > feature, kill that addition and settle at that if that's the
> > > way to go.
> > 
> > As I mentioned before, it isn't ublk specific, zeroing is required
> > because the buffer is kernel buffer, that is all. Any other approach
> > needs this kind of handling too. The coming fuse zc need it.
> > 
> > And it can't be done in driver side, because driver has no idea how
> > to consume the kernel buffer.
> > 
> > Also it is only required in case of short read/recv, and it isn't
> > hot path, not mention it is just one check on request flag.
> 
> I agree, it's not hot, it's a failure path, and the recv side
> is of medium hotness, but the main concern is that the feature
> is too actively leaking into other requests.
 
The point is that if you'd like to support kernel buffer. If yes, this
kind of change can't be avoided.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  2:03           ` Ming Lei
@ 2024-10-30  2:43             ` Jens Axboe
  2024-10-30  3:08               ` Ming Lei
  2024-10-31 13:25               ` Pavel Begunkov
  0 siblings, 2 replies; 41+ messages in thread
From: Jens Axboe @ 2024-10-30  2:43 UTC (permalink / raw)
  To: Ming Lei
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On 10/29/24 8:03 PM, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
>>>> Now, this implementation requires a user buffer, and as far as I'm told,
>>>> you currently have kernel buffers on the ublk side. There's absolutely
>>>> no reason why kernel buffers cannot work, we'd most likely just need to
>>>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
>>>> hard is this requirement? Reason I ask is that it's much simpler to work
>>>> with userspace buffers. Yes the current implementation maps them
>>>> everytime, we could certainly change that, however I don't see this
>>>> being an issue. It's really no different than O_DIRECT, and you only
>>>> need to map them once for a read + whatever number of writes you'd need
>>>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
>>>> that buffer is unmapped. This is a notification for the application that
>>>> it's done using the buffer. For a pure kernel buffer, we'd either need
>>>> to be able to reference it (so that we KNOW it's not going away) and/or
>>>> have a callback associated with the buffer.
>>>
>>> Just to expand on this - if a kernel buffer is absolutely required, for
>>> example if you're inheriting pages from the page cache or other
>>> locations you cannot control, we would need to add something ala the
>>> below:
>>
>> Here's a more complete one, but utterly untested. But it does the same
>> thing, mapping a struct request, but it maps it to an io_rsrc_node which
>> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
>> same type, only the destruction is different. Then the callback provided
>> needs to do something ala:
>>
>> struct io_mapped_ubuf *imu = node->buf;
>>
>> if (imu && refcount_dec_and_test(&imu->refs))
>> 	kvfree(imu);
>>
>> when it's done with the imu. Probably an rsrc helper should just be done
>> for that, but those are details.
>>
>> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
>> index 9621ba533b35..050868a4c9f1 100644
>> --- a/io_uring/rsrc.c
>> +++ b/io_uring/rsrc.c
>> @@ -8,6 +8,8 @@
>>  #include <linux/nospec.h>
>>  #include <linux/hugetlb.h>
>>  #include <linux/compat.h>
>> +#include <linux/bvec.h>
>> +#include <linux/blk-mq.h>
>>  #include <linux/io_uring.h>
>>  
>>  #include <uapi/linux/io_uring.h>
>> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
>>  		if (node->buf)
>>  			io_buffer_unmap(node->ctx, node);
>>  		break;
>> +	case IORING_RSRC_KBUFFER:
>> +		node->kbuf_fn(node);
>> +		break;
> 
> Here 'node' is freed later, and it may not work because ->imu is bound
> with node.

Not sure why this matters? imu can be bound to any node (and has a
separate ref), but the node will remain for as long as the submission
runs. It has to, because the last reference is put when submission of
all requests in that series ends.

>> @@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
>>  	return ret;
>>  }
>>  
>> +struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
>> +					 struct request *req,
>> +					 void (*kbuf_fn)(struct io_rsrc_node *))
>> +{
>> +	struct io_mapped_ubuf *imu = NULL;
>> +	struct io_rsrc_node *node = NULL;
>> +	struct req_iterator rq_iter;
>> +	unsigned int offset;
>> +	struct bio_vec bv;
>> +	int nr_bvecs;
>> +
>> +	if (!bio_has_data(req->bio))
>> +		goto out;
>> +
>> +	nr_bvecs = 0;
>> +	rq_for_each_bvec(bv, req, rq_iter)
>> +		nr_bvecs++;
>> +	if (!nr_bvecs)
>> +		goto out;
>> +
>> +	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
>> +	if (!node)
>> +		goto out;
>> +	node->buf = NULL;
>> +
>> +	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
>> +	if (!imu)
>> +		goto out;
>> +
>> +	imu->ubuf = 0;
>> +	imu->len = 0;
>> +	if (req->bio != req->biotail) {
>> +		int idx = 0;
>> +
>> +		offset = 0;
>> +		rq_for_each_bvec(bv, req, rq_iter) {
>> +			imu->bvec[idx++] = bv;
>> +			imu->len += bv.bv_len;
>> +		}
>> +	} else {
>> +		struct bio *bio = req->bio;
>> +
>> +		offset = bio->bi_iter.bi_bvec_done;
>> +		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
>> +		imu->len = imu->bvec[0].bv_len;
>> +	}
>> +	imu->nr_bvecs = nr_bvecs;
>> +	imu->folio_shift = PAGE_SHIFT;
>> +	refcount_set(&imu->refs, 1);
> 
> One big problem is how to initialize the reference count, because this
> buffer need to be used in the following more than one request. Without
> one perfect counter, the buffer won't be freed in the exact time without
> extra OP.

Each request that uses the node, will grab a reference to the node. The
node holds a reference to the buffer. So at least as the above works,
the buf will be put when submission ends, as that puts the node and
subsequently the one reference the imu has by default. It'll outlast any
of the requests that use it during submission, and there cannot be any
other users of it as it isn't discoverable outside of that.

> I think the reference should be in `node` which need to be live if any
> consumer OP isn't completed.

That is how it works... io_req_assign_rsrc_node() will assign a node to
a request, which will be there until the request completes.

>> +	node->buf = imu;
>> +	node->kbuf_fn = kbuf_fn;
>> +	return node;
> 
> Also this function needs to register the buffer to table with one
> pre-defined buf index, then the following request can use it by
> the way of io_prep_rw_fixed().

It should not register it with the table, the whole point is to keep
this node only per-submission discoverable. If you're grabbing random
request pages, then it very much is a bit finicky and needs to be of
limited scope.

Each request type would need to support it. For normal read/write, I'd
suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.

> If OP dependency can be avoided, I think this approach is fine,
> otherwise I still suggest sqe group. Not only performance, but
> application becomes too complicated.

You could avoid the OP dependency with just a flag, if you really wanted
to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
simpler than the sqe group scheme, which I'm a bit worried about as it's
a bit complicated in how deep it needs to go in the code. This one
stands alone, so I'd strongly encourage we pursue this a bit further and
iron out the kinks. Maybe it won't work in the end, I don't know, but it
seems pretty promising and it's soooo much simpler.

> We also we need to provide ->prep() callback for uring_cmd driver, so
> that io_rsrc_map_request() can be called by driver in ->prep(),
> meantime `io_ring_ctx` and `io_rsrc_node` need to be visible for driver.
> What do you think of these kind of changes?

io_ring_ctx is already visible in the normal system headers,
io_rsrc_node we certainly could make visible. That's not a big deal. It
makes a lot more sense to export than some of the other stuff we have in
there! As long as it's all nicely handled by helpers, then we'd be fine.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  2:43             ` Jens Axboe
@ 2024-10-30  3:08               ` Ming Lei
  2024-10-30  4:11                 ` Ming Lei
  2024-10-30 13:18                 ` Jens Axboe
  2024-10-31 13:25               ` Pavel Begunkov
  1 sibling, 2 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-30  3:08 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
> On 10/29/24 8:03 PM, Ming Lei wrote:
> > On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
> >> On 10/29/24 2:06 PM, Jens Axboe wrote:
> >>> On 10/29/24 1:18 PM, Jens Axboe wrote:
> >>>> Now, this implementation requires a user buffer, and as far as I'm told,
> >>>> you currently have kernel buffers on the ublk side. There's absolutely
> >>>> no reason why kernel buffers cannot work, we'd most likely just need to
> >>>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
> >>>> hard is this requirement? Reason I ask is that it's much simpler to work
> >>>> with userspace buffers. Yes the current implementation maps them
> >>>> everytime, we could certainly change that, however I don't see this
> >>>> being an issue. It's really no different than O_DIRECT, and you only
> >>>> need to map them once for a read + whatever number of writes you'd need
> >>>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
> >>>> that buffer is unmapped. This is a notification for the application that
> >>>> it's done using the buffer. For a pure kernel buffer, we'd either need
> >>>> to be able to reference it (so that we KNOW it's not going away) and/or
> >>>> have a callback associated with the buffer.
> >>>
> >>> Just to expand on this - if a kernel buffer is absolutely required, for
> >>> example if you're inheriting pages from the page cache or other
> >>> locations you cannot control, we would need to add something ala the
> >>> below:
> >>
> >> Here's a more complete one, but utterly untested. But it does the same
> >> thing, mapping a struct request, but it maps it to an io_rsrc_node which
> >> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
> >> same type, only the destruction is different. Then the callback provided
> >> needs to do something ala:
> >>
> >> struct io_mapped_ubuf *imu = node->buf;
> >>
> >> if (imu && refcount_dec_and_test(&imu->refs))
> >> 	kvfree(imu);
> >>
> >> when it's done with the imu. Probably an rsrc helper should just be done
> >> for that, but those are details.
> >>
> >> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> >> index 9621ba533b35..050868a4c9f1 100644
> >> --- a/io_uring/rsrc.c
> >> +++ b/io_uring/rsrc.c
> >> @@ -8,6 +8,8 @@
> >>  #include <linux/nospec.h>
> >>  #include <linux/hugetlb.h>
> >>  #include <linux/compat.h>
> >> +#include <linux/bvec.h>
> >> +#include <linux/blk-mq.h>
> >>  #include <linux/io_uring.h>
> >>  
> >>  #include <uapi/linux/io_uring.h>
> >> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
> >>  		if (node->buf)
> >>  			io_buffer_unmap(node->ctx, node);
> >>  		break;
> >> +	case IORING_RSRC_KBUFFER:
> >> +		node->kbuf_fn(node);
> >> +		break;
> > 
> > Here 'node' is freed later, and it may not work because ->imu is bound
> > with node.
> 
> Not sure why this matters? imu can be bound to any node (and has a
> separate ref), but the node will remain for as long as the submission
> runs. It has to, because the last reference is put when submission of
> all requests in that series ends.

Fine, how is the imu found from OP? Not see related code to add the
allocated node into submission_state or ctx->buf_table.

io_rsrc_node_lookup() needs to find the buffer any way, right?

> 
> >> @@ -1070,6 +1075,65 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
> >>  	return ret;
> >>  }
> >>  
> >> +struct io_rsrc_node *io_rsrc_map_request(struct io_ring_ctx *ctx,
> >> +					 struct request *req,
> >> +					 void (*kbuf_fn)(struct io_rsrc_node *))
> >> +{
> >> +	struct io_mapped_ubuf *imu = NULL;
> >> +	struct io_rsrc_node *node = NULL;
> >> +	struct req_iterator rq_iter;
> >> +	unsigned int offset;
> >> +	struct bio_vec bv;
> >> +	int nr_bvecs;
> >> +
> >> +	if (!bio_has_data(req->bio))
> >> +		goto out;
> >> +
> >> +	nr_bvecs = 0;
> >> +	rq_for_each_bvec(bv, req, rq_iter)
> >> +		nr_bvecs++;
> >> +	if (!nr_bvecs)
> >> +		goto out;
> >> +
> >> +	node = io_rsrc_node_alloc(ctx, IORING_RSRC_KBUFFER);
> >> +	if (!node)
> >> +		goto out;
> >> +	node->buf = NULL;
> >> +
> >> +	imu = kvmalloc(struct_size(imu, bvec, nr_bvecs), GFP_NOIO);
> >> +	if (!imu)
> >> +		goto out;
> >> +
> >> +	imu->ubuf = 0;
> >> +	imu->len = 0;
> >> +	if (req->bio != req->biotail) {
> >> +		int idx = 0;
> >> +
> >> +		offset = 0;
> >> +		rq_for_each_bvec(bv, req, rq_iter) {
> >> +			imu->bvec[idx++] = bv;
> >> +			imu->len += bv.bv_len;
> >> +		}
> >> +	} else {
> >> +		struct bio *bio = req->bio;
> >> +
> >> +		offset = bio->bi_iter.bi_bvec_done;
> >> +		imu->bvec[0] = *__bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
> >> +		imu->len = imu->bvec[0].bv_len;
> >> +	}
> >> +	imu->nr_bvecs = nr_bvecs;
> >> +	imu->folio_shift = PAGE_SHIFT;
> >> +	refcount_set(&imu->refs, 1);
> > 
> > One big problem is how to initialize the reference count, because this
> > buffer need to be used in the following more than one request. Without
> > one perfect counter, the buffer won't be freed in the exact time without
> > extra OP.
> 
> Each request that uses the node, will grab a reference to the node. The
> node holds a reference to the buffer. So at least as the above works,
> the buf will be put when submission ends, as that puts the node and
> subsequently the one reference the imu has by default. It'll outlast any
> of the requests that use it during submission, and there cannot be any
> other users of it as it isn't discoverable outside of that.

OK, if the node/buffer is only looked up in ->prep(), this way works.

> 
> > I think the reference should be in `node` which need to be live if any
> > consumer OP isn't completed.
> 
> That is how it works... io_req_assign_rsrc_node() will assign a node to
> a request, which will be there until the request completes.
> 
> >> +	node->buf = imu;
> >> +	node->kbuf_fn = kbuf_fn;
> >> +	return node;
> > 
> > Also this function needs to register the buffer to table with one
> > pre-defined buf index, then the following request can use it by
> > the way of io_prep_rw_fixed().
> 
> It should not register it with the table, the whole point is to keep
> this node only per-submission discoverable. If you're grabbing random
> request pages, then it very much is a bit finicky and needs to be of
> limited scope.

There can be more than 1 buffer uses in single submission, can you share
how OP finds the specific buffer with ->buf_index from submission state?
This part is missed in your patch.

> 
> Each request type would need to support it. For normal read/write, I'd
> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.
> 
> > If OP dependency can be avoided, I think this approach is fine,
> > otherwise I still suggest sqe group. Not only performance, but
> > application becomes too complicated.
> 
> You could avoid the OP dependency with just a flag, if you really wanted
> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot

Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
syscall makes application too complicated, and IO latency is increased.

> simpler than the sqe group scheme, which I'm a bit worried about as it's
> a bit complicated in how deep it needs to go in the code. This one
> stands alone, so I'd strongly encourage we pursue this a bit further and
> iron out the kinks. Maybe it won't work in the end, I don't know, but it
> seems pretty promising and it's soooo much simpler.

If buffer register and lookup are always done in ->prep(), OP dependency
may be avoided.

> 
> > We also we need to provide ->prep() callback for uring_cmd driver, so
> > that io_rsrc_map_request() can be called by driver in ->prep(),
> > meantime `io_ring_ctx` and `io_rsrc_node` need to be visible for driver.
> > What do you think of these kind of changes?
> 
> io_ring_ctx is already visible in the normal system headers,
> io_rsrc_node we certainly could make visible. That's not a big deal. It
> makes a lot more sense to export than some of the other stuff we have in
> there! As long as it's all nicely handled by helpers, then we'd be fine.

OK.


thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  3:08               ` Ming Lei
@ 2024-10-30  4:11                 ` Ming Lei
  2024-10-30 13:20                   ` Jens Axboe
  2024-10-30 13:18                 ` Jens Axboe
  1 sibling, 1 reply; 41+ messages in thread
From: Ming Lei @ 2024-10-30  4:11 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:

...

> > You could avoid the OP dependency with just a flag, if you really wanted
> > to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
> 
> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
> syscall makes application too complicated, and IO latency is increased.
> 
> > simpler than the sqe group scheme, which I'm a bit worried about as it's
> > a bit complicated in how deep it needs to go in the code. This one
> > stands alone, so I'd strongly encourage we pursue this a bit further and
> > iron out the kinks. Maybe it won't work in the end, I don't know, but it
> > seems pretty promising and it's soooo much simpler.
> 
> If buffer register and lookup are always done in ->prep(), OP dependency
> may be avoided.

Even all buffer register and lookup are done in ->prep(), OP dependency
still can't be avoided completely, such as:

1) two local buffers for sending to two sockets

2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]  

3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]

group 1 and group 2 needs to be linked, but inside each group, the two
sends may be submitted in parallel.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  3:08               ` Ming Lei
  2024-10-30  4:11                 ` Ming Lei
@ 2024-10-30 13:18                 ` Jens Axboe
  1 sibling, 0 replies; 41+ messages in thread
From: Jens Axboe @ 2024-10-30 13:18 UTC (permalink / raw)
  To: Ming Lei
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On 10/29/24 9:08 PM, Ming Lei wrote:
> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
>> On 10/29/24 8:03 PM, Ming Lei wrote:
>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
>>>>>> Now, this implementation requires a user buffer, and as far as I'm told,
>>>>>> you currently have kernel buffers on the ublk side. There's absolutely
>>>>>> no reason why kernel buffers cannot work, we'd most likely just need to
>>>>>> add a IORING_RSRC_KBUFFER type to handle that. My question here is how
>>>>>> hard is this requirement? Reason I ask is that it's much simpler to work
>>>>>> with userspace buffers. Yes the current implementation maps them
>>>>>> everytime, we could certainly change that, however I don't see this
>>>>>> being an issue. It's really no different than O_DIRECT, and you only
>>>>>> need to map them once for a read + whatever number of writes you'd need
>>>>>> to do. If a 'tag' is provided for LOCAL_BUF, it'll post a CQE whenever
>>>>>> that buffer is unmapped. This is a notification for the application that
>>>>>> it's done using the buffer. For a pure kernel buffer, we'd either need
>>>>>> to be able to reference it (so that we KNOW it's not going away) and/or
>>>>>> have a callback associated with the buffer.
>>>>>
>>>>> Just to expand on this - if a kernel buffer is absolutely required, for
>>>>> example if you're inheriting pages from the page cache or other
>>>>> locations you cannot control, we would need to add something ala the
>>>>> below:
>>>>
>>>> Here's a more complete one, but utterly untested. But it does the same
>>>> thing, mapping a struct request, but it maps it to an io_rsrc_node which
>>>> in turn has an io_mapped_ubuf in it. Both BUFFER and KBUFFER use the
>>>> same type, only the destruction is different. Then the callback provided
>>>> needs to do something ala:
>>>>
>>>> struct io_mapped_ubuf *imu = node->buf;
>>>>
>>>> if (imu && refcount_dec_and_test(&imu->refs))
>>>> 	kvfree(imu);
>>>>
>>>> when it's done with the imu. Probably an rsrc helper should just be done
>>>> for that, but those are details.
>>>>
>>>> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
>>>> index 9621ba533b35..050868a4c9f1 100644
>>>> --- a/io_uring/rsrc.c
>>>> +++ b/io_uring/rsrc.c
>>>> @@ -8,6 +8,8 @@
>>>>  #include <linux/nospec.h>
>>>>  #include <linux/hugetlb.h>
>>>>  #include <linux/compat.h>
>>>> +#include <linux/bvec.h>
>>>> +#include <linux/blk-mq.h>
>>>>  #include <linux/io_uring.h>
>>>>  
>>>>  #include <uapi/linux/io_uring.h>
>>>> @@ -474,6 +476,9 @@ void io_free_rsrc_node(struct io_rsrc_node *node)
>>>>  		if (node->buf)
>>>>  			io_buffer_unmap(node->ctx, node);
>>>>  		break;
>>>> +	case IORING_RSRC_KBUFFER:
>>>> +		node->kbuf_fn(node);
>>>> +		break;
>>>
>>> Here 'node' is freed later, and it may not work because ->imu is bound
>>> with node.
>>
>> Not sure why this matters? imu can be bound to any node (and has a
>> separate ref), but the node will remain for as long as the submission
>> runs. It has to, because the last reference is put when submission of
>> all requests in that series ends.
> 
> Fine, how is the imu found from OP? Not see related code to add the
> allocated node into submission_state or ctx->buf_table.

Just didn't do that, see the POC test patch I did for rw for just
grabbing the fixed one in io_submit_state. Really depends on how many
we'd need - if it's just 1 per submit, then whatever I had would work
and the OP just needs to know to look there.

> io_rsrc_node_lookup() needs to find the buffer any way, right?

That's for table lookup, for the POC there's just the one node hence
nothing really to lookup. It's either rsrc_empty_node, or a valid node.

>>> I think the reference should be in `node` which need to be live if any
>>> consumer OP isn't completed.
>>
>> That is how it works... io_req_assign_rsrc_node() will assign a node to
>> a request, which will be there until the request completes.
>>
>>>> +	node->buf = imu;
>>>> +	node->kbuf_fn = kbuf_fn;
>>>> +	return node;
>>>
>>> Also this function needs to register the buffer to table with one
>>> pre-defined buf index, then the following request can use it by
>>> the way of io_prep_rw_fixed().
>>
>> It should not register it with the table, the whole point is to keep
>> this node only per-submission discoverable. If you're grabbing random
>> request pages, then it very much is a bit finicky and needs to be of
>> limited scope.
> 
> There can be more than 1 buffer uses in single submission, can you share
> how OP finds the specific buffer with ->buf_index from submission state?
> This part is missed in your patch.

If we need more than one, then yeah we'd need an index rather than just
a single pointer. Doesn't really change the mechanics, you'd need to
provide an index like with ->buf_index.

It's not missed in the patch, it's really just a POC patch to show how
it can be done, by no means a done solution! But we can certainly get it
there.

>> Each request type would need to support it. For normal read/write, I'd
>> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.
>>
>>> If OP dependency can be avoided, I think this approach is fine,
>>> otherwise I still suggest sqe group. Not only performance, but
>>> application becomes too complicated.
>>
>> You could avoid the OP dependency with just a flag, if you really wanted
>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
> 
> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
> syscall makes application too complicated, and IO latency is increased.

It's really not a big deal to prepare-and-submit the dependencies
separately, but at the same time, I don't think it'd be a bad idea to
support eg 2 local buffers per submit. Or whatever we need there.

This is more from a usability point of view, because the rest of the
machinery is so much more expensive than a single extra syscall that the
latter is not goinbg to affect IO latencies at all.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  4:11                 ` Ming Lei
@ 2024-10-30 13:20                   ` Jens Axboe
  2024-10-31  2:53                     ` Ming Lei
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-30 13:20 UTC (permalink / raw)
  To: Ming Lei
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On 10/29/24 10:11 PM, Ming Lei wrote:
> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
> 
> ...
> 
>>> You could avoid the OP dependency with just a flag, if you really wanted
>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
>>
>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
>> syscall makes application too complicated, and IO latency is increased.
>>
>>> simpler than the sqe group scheme, which I'm a bit worried about as it's
>>> a bit complicated in how deep it needs to go in the code. This one
>>> stands alone, so I'd strongly encourage we pursue this a bit further and
>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it
>>> seems pretty promising and it's soooo much simpler.
>>
>> If buffer register and lookup are always done in ->prep(), OP dependency
>> may be avoided.
> 
> Even all buffer register and lookup are done in ->prep(), OP dependency
> still can't be avoided completely, such as:
> 
> 1) two local buffers for sending to two sockets
> 
> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]  
> 
> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]
> 
> group 1 and group 2 needs to be linked, but inside each group, the two
> sends may be submitted in parallel.

That is where groups of course work, in that you can submit 2 groups and
have each member inside each group run independently. But I do think we
need to decouple the local buffer and group concepts entirely. For the
first step, getting local buffers working with zero copy would be ideal,
and then just live with the fact that group 1 needs to be submitted
first and group 2 once the first ones are done.

Once local buffers are done, we can look at doing the sqe grouping in a
nice way. I do think it's a potentially powerful concept, but we're
going to make a lot more progress on this issue if we carefully separate
dependencies and get each of them done separately.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30 13:20                   ` Jens Axboe
@ 2024-10-31  2:53                     ` Ming Lei
  2024-10-31 13:35                       ` Jens Axboe
  2024-10-31 13:42                       ` Pavel Begunkov
  0 siblings, 2 replies; 41+ messages in thread
From: Ming Lei @ 2024-10-31  2:53 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash, ming.lei

On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote:
> On 10/29/24 10:11 PM, Ming Lei wrote:
> > On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
> >> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
> > 
> > ...
> > 
> >>> You could avoid the OP dependency with just a flag, if you really wanted
> >>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
> >>
> >> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
> >> syscall makes application too complicated, and IO latency is increased.
> >>
> >>> simpler than the sqe group scheme, which I'm a bit worried about as it's
> >>> a bit complicated in how deep it needs to go in the code. This one
> >>> stands alone, so I'd strongly encourage we pursue this a bit further and
> >>> iron out the kinks. Maybe it won't work in the end, I don't know, but it
> >>> seems pretty promising and it's soooo much simpler.
> >>
> >> If buffer register and lookup are always done in ->prep(), OP dependency
> >> may be avoided.
> > 
> > Even all buffer register and lookup are done in ->prep(), OP dependency
> > still can't be avoided completely, such as:
> > 
> > 1) two local buffers for sending to two sockets
> > 
> > 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]  
> > 
> > 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]
> > 
> > group 1 and group 2 needs to be linked, but inside each group, the two
> > sends may be submitted in parallel.
> 
> That is where groups of course work, in that you can submit 2 groups and
> have each member inside each group run independently. But I do think we
> need to decouple the local buffer and group concepts entirely. For the
> first step, getting local buffers working with zero copy would be ideal,
> and then just live with the fact that group 1 needs to be submitted
> first and group 2 once the first ones are done.

IMHO, it is one _kernel_ zero copy(_performance_) feature, which often imply:

- better performance expectation
- no big change on existed application for using this feature

Application developer is less interested in sort of crippled or immature
feature, especially need big change on existed code logic(then two code paths
need to maintain), with potential performance regression.

With sqe group and REQ_F_GROUP_KBUF, application just needs lines of
code change for using the feature, and it is pretty easy to evaluate
the feature since no any extra logic change & no extra syscall/wait
introduced. The whole patchset has been mature enough, unfortunately
blocked without obvious reasons.

> 
> Once local buffers are done, we can look at doing the sqe grouping in a
> nice way. I do think it's a potentially powerful concept, but we're
> going to make a lot more progress on this issue if we carefully separate
> dependencies and get each of them done separately.

One fundamental difference between local buffer and REQ_F_GROUP_KBUF is

- local buffer has to be provided and used in ->prep()
- REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()

The only common code could be one buffer abstraction for OP to use, but
still used differently, ->prep() vs. ->issue().

So it is hard to call it decouple, especially REQ_F_GROUP_KBUF has been
simple enough, and the main change is to import it in OP code.

Local buffer is one smart idea, but I hope the following things may be
settled first:

1) is it generic enough to just allow to provide local buffer during
->prep()?

- this way actually becomes sync & nowait IO, instead AIO, and has been
  one strong constraint from UAPI viewpoint.

- Driver may need to wait until some data comes, then return & provide
the buffer with data, and local buffer can't cover this case

2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not,
any idea to call into driver for leasing the kernel buffer to io_uring?

3) in OP code, how to differentiate normal userspace buffer select with
local buffer? And how does OP know if normal buffer select or local
kernel buffer should be used? Some OP may want to use normal buffer
select instead of local buffer, others may want to use local buffer.

4) arbitrary numbers of local buffer needs to be supported, since IO
often comes at batch, it shouldn't be hard to support it by adding xarray
to submission state, what do you think of this added complexity? Without
supporting arbitrary number of local buffers, performance can be just
bad, it doesn't make sense as zc viewpoint. Meantime as number of local
buffer is increased, more rsrc_node & imu allocation is introduced, this
still may degrade perf a bit.

5) io_rsrc_node becomes part of interface between io_uring and driver
for releasing the leased buffer, so extra data has to be
added to `io_rsrc_node` for driver use.

However, from above, the following can be concluded at least:

- it isn't generic enough, #1, #3
- it still need sqe group
- it is much more complicated than REQ_F_GROUP_KBUF only
- it can't be more efficient


Thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-30  2:04         ` Ming Lei
@ 2024-10-31 13:16           ` Pavel Begunkov
  2024-11-01  1:04             ` Ming Lei
  0 siblings, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-31 13:16 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/30/24 02:04, Ming Lei wrote:
> On Wed, Oct 30, 2024 at 01:25:33AM +0000, Pavel Begunkov wrote:
>> On 10/30/24 00:45, Ming Lei wrote:
>>> On Tue, Oct 29, 2024 at 04:47:59PM +0000, Pavel Begunkov wrote:
>>>> On 10/25/24 13:22, Ming Lei wrote:
>>>> ...
>>>>> diff --git a/io_uring/rw.c b/io_uring/rw.c
>>>>> index 4bc0d762627d..5a2025d48804 100644
>>>>> --- a/io_uring/rw.c
>>>>> +++ b/io_uring/rw.c
>>>>> @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
>>>>>     	if (io_rw_alloc_async(req))
>>>>>     		return -ENOMEM;
>>>>> -	if (!do_import || io_do_buffer_select(req))
>>>>> +	if (!do_import || io_do_buffer_select(req) ||
>>>>> +	    io_use_leased_grp_kbuf(req))
>>>>>     		return 0;
>>>>>     	rw = req->async_data;
>>>>> @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
>>>>>     		}
>>>>>     		req_set_fail(req);
>>>>>     		req->cqe.res = res;
>>>>> +		if (io_use_leased_grp_kbuf(req)) {
>>>>
>>>> That's what I'm talking about, we're pushing more and
>>>> into the generic paths (or patching every single hot opcode
>>>> there is). You said it's fine for ublk the way it was, i.e.
>>>> without tracking, so let's then pretend it's a ublk specific
>>>> feature, kill that addition and settle at that if that's the
>>>> way to go.
>>>
>>> As I mentioned before, it isn't ublk specific, zeroing is required
>>> because the buffer is kernel buffer, that is all. Any other approach
>>> needs this kind of handling too. The coming fuse zc need it.
>>>
>>> And it can't be done in driver side, because driver has no idea how
>>> to consume the kernel buffer.
>>>
>>> Also it is only required in case of short read/recv, and it isn't
>>> hot path, not mention it is just one check on request flag.
>>
>> I agree, it's not hot, it's a failure path, and the recv side
>> is of medium hotness, but the main concern is that the feature
>> is too actively leaking into other requests.
>   
> The point is that if you'd like to support kernel buffer. If yes, this
> kind of change can't be avoided.

There is no guarantee with the patchset that there will be any IO done
with that buffer, e.g. place a nop into the group, and even then you
have offsets and length, so it's not clear what the zeroying is supposed
to achieve. Either the buffer comes fully "initialised", i.e. free of
kernel private data, or we need to track what parts of the buffer were
used.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-30  2:43             ` Jens Axboe
  2024-10-30  3:08               ` Ming Lei
@ 2024-10-31 13:25               ` Pavel Begunkov
  2024-10-31 14:29                 ` Jens Axboe
  1 sibling, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-31 13:25 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/30/24 02:43, Jens Axboe wrote:
> On 10/29/24 8:03 PM, Ming Lei wrote:
>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
...
>>> +	node->buf = imu;
>>> +	node->kbuf_fn = kbuf_fn;
>>> +	return node;
>>
>> Also this function needs to register the buffer to table with one
>> pre-defined buf index, then the following request can use it by
>> the way of io_prep_rw_fixed().
> 
> It should not register it with the table, the whole point is to keep
> this node only per-submission discoverable. If you're grabbing random
> request pages, then it very much is a bit finicky 

Registering it into the table has enough of design and flexibility
merits: error handling, allowing any type of dependencies of requests
by handling it in the user space, etc.

> and needs to be of
> limited scope.

And I don't think we can force it, neither with limiting exposure to
submission only nor with the Ming's group based approach. The user can
always queue a request that will never complete and/or by using
DEFER_TASKRUN and just not letting it run. In this sense it might be
dangerous to block requests of an average system shared block device,
but if it's fine with ublk it sounds like it should be fine for any of
the aforementioned approaches.

> Each request type would need to support it. For normal read/write, I'd
> suggest just adding IORING_OP_READ_LOCAL and WRITE_LOCAL to do that.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31  2:53                     ` Ming Lei
@ 2024-10-31 13:35                       ` Jens Axboe
  2024-10-31 15:07                         ` Jens Axboe
  2024-11-01  1:39                         ` Ming Lei
  2024-10-31 13:42                       ` Pavel Begunkov
  1 sibling, 2 replies; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 13:35 UTC (permalink / raw)
  To: Ming Lei
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

On 10/30/24 8:53 PM, Ming Lei wrote:
> On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote:
>> On 10/29/24 10:11 PM, Ming Lei wrote:
>>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
>>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
>>>
>>> ...
>>>
>>>>> You could avoid the OP dependency with just a flag, if you really wanted
>>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
>>>>
>>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
>>>> syscall makes application too complicated, and IO latency is increased.
>>>>
>>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's
>>>>> a bit complicated in how deep it needs to go in the code. This one
>>>>> stands alone, so I'd strongly encourage we pursue this a bit further and
>>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it
>>>>> seems pretty promising and it's soooo much simpler.
>>>>
>>>> If buffer register and lookup are always done in ->prep(), OP dependency
>>>> may be avoided.
>>>
>>> Even all buffer register and lookup are done in ->prep(), OP dependency
>>> still can't be avoided completely, such as:
>>>
>>> 1) two local buffers for sending to two sockets
>>>
>>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]  
>>>
>>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]
>>>
>>> group 1 and group 2 needs to be linked, but inside each group, the two
>>> sends may be submitted in parallel.
>>
>> That is where groups of course work, in that you can submit 2 groups and
>> have each member inside each group run independently. But I do think we
>> need to decouple the local buffer and group concepts entirely. For the
>> first step, getting local buffers working with zero copy would be ideal,
>> and then just live with the fact that group 1 needs to be submitted
>> first and group 2 once the first ones are done.
> 
> IMHO, it is one _kernel_ zero copy(_performance_) feature, which often
> imply:
> 
> - better performance expectation
> - no big change on existed application for using this feature

For #2, really depends on what it is. But ideally, yes, agree.

> Application developer is less interested in sort of crippled or
> immature feature, especially need big change on existed code
> logic(then two code paths need to maintain), with potential
> performance regression.
> 
> With sqe group and REQ_F_GROUP_KBUF, application just needs lines of
> code change for using the feature, and it is pretty easy to evaluate
> the feature since no any extra logic change & no extra syscall/wait
> introduced. The whole patchset has been mature enough, unfortunately
> blocked without obvious reasons.

Let me tell you where I'm coming from. If you might recall, I originated
the whole grouping idea. Didn't complete it, but it's essentially the
same concept as REQ_F_GROUP_KBUF in that you have some dependents on a
leader, and the dependents can run in parallel rather than being
serialized by links. I'm obviously in favor of this concept, but I want
to see it being done in such a way that it's actually something we can
reason about and maintain. You want it for zero copy, which makes sense,
but I also want to ensure it's a CLEAN implementation that doesn't have
tangles in places it doesn't need to.

You seem to be very hard to convince of making ANY changes at all. In
your mind the whole thing is done, and it's being "blocked without
obvious reason". It's not being blocked at all, I've been diligently
trying to work with you recently on getting this done. I'm at least as
interested as you in getting this work done. But I want you to work with
me a bit on some items so we can get it into a shape where I'm happy
with it, and I can maintain it going forward.

So, please, rather than dig your heels in all the time, have an open
mind on how we can accomplish some of these things.


>> Once local buffers are done, we can look at doing the sqe grouping in a
>> nice way. I do think it's a potentially powerful concept, but we're
>> going to make a lot more progress on this issue if we carefully separate
>> dependencies and get each of them done separately.
> 
> One fundamental difference between local buffer and REQ_F_GROUP_KBUF is
> 
> - local buffer has to be provided and used in ->prep()
> - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()

It does not - the POC certainly did it in ->prep(), but all it really
cares about is having the ring locked. ->prep() always has that,
->issue() _normally_ has that, unless it ends up in an io-wq punt.

You can certainly do it in ->issue() and still have it be per-submit,
the latter which I care about for safety reasons. This just means it has
to be provided in the _first_ issue, and that IOSQE_ASYNC must not be
set on the request. I think that restriction is fine, nobody should
really be using IOSQE_ASYNC anyway.

I think the original POC maybe did more harm than good in that it was
too simplistic, and you seem too focused on the limits of that. So let
me detail what it actually could look like. We have io_submit_state in
io_ring_ctx. This is per-submit private data, it's initialized and
flushed for each io_uring_enter(2) that submits requests.

We have a registered file and buffer table, file_table and buf_table.
These have life times that are dependent on the ring and
registration/unregistration. We could have a local_table. This one
should be setup by some register command, eg reserving X slots for that.
At the end of submit, we'd flush this table, putting nodes in there.
Requests can look at the table in either prep or issue, and find buffer
nodes. If a request uses one of these, it grabs a ref and hence has it
available until it puts it at IO completion time. When a single submit
context is done, local_table is iterated (if any entries exist) and
existing nodes cleared and put.

That provides a similar table to buf_table, but with a lifetime of a
submit. Just like local buf. Yes it would not be private to a single
group, it'd be private to a submit which has potentially bigger scope,
but that should not matter.

That should give you exactly what you need, if you use
IORING_RSRC_KBUFFER in the local_table. But it could even be used for
IORING_RSRC_BUFFER as well, providing buffers for a single submit cycle
as well.

Rather than do something completely on the side with
io_uring_kernel_buf, we can use io_rsrc_node and io_mapped_ubuf for
this. Which goes back to my initial rant in this email - use EXISTING
infrastructure for these things. A big part of why this isn't making
progress is that a lot of things are done on the side rather than being
integrated. Then you need extra io_kiocb members, where it really should
just be using io_rsrc_node and get everything else for free. No need to
do special checking and putting seperately, it's a resource node just
like any other resource node we already support.

> The only common code could be one buffer abstraction for OP to use, but
> still used differently, ->prep() vs. ->issue().

With the prep vs issue thing not being an issue, then it sounds like we
fully agree that a) it should be one buffer abstraction, and b) we
already have the infrastructure for this. We just need to add
IORING_RSRC_KBUFFER, which I already posted some POC code for.

> So it is hard to call it decouple, especially REQ_F_GROUP_KBUF has been
> simple enough, and the main change is to import it in OP code.
> 
> Local buffer is one smart idea, but I hope the following things may be
> settled first:
> 
> 1) is it generic enough to just allow to provide local buffer during
> ->prep()?
> 
> - this way actually becomes sync & nowait IO, instead AIO, and has been
>   one strong constraint from UAPI viewpoint.
> 
> - Driver may need to wait until some data comes, then return & provide
> the buffer with data, and local buffer can't cover this case

This should be moot now with the above explanation.

> 2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not,
> any idea to call into driver for leasing the kernel buffer to io_uring?

Ditto

> 3) in OP code, how to differentiate normal userspace buffer select with
> local buffer? And how does OP know if normal buffer select or local
> kernel buffer should be used? Some OP may want to use normal buffer
> select instead of local buffer, others may want to use local buffer.

Yes this is a key question we need to figure out. Right now using fixed
buffers needs to set ->buf_index, and the OP needs to know aboout it.
let's not confuse it with buffer select, IOSQE_BUFFER_SELECT, as that's
for provided buffers.

> 4) arbitrary numbers of local buffer needs to be supported, since IO
> often comes at batch, it shouldn't be hard to support it by adding xarray
> to submission state, what do you think of this added complexity? Without
> supporting arbitrary number of local buffers, performance can be just
> bad, it doesn't make sense as zc viewpoint. Meantime as number of local
> buffer is increased, more rsrc_node & imu allocation is introduced, this
> still may degrade perf a bit.

That's fine, we just need to reserve space for them upfront. I don't
like the xarray idea, as:

1) xarray does internal locking, which we don't need here
2) The existing io_rsrc_data table is what is being used for
   io_rsrc_node management now. This would introduce another method for
   that.

I do want to ensure that io_submit_state_finish() is still low overhead,
and using an xarray would be more expensive than just doing:

if (ctx->local_table.nr)
	flush_nodes();

as you'd need to always setup an iterator. But this isn't really THAT
important. The benefit of using an xarray would be that we'd get
flexible storing of members without needing pre-registration, obviously.

> 5) io_rsrc_node becomes part of interface between io_uring and driver
> for releasing the leased buffer, so extra data has to be
> added to `io_rsrc_node` for driver use.

That's fine imho. The KBUFFER addition already adds the callback, we can
add a data thing too. The kernel you based your code on has an
io_rsrc_node that is 48 bytes in size, and my current tree has one where
it's 32 bytes in size after the table rework. If we have to add 2x8b to
support this, that's NOT a big deal and we just end up with a node
that's the same size as before.

And we get rid of this odd intermediate io_uring_kernel_buf struct,
which is WAY bigger anyway, and requires TWO allocations where the
existing io_mapped_ubuf embeds the bvec. I'd argue two vs one allocs is
a much bigger deal for performance reasons.

As a final note, one thing I mentioned in an earlier email is that part
of the issue here is that there are several things that need ironing
out, and they are actually totally separate. One is the buffer side,
which this email mostly deals with, the other one is the grouping
concept.

For the sqe grouping, one sticking point has been using that last
sqe->flags bit. I was thinking about this last night, and what if we got
away from using a flag entirely? At some point io_uring needs to deal
with this flag limitation, but it's arguably a large effort, and I'd
greatly prefer not having to paper over it to shove in grouped SQEs.

So... what if we simply added a new OP, IORING_OP_GROUP_START, or
something like that. Hence instead of having a designated group leader
bit for an OP, eg:

sqe = io_uring_get_sqe(ring);
io_uring_prep_read(sqe, ...);
sqe->flags |= IOSQE_GROUP_BIT;

you'd do:

sqe = io_uring_get_sqe(ring);
io_uring_prep_group_start(sqe, ...);
sqe->flags |= IOSQE_IO_LINK;

sqe = io_uring_get_sqe(ring);
io_uring_prep_read(sqe, ...);

which would be the equivalent transformation - the read would be the
group leader as it's the first member of that chain. The read should set
IOSQE_IO_LINK for as long as it has members. The members in that group
would NOT be serialized. They would use IOSQE_IO_LINK purely to be part
of that group, but IOSQE_IO_LINK would not cause them to be serialized.
Hence the link just implies membership, not ordering within the group.

This removes the flag issue, with the sligth caveat that IOSQE_IO_LINK
has a different meaning inside the group. Maybe you'd need a GROUP_END
op as well, so you could potentially terminate the group. Or maybe you'd
just rely on the usual semantics, which is "first one that doesn't have
IOSQE_IO_LINK sets marks the end of the group", which is how linked
chains work right now too.

The whole grouping wouldn't change at all, it's just a different way of
marking what constitutes a group that doesn't run afoul of the whole
flag limitation thing.

Just an idea!

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31  2:53                     ` Ming Lei
  2024-10-31 13:35                       ` Jens Axboe
@ 2024-10-31 13:42                       ` Pavel Begunkov
  1 sibling, 0 replies; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-31 13:42 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/31/24 02:53, Ming Lei wrote:
> On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote:
>> On 10/29/24 10:11 PM, Ming Lei wrote:
>>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
>>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
>>>
>>> ...
>>>
>>>>> You could avoid the OP dependency with just a flag, if you really wanted
>>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
>>>>
>>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
>>>> syscall makes application too complicated, and IO latency is increased.
>>>>
>>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's
>>>>> a bit complicated in how deep it needs to go in the code. This one
>>>>> stands alone, so I'd strongly encourage we pursue this a bit further and
>>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it
>>>>> seems pretty promising and it's soooo much simpler.
>>>>
>>>> If buffer register and lookup are always done in ->prep(), OP dependency
>>>> may be avoided.
>>>
>>> Even all buffer register and lookup are done in ->prep(), OP dependency
>>> still can't be avoided completely, such as:
>>>
>>> 1) two local buffers for sending to two sockets
>>>
>>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]
>>>
>>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]
>>>
>>> group 1 and group 2 needs to be linked, but inside each group, the two
>>> sends may be submitted in parallel.
>>
>> That is where groups of course work, in that you can submit 2 groups and
>> have each member inside each group run independently. But I do think we
>> need to decouple the local buffer and group concepts entirely. For the
>> first step, getting local buffers working with zero copy would be ideal,
>> and then just live with the fact that group 1 needs to be submitted
>> first and group 2 once the first ones are done.
> 
> IMHO, it is one _kernel_ zero copy(_performance_) feature, which often imply:
> 
> - better performance expectation
> - no big change on existed application for using this feature

Yes, the feature doesn't make sense without appropriate performance
wins, but I outright disagree with "there should be no big uapi
changes to use it". It might be nice if the user doesn't have to
change anything, but I find it of lower priority than performance,
clarity of the overall design and so on.

> Application developer is less interested in sort of crippled or immature
> feature, especially need big change on existed code logic(then two code paths
> need to maintain), with potential performance regression.

Then we just need avoid creating a "crippled" feature, I believe
everyone is on the same page here. As for maturity, features don't
get there at the same pace, extra layers of complexity definitely
do make getting into shape much slower. You can argue you like how
the uapi turned to be, though I believe there are still rough edges
if we consider it a generic feature, but the kernel side of things is
fairly complicated.

> With sqe group and REQ_F_GROUP_KBUF, application just needs lines of
> code change for using the feature, and it is pretty easy to evaluate
> the feature since no any extra logic change & no extra syscall/wait
> introduced. The whole patchset has been mature enough, unfortunately
> blocked without obvious reasons.
> 
>>
>> Once local buffers are done, we can look at doing the sqe grouping in a
>> nice way. I do think it's a potentially powerful concept, but we're
>> going to make a lot more progress on this issue if we carefully separate
>> dependencies and get each of them done separately.
> 
> One fundamental difference between local buffer and REQ_F_GROUP_KBUF is
> 
> - local buffer has to be provided and used in ->prep()
> - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()

I'd need to take a look at that local buffer patch to say, but likely
there is a way to shift all of it to ->issue(), which would be more
aligned with fixed file resolution and how links use it.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 13:25               ` Pavel Begunkov
@ 2024-10-31 14:29                 ` Jens Axboe
  2024-10-31 15:25                   ` Pavel Begunkov
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 14:29 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei
  Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/31/24 7:25 AM, Pavel Begunkov wrote:
> On 10/30/24 02:43, Jens Axboe wrote:
>> On 10/29/24 8:03 PM, Ming Lei wrote:
>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
> ...
>>>> +    node->buf = imu;
>>>> +    node->kbuf_fn = kbuf_fn;
>>>> +    return node;
>>>
>>> Also this function needs to register the buffer to table with one
>>> pre-defined buf index, then the following request can use it by
>>> the way of io_prep_rw_fixed().
>>
>> It should not register it with the table, the whole point is to keep
>> this node only per-submission discoverable. If you're grabbing random
>> request pages, then it very much is a bit finicky 
> 
> Registering it into the table has enough of design and flexibility
> merits: error handling, allowing any type of dependencies of requests
> by handling it in the user space, etc.

Right, but it has to be a special table. See my lengthier reply to Ming.
The initial POC did install it into a table, it's just a one-slot table,
io_submit_state. I think the right approach is to have an actual struct
io_rsrc_data local_table in the ctx, with refs put at the end of submit.
Same kind of concept, just allows for more entries (potentially), with
the same requirement that nodes get put when submit ends. IOW, requests
need to find it within the same submit.

Obviously you would not NEED to do that, but if the use case is grabbing
bvecs out of a request, then it very much should not be discoverable
past the initial assignments within that submit scope.

>> and needs to be of
>> limited scope.
> 
> And I don't think we can force it, neither with limiting exposure to
> submission only nor with the Ming's group based approach. The user can
> always queue a request that will never complete and/or by using
> DEFER_TASKRUN and just not letting it run. In this sense it might be
> dangerous to block requests of an average system shared block device,
> but if it's fine with ublk it sounds like it should be fine for any of
> the aforementioned approaches.

As long as the resource remains valid until the last put of the node,
then it should be OK. Yes the application can mess things up in terms of
latency if it uses one of these bufs for eg a read on a pipe that never
gets any data, but the data will remain valid regardless. And that's
very much a "doctor it hurts when I..." case, it should not cause any
safety issues. It'll just prevent progress for the other requests that
are using that buffer, if they need the final put to happen before
making progress.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 13:35                       ` Jens Axboe
@ 2024-10-31 15:07                         ` Jens Axboe
  2024-11-01  1:39                         ` Ming Lei
  1 sibling, 0 replies; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 15:07 UTC (permalink / raw)
  To: Ming Lei
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash

Another option is that we fully stick with the per-group buffer concept,
which could also work just fine with io_rsrc_node. If we stick with the
OP_GROUP_START thing, then that op could setup group_buf thing that is
local to that group. This is where an instantiated buffer would appear
too, keeping it strictly local to that group. That avoids needing any
kind of ring state for this, and the group_buf would be propagated from
the group leader to the members. The group_buf lives until all members
of the group are dead, at which point it's released. I forget if your
grouping implementation mandated the same scheme I originally had, where
the group leader completes last? If it does, then it's a natural thing
to have the group_buf live for the duration of the group leader, and it
can just be normal per-io_kiocb data at that point, nothing special
needed there.

As with the previous scheme, each request using one of these
IORING_RSRC_KBUFFER nodes just assigns it like it would any other fixed
resource node, and the normal completion path puts it.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 14:29                 ` Jens Axboe
@ 2024-10-31 15:25                   ` Pavel Begunkov
  2024-10-31 15:42                     ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-31 15:25 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/31/24 14:29, Jens Axboe wrote:
> On 10/31/24 7:25 AM, Pavel Begunkov wrote:
>> On 10/30/24 02:43, Jens Axboe wrote:
>>> On 10/29/24 8:03 PM, Ming Lei wrote:
>>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
>> ...
>>>>> +    node->buf = imu;
>>>>> +    node->kbuf_fn = kbuf_fn;
>>>>> +    return node;
>>>>
>>>> Also this function needs to register the buffer to table with one
>>>> pre-defined buf index, then the following request can use it by
>>>> the way of io_prep_rw_fixed().
>>>
>>> It should not register it with the table, the whole point is to keep
>>> this node only per-submission discoverable. If you're grabbing random
>>> request pages, then it very much is a bit finicky
>>
>> Registering it into the table has enough of design and flexibility
>> merits: error handling, allowing any type of dependencies of requests
>> by handling it in the user space, etc.
> 
> Right, but it has to be a special table. See my lengthier reply to Ming.

Mind pointing the specific part? I read through the thread and didn't
see why it _has_ to be a special table.
And by "special" I assume you mean the property of it being cleaned up
/ flushed by the end of submission / syscall, right?

> The initial POC did install it into a table, it's just a one-slot table,

By "table" I actually mean anything that survives beyond the current
syscall / submission and potentially can be used by requests submitted
with another syscall.

> io_submit_state. I think the right approach is to have an actual struct
> io_rsrc_data local_table in the ctx, with refs put at the end of submit.
> Same kind of concept, just allows for more entries (potentially), with
> the same requirement that nodes get put when submit ends. IOW, requests
> need to find it within the same submit.
> 
> Obviously you would not NEED to do that, but if the use case is grabbing
> bvecs out of a request, then it very much should not be discoverable
> past the initial assignments within that submit scope.
> 
>>> and needs to be of
>>> limited scope.
>>
>> And I don't think we can force it, neither with limiting exposure to
>> submission only nor with the Ming's group based approach. The user can
>> always queue a request that will never complete and/or by using
>> DEFER_TASKRUN and just not letting it run. In this sense it might be
>> dangerous to block requests of an average system shared block device,
>> but if it's fine with ublk it sounds like it should be fine for any of
>> the aforementioned approaches.
> 
> As long as the resource remains valid until the last put of the node,
> then it should be OK. Yes the application can mess things up in terms of

It should be fine in terms of buffers staying alive. The "dangerous"
part I mentioned is about abuse of a shared resource, e.g. one
container locking up all requests of a bdev so that another container
can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk,
I don't think we need to concern about that much since io_uring is
on the other side from normal user space.

> latency if it uses one of these bufs for eg a read on a pipe that never
> gets any data, but the data will remain valid regardless. And that's
> very much a "doctor it hurts when I..." case, it should not cause any

Right, I care about malicious abuse when it can affect other users,
break isolation / fairness, etc., I'm saying that there is no
difference between all the approaches in this aspect, and if so
it should also be perfectly ok from the kernel's perspective to allow
to leave a buffer in the table long term. If the user wants to screw
itself and doesn't remove the buffer that's the user's choice to
shoot itself in the leg.

 From this angle, that I look at the auto removal you add not as some
security / etc. concern, but just as a QoL / performance feature so
that the user doesn't need to remove the buffer by hand.

FWIW, instead of having another table, we can just mark a sub range
of the main buffer table to be cleared every time after submission,
just like we separate auto slot allocation with ranges.

> safety issues. It'll just prevent progress for the other requests that
> are using that buffer, if they need the final put to happen before
> making progress.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 15:25                   ` Pavel Begunkov
@ 2024-10-31 15:42                     ` Jens Axboe
  2024-10-31 16:29                       ` Pavel Begunkov
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 15:42 UTC (permalink / raw)
  To: Pavel Begunkov, Ming Lei
  Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/31/24 9:25 AM, Pavel Begunkov wrote:
> On 10/31/24 14:29, Jens Axboe wrote:
>> On 10/31/24 7:25 AM, Pavel Begunkov wrote:
>>> On 10/30/24 02:43, Jens Axboe wrote:
>>>> On 10/29/24 8:03 PM, Ming Lei wrote:
>>>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>>>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
>>> ...
>>>>>> +    node->buf = imu;
>>>>>> +    node->kbuf_fn = kbuf_fn;
>>>>>> +    return node;
>>>>>
>>>>> Also this function needs to register the buffer to table with one
>>>>> pre-defined buf index, then the following request can use it by
>>>>> the way of io_prep_rw_fixed().
>>>>
>>>> It should not register it with the table, the whole point is to keep
>>>> this node only per-submission discoverable. If you're grabbing random
>>>> request pages, then it very much is a bit finicky
>>>
>>> Registering it into the table has enough of design and flexibility
>>> merits: error handling, allowing any type of dependencies of requests
>>> by handling it in the user space, etc.
>>
>> Right, but it has to be a special table. See my lengthier reply to Ming.
> 
> Mind pointing the specific part? I read through the thread and didn't
> see why it _has_ to be a special table.
> And by "special" I assume you mean the property of it being cleaned up
> / flushed by the end of submission / syscall, right?

Right, that's all I mean, special in the sense that it isn't persistent.
Nothing special about it otherwise. Maybe "separate table from
buf_table" is a more accurate way to describe it.

>> The initial POC did install it into a table, it's just a one-slot table,
> 
> By "table" I actually mean anything that survives beyond the current
> syscall / submission and potentially can be used by requests submitted
> with another syscall.

Obviously there's nothing special about the above mentioned table in the
sense that it's a normal table, it just doesn't survive beyond the
current submission. The reason why I like that approach is that it
doesn't leave potentially iffy data in a table beyond that submission.
If we don't own this data, it's merely borrowed from someone else, then
special case must be taken around it.

>> io_submit_state. I think the right approach is to have an actual struct
>> io_rsrc_data local_table in the ctx, with refs put at the end of submit.
>> Same kind of concept, just allows for more entries (potentially), with
>> the same requirement that nodes get put when submit ends. IOW, requests
>> need to find it within the same submit.
>>
>> Obviously you would not NEED to do that, but if the use case is grabbing
>> bvecs out of a request, then it very much should not be discoverable
>> past the initial assignments within that submit scope.
>>
>>>> and needs to be of
>>>> limited scope.
>>>
>>> And I don't think we can force it, neither with limiting exposure to
>>> submission only nor with the Ming's group based approach. The user can
>>> always queue a request that will never complete and/or by using
>>> DEFER_TASKRUN and just not letting it run. In this sense it might be
>>> dangerous to block requests of an average system shared block device,
>>> but if it's fine with ublk it sounds like it should be fine for any of
>>> the aforementioned approaches.
>>
>> As long as the resource remains valid until the last put of the node,
>> then it should be OK. Yes the application can mess things up in terms of
> 
> It should be fine in terms of buffers staying alive. The "dangerous"
> part I mentioned is about abuse of a shared resource, e.g. one
> container locking up all requests of a bdev so that another container
> can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk,
> I don't think we need to concern about that much since io_uring is
> on the other side from normal user space.

If you leave it in the table, then you can no longer rely on the final
put being the callback driver. Maybe this is fine, but then it needs
some other mechanism for this.

>> latency if it uses one of these bufs for eg a read on a pipe that never
>> gets any data, but the data will remain valid regardless. And that's
>> very much a "doctor it hurts when I..." case, it should not cause any
> 
> Right, I care about malicious abuse when it can affect other users,
> break isolation / fairness, etc., I'm saying that there is no
> difference between all the approaches in this aspect, and if so
> it should also be perfectly ok from the kernel's perspective to allow
> to leave a buffer in the table long term. If the user wants to screw
> itself and doesn't remove the buffer that's the user's choice to
> shoot itself in the leg.
> 
> From this angle, that I look at the auto removal you add not as some
> security / etc. concern, but just as a QoL / performance feature so
> that the user doesn't need to remove the buffer by hand.
> 
> FWIW, instead of having another table, we can just mark a sub range
> of the main buffer table to be cleared every time after submission,
> just like we separate auto slot allocation with ranges.

I did consider that idea too, mainly from the perspective of then not
needing any kind of special OP or OP support to grab one of these
buffers, it'd just use the normal table but in a separate range. Doesn't
feel super clean, and does require some odd setup. Realistically,
applications probabably use one or the other and not combined, so
perhaps it's fine and the range is just the normal range. If they do mix
the two, then yeah they would want to use separate ranges for them.

Honestly don't care too deeply about that implementation detail, I care
more about having these buffers be io_rsrc_node and using the general
infrastructure for them. If we have to add IORING_RSRC_KBUFFER for them
and a callback + data field to io_rsrc_node, that still a much better
approach than having some other intermediate type which basically does
the same thing, except it needs new fields to store it and new helpers
to alloc/put it.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 15:42                     ` Jens Axboe
@ 2024-10-31 16:29                       ` Pavel Begunkov
  0 siblings, 0 replies; 41+ messages in thread
From: Pavel Begunkov @ 2024-10-31 16:29 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash

On 10/31/24 15:42, Jens Axboe wrote:
> On 10/31/24 9:25 AM, Pavel Begunkov wrote:
>> On 10/31/24 14:29, Jens Axboe wrote:
>>> On 10/31/24 7:25 AM, Pavel Begunkov wrote:
>>>> On 10/30/24 02:43, Jens Axboe wrote:
>>>>> On 10/29/24 8:03 PM, Ming Lei wrote:
>>>>>> On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
>>>>>>> On 10/29/24 2:06 PM, Jens Axboe wrote:
>>>>>>>> On 10/29/24 1:18 PM, Jens Axboe wrote:
>>>> ...
>>>>>>> +    node->buf = imu;
>>>>>>> +    node->kbuf_fn = kbuf_fn;
>>>>>>> +    return node;
>>>>>>
>>>>>> Also this function needs to register the buffer to table with one
>>>>>> pre-defined buf index, then the following request can use it by
>>>>>> the way of io_prep_rw_fixed().
>>>>>
>>>>> It should not register it with the table, the whole point is to keep
>>>>> this node only per-submission discoverable. If you're grabbing random
>>>>> request pages, then it very much is a bit finicky
>>>>
>>>> Registering it into the table has enough of design and flexibility
>>>> merits: error handling, allowing any type of dependencies of requests
>>>> by handling it in the user space, etc.
>>>
>>> Right, but it has to be a special table. See my lengthier reply to Ming.
>>
>> Mind pointing the specific part? I read through the thread and didn't
>> see why it _has_ to be a special table.
>> And by "special" I assume you mean the property of it being cleaned up
>> / flushed by the end of submission / syscall, right?
> 
> Right, that's all I mean, special in the sense that it isn't persistent.
> Nothing special about it otherwise. Maybe "separate table from
> buf_table" is a more accurate way to describe it.
> 
>>> The initial POC did install it into a table, it's just a one-slot table,
>>
>> By "table" I actually mean anything that survives beyond the current
>> syscall / submission and potentially can be used by requests submitted
>> with another syscall.
> 
> Obviously there's nothing special about the above mentioned table in the
> sense that it's a normal table, it just doesn't survive beyond the
> current submission. The reason why I like that approach is that it
> doesn't leave potentially iffy data in a table beyond that submission.
> If we don't own this data, it's merely borrowed from someone else, then
> special case must be taken around it.

So you're not trying to prevent some potential malicious use
but rather making it a bit nicer for buggy users. I don't think I
care much about that aspect and would sacrifice the property if it
gives us anything good anywhere else.

>>> io_submit_state. I think the right approach is to have an actual struct
>>> io_rsrc_data local_table in the ctx, with refs put at the end of submit.
>>> Same kind of concept, just allows for more entries (potentially), with
>>> the same requirement that nodes get put when submit ends. IOW, requests
>>> need to find it within the same submit.
>>>
>>> Obviously you would not NEED to do that, but if the use case is grabbing
>>> bvecs out of a request, then it very much should not be discoverable
>>> past the initial assignments within that submit scope.
>>>
>>>>> and needs to be of
>>>>> limited scope.
>>>>
>>>> And I don't think we can force it, neither with limiting exposure to
>>>> submission only nor with the Ming's group based approach. The user can
>>>> always queue a request that will never complete and/or by using
>>>> DEFER_TASKRUN and just not letting it run. In this sense it might be
>>>> dangerous to block requests of an average system shared block device,
>>>> but if it's fine with ublk it sounds like it should be fine for any of
>>>> the aforementioned approaches.
>>>
>>> As long as the resource remains valid until the last put of the node,
>>> then it should be OK. Yes the application can mess things up in terms of
>>
>> It should be fine in terms of buffers staying alive. The "dangerous"
>> part I mentioned is about abuse of a shared resource, e.g. one
>> container locking up all requests of a bdev so that another container
>> can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk,
>> I don't think we need to concern about that much since io_uring is
>> on the other side from normal user space.
> 
> If you leave it in the table, then you can no longer rely on the final
> put being the callback driver. Maybe this is fine, but then it needs
> some other mechanism for this.

Not sure I follow. The ->kbuf_fn is set by the driver, right?
It'll always be called once the node is destroyed, in this sense
the final destination is always the driver that leased the buffer.

Or do you mean the final rsrc_node put? Not sure how that works
considering requests can complete inside the submission as well
as outlive it with the node reference.

>>> latency if it uses one of these bufs for eg a read on a pipe that never
>>> gets any data, but the data will remain valid regardless. And that's
>>> very much a "doctor it hurts when I..." case, it should not cause any
>>
>> Right, I care about malicious abuse when it can affect other users,
>> break isolation / fairness, etc., I'm saying that there is no
>> difference between all the approaches in this aspect, and if so
>> it should also be perfectly ok from the kernel's perspective to allow
>> to leave a buffer in the table long term. If the user wants to screw
>> itself and doesn't remove the buffer that's the user's choice to
>> shoot itself in the leg.
>>
>>  From this angle, that I look at the auto removal you add not as some
>> security / etc. concern, but just as a QoL / performance feature so
>> that the user doesn't need to remove the buffer by hand.
>>
>> FWIW, instead of having another table, we can just mark a sub range
>> of the main buffer table to be cleared every time after submission,
>> just like we separate auto slot allocation with ranges.
> 
> I did consider that idea too, mainly from the perspective of then not
> needing any kind of special OP or OP support to grab one of these
> buffers, it'd just use the normal table but in a separate range. Doesn't
> feel super clean, and does require some odd setup. Realistically,

I feel like we don't even need to differentiate it from normal
reg buffers in how it's used by other opcodes, cleaning the table
is just a feature, I'd even argues an optional one.

> applications probabably use one or the other and not combined, so
> perhaps it's fine and the range is just the normal range. If they do mix
> the two, then yeah they would want to use separate ranges for them.
> 
> Honestly don't care too deeply about that implementation detail, I care
> more about having these buffers be io_rsrc_node and using the general
> infrastructure for them. If we have to add IORING_RSRC_KBUFFER for them
> and a callback + data field to io_rsrc_node, that still a much better

Right, and I don't think it's a problem at all, for most of the
users destroying a resource is cold path anyway, apart from this
zc proposal nobody registers file/buffer just for one request.

> approach than having some other intermediate type which basically does
> the same thing, except it needs new fields to store it and new helpers
> to alloc/put it.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-25 12:22 ` [PATCH V8 4/7] io_uring: support SQE group Ming Lei
  2024-10-29  0:12   ` Jens Axboe
@ 2024-10-31 21:24   ` Jens Axboe
  2024-10-31 21:39     ` Jens Axboe
  1 sibling, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 21:24 UTC (permalink / raw)
  To: Ming Lei, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Kevin Wolf

On 10/25/24 6:22 AM, Ming Lei wrote:
> SQE group is defined as one chain of SQEs starting with the first SQE that
> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> doesn't have it set, and it is similar with chain of linked SQEs.
> 
> Not like linked SQEs, each sqe is issued after the previous one is
> completed. All SQEs in one group can be submitted in parallel. To simplify
> the implementation from beginning, all members are queued after the leader
> is completed, however, this way may be changed and leader and members may
> be issued concurrently in future.
> 
> The 1st SQE is group leader, and the other SQEs are group member. The whole
> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> the two flags can't be set for group members. For the sake of
> simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
> 
> When the group is in one link chain, this group isn't submitted until the
> previous SQE or group is completed. And the following SQE or group can't
> be started if this group isn't completed. Failure from any group member will
> fail the group leader, then the link chain can be terminated.
> 
> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> group leader only, we respect IO_DRAIN by always completing group leader as
> the last one in the group. Meantime it is natural to post leader's CQE
> as the last one from application viewpoint.
> 
> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> support N:M dependency, such as:
> 
> - group A is chained with group B together
> - group A has N SQEs
> - group B has M SQEs
> 
> then M SQEs in group B depend on N SQEs in group A.
> 
> N:M dependency can support some interesting use cases in efficient way:
> 
> 1) read from multiple files, then write the read data into single file
> 
> 2) read from single file, and write the read data into multiple files
> 
> 3) write same data into multiple files, and read data from multiple files and
> compare if correct data is written
> 
> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> extend sqe->flags with io_uring context flag, such as use __pad3 for
> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.

Did you run the liburing tests with this? I rebased it on top of the
flags2 patch I just sent out, and it fails defer-taskrun and crashes
link_drain. Don't know if others fail too. I'll try the original one
too, but nothing between those two should make a difference. It passes
just fine with just the flags2 patch, so I'm a bit suspicious this patch
is the issue.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-31 21:24   ` Jens Axboe
@ 2024-10-31 21:39     ` Jens Axboe
  2024-11-01  0:00       ` Jens Axboe
  0 siblings, 1 reply; 41+ messages in thread
From: Jens Axboe @ 2024-10-31 21:39 UTC (permalink / raw)
  To: Ming Lei, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Kevin Wolf

[-- Attachment #1: Type: text/plain, Size: 3295 bytes --]

On 10/31/24 3:24 PM, Jens Axboe wrote:
> On 10/25/24 6:22 AM, Ming Lei wrote:
>> SQE group is defined as one chain of SQEs starting with the first SQE that
>> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
>> doesn't have it set, and it is similar with chain of linked SQEs.
>>
>> Not like linked SQEs, each sqe is issued after the previous one is
>> completed. All SQEs in one group can be submitted in parallel. To simplify
>> the implementation from beginning, all members are queued after the leader
>> is completed, however, this way may be changed and leader and members may
>> be issued concurrently in future.
>>
>> The 1st SQE is group leader, and the other SQEs are group member. The whole
>> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
>> the two flags can't be set for group members. For the sake of
>> simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
>>
>> When the group is in one link chain, this group isn't submitted until the
>> previous SQE or group is completed. And the following SQE or group can't
>> be started if this group isn't completed. Failure from any group member will
>> fail the group leader, then the link chain can be terminated.
>>
>> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
>> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
>> group leader only, we respect IO_DRAIN by always completing group leader as
>> the last one in the group. Meantime it is natural to post leader's CQE
>> as the last one from application viewpoint.
>>
>> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
>> support N:M dependency, such as:
>>
>> - group A is chained with group B together
>> - group A has N SQEs
>> - group B has M SQEs
>>
>> then M SQEs in group B depend on N SQEs in group A.
>>
>> N:M dependency can support some interesting use cases in efficient way:
>>
>> 1) read from multiple files, then write the read data into single file
>>
>> 2) read from single file, and write the read data into multiple files
>>
>> 3) write same data into multiple files, and read data from multiple files and
>> compare if correct data is written
>>
>> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
>> extend sqe->flags with io_uring context flag, such as use __pad3 for
>> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
> 
> Did you run the liburing tests with this? I rebased it on top of the
> flags2 patch I just sent out, and it fails defer-taskrun and crashes
> link_drain. Don't know if others fail too. I'll try the original one
> too, but nothing between those two should make a difference. It passes
> just fine with just the flags2 patch, so I'm a bit suspicious this patch
> is the issue.

False alarm, it was my messup adding the group flag. Works just fine.
I'm attaching the version I tested, on top of that flags2 patch.

Since we're on the topic - my original bundle patch used a bundle OP to
define an sqe grouping, which didn't need to use an sqe flag. Any
particular reason why you went with a flag for this one?

I do think it comes out nicer with a flag for certain things, like being
able to link groups. Maybe that's the primary reason.

-- 
Jens Axboe

[-- Attachment #2: 0001-io_uring-support-SQE-group.patch --]
[-- Type: text/x-patch, Size: 19407 bytes --]

From 2b031ef42bc929fc5c35ddf70de3a4a61488fc50 Mon Sep 17 00:00:00 2001
From: Ming Lei <[email protected]>
Date: Fri, 25 Oct 2024 20:22:41 +0800
Subject: [PATCH] io_uring: support SQE group

SQE group is defined as one chain of SQEs starting with the first SQE that
has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
doesn't have it set, and it is similar with chain of linked SQEs.

Not like linked SQEs, each sqe is issued after the previous one is
completed. All SQEs in one group can be submitted in parallel. To simplify
the implementation from beginning, all members are queued after the leader
is completed, however, this way may be changed and leader and members may
be issued concurrently in future.

The 1st SQE is group leader, and the other SQEs are group member. The whole
group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
the two flags can't be set for group members. For the sake of
simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.

When the group is in one link chain, this group isn't submitted until the
previous SQE or group is completed. And the following SQE or group can't
be started if this group isn't completed. Failure from any group member will
fail the group leader, then the link chain can be terminated.

When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
group leader only, we respect IO_DRAIN by always completing group leader as
the last one in the group. Meantime it is natural to post leader's CQE
as the last one from application viewpoint.

Working together with IOSQE_IO_LINK, SQE group provides flexible way to
support N:M dependency, such as:

- group A is chained with group B together
- group A has N SQEs
- group B has M SQEs

then M SQEs in group B depend on N SQEs in group A.

N:M dependency can support some interesting use cases in efficient way:

1) read from multiple files, then write the read data into single file

2) read from single file, and write the read data into multiple files

3) write same data into multiple files, and read data from multiple files and
compare if correct data is written

Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
extend sqe->flags with io_uring context flag, such as use __pad3 for
non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.

Suggested-by: Kevin Wolf <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
---
 include/linux/io_uring_types.h |  17 ++
 include/uapi/linux/io_uring.h  |   3 +
 io_uring/io_uring.c            | 300 +++++++++++++++++++++++++++++++--
 io_uring/io_uring.h            |   6 +
 io_uring/timeout.c             |   6 +
 5 files changed, 317 insertions(+), 15 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 8a45bf6a68ca..fe5fcb6bae54 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -201,6 +201,8 @@ struct io_submit_state {
 	/* batch completion logic */
 	struct io_wq_work_list	compl_reqs;
 	struct io_submit_link	link;
+	/* points to current group */
+	struct io_submit_link	group;
 
 	bool			plug_started;
 	bool			need_plug;
@@ -445,6 +447,7 @@ enum {
 
 	/* 16 bits of sqe->flags2 */
 	REQ_F_PERSONALITY_BIT	= IOSQE2_PERSONALITY_BIT + 8,
+	REQ_F_GROUP_BIT		= IOSQE2_GROUP_BIT + 8,
 
 	/* first byte taken by sqe->flags, next 2 by sqe->flags2 */
 	REQ_F_FAIL_BIT		= 24,
@@ -474,6 +477,7 @@ enum {
 	REQ_F_BL_EMPTY_BIT,
 	REQ_F_BL_NO_RECYCLE_BIT,
 	REQ_F_BUFFERS_COMMIT_BIT,
+	REQ_F_GROUP_LEADER_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
@@ -501,6 +505,7 @@ enum {
 	REQ_F_FLAGS2		= IO_REQ_FLAG(REQ_F_FLAGS2_BIT),
 
 	REQ_F_PERSONALITY	= IO_REQ_FLAG(REQ_F_PERSONALITY_BIT),
+	REQ_F_GROUP		= IO_REQ_FLAG(REQ_F_GROUP_BIT),
 
 	/* fail rest of links */
 	REQ_F_FAIL		= IO_REQ_FLAG(REQ_F_FAIL_BIT),
@@ -554,6 +559,8 @@ enum {
 	REQ_F_BL_NO_RECYCLE	= IO_REQ_FLAG(REQ_F_BL_NO_RECYCLE_BIT),
 	/* buffer ring head needs incrementing on put */
 	REQ_F_BUFFERS_COMMIT	= IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
+	/* sqe group lead */
+	REQ_F_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_GROUP_LEADER_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -656,6 +663,8 @@ struct io_kiocb {
 	void				*async_data;
 	/* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
 	atomic_t			poll_refs;
+	/* reference for group leader request */
+	int				grp_refs;
 	struct io_kiocb			*link;
 	/* custom credentials, valid IFF REQ_F_CREDS is set */
 	const struct cred		*creds;
@@ -665,6 +674,14 @@ struct io_kiocb {
 		u64			extra1;
 		u64			extra2;
 	} big_cqe;
+
+	union {
+		/* links all group members for leader */
+		struct io_kiocb			*grp_link;
+
+		/* points to group leader for member */
+		struct io_kiocb			*grp_leader;
+	};
 };
 
 struct io_overflow_cqe {
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index c7c3ba69ffdd..2650c355f413 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -139,6 +139,7 @@ enum io_uring_sqe_flags_bit {
 
 enum io_uring_sqe_flags2_bit {
 	IOSQE2_PERSONALITY_BIT,
+	IOSQE2_GROUP_BIT,
 };
 
 /*
@@ -166,6 +167,7 @@ enum io_uring_sqe_flags2_bit {
  */
  /* if set, sqe->personality2 contains personality */
 #define IOSQE2_PERSONALITY	(1U << IOSQE2_PERSONALITY_BIT)
+#define IOSQE2_GROUP		(1U << IOSQE2_GROUP_BIT)
 
 /*
  * io_uring_setup() flags
@@ -581,6 +583,7 @@ struct io_uring_params {
 #define IORING_FEAT_REG_REG_RING	(1U << 13)
 #define IORING_FEAT_RECVSEND_BUNDLE	(1U << 14)
 #define IORING_FEAT_MIN_TIMEOUT		(1U << 15)
+#define IORING_FEAT_SQE_GROUP		(1U << 16)
 
 /*
  * io_uring_register(2) opcodes and arguments
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c2bbadd5640d..00cbee0049a8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -117,7 +117,7 @@
 				REQ_F_ASYNC_DATA)
 
 #define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
-				 IO_REQ_CLEAN_FLAGS)
+				 REQ_F_GROUP | IO_REQ_CLEAN_FLAGS)
 
 #define IO_TCTX_REFS_CACHE_NR	(1U << 10)
 
@@ -906,6 +906,123 @@ static __always_inline void io_req_commit_cqe(struct io_ring_ctx *ctx,
 	}
 }
 
+/* Can only be called after this request is issued */
+static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
+{
+	if (!(req->flags & REQ_F_GROUP))
+		return NULL;
+	if (req_is_group_leader(req))
+		return req;
+	return req->grp_leader;
+}
+
+void io_fail_group_members(struct io_kiocb *req)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		if (!(member->flags & REQ_F_FAIL)) {
+			req_set_fail(member);
+			io_req_set_res(member, -ECANCELED, 0);
+		}
+		member = next;
+	}
+}
+
+static void io_queue_group_members(struct io_kiocb *req)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	req->grp_link = NULL;
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		member->grp_leader = req;
+		if (unlikely(member->flags & REQ_F_FAIL))
+			io_req_task_queue_fail(member, member->cqe.res);
+		else if (unlikely(req->flags & REQ_F_FAIL))
+			io_req_task_queue_fail(member, -ECANCELED);
+		else
+			io_req_task_queue(member);
+		member = next;
+	}
+}
+
+/* called only after the request is completed */
+static bool req_is_last_group_member(struct io_kiocb *req)
+{
+	return req->grp_leader != NULL;
+}
+
+static void io_complete_group_req(struct io_kiocb *req)
+{
+	struct io_kiocb *lead;
+
+	if (req_is_group_leader(req)) {
+		req->grp_refs--;
+		return;
+	}
+
+	lead = get_group_leader(req);
+
+	/* member CQE needs to be posted first */
+	if (!(req->flags & REQ_F_CQE_SKIP))
+		io_req_commit_cqe(req->ctx, req);
+
+	/* Set leader as failed in case of any member failed */
+	if (unlikely((req->flags & REQ_F_FAIL)))
+		req_set_fail(lead);
+
+	WARN_ON_ONCE(lead->grp_refs <= 0);
+	if (!--lead->grp_refs) {
+		/*
+		 * We are the last member, and ->grp_leader isn't cleared,
+		 * so our leader can be found & freed with the last member
+		 */
+		if (!(lead->flags & REQ_F_CQE_SKIP))
+			io_req_commit_cqe(lead->ctx, lead);
+	} else {
+		/* we are done with the group now */
+		req->grp_leader = NULL;
+	}
+}
+
+enum group_mem {
+	GROUP_LEADER,
+	GROUP_LAST_MEMBER,
+	GROUP_OTHER_MEMBER,
+};
+
+static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
+					     struct io_kiocb **leader)
+{
+	/*
+	 * Group completion is done, so clear the flag for avoiding double
+	 * handling in case of io-wq
+	 */
+	req->flags &= ~REQ_F_GROUP;
+
+	if (req_is_group_leader(req)) {
+		/* Queue members now */
+		if (req->grp_link)
+			io_queue_group_members(req);
+		return GROUP_LEADER;
+	}
+	if (!req_is_last_group_member(req))
+		return GROUP_OTHER_MEMBER;
+
+	/*
+	 * Prepare for freeing leader which can only be found from the last
+	 * member
+	 */
+	*leader = req->grp_leader;
+	(*leader)->flags &= ~REQ_F_GROUP_LEADER;
+	req->grp_leader = NULL;
+	return GROUP_LAST_MEMBER;
+}
+
 static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -921,7 +1038,8 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 	 * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
 	 * the submitter task context, IOPOLL protects with uring_lock.
 	 */
-	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
+	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL) ||
+	    (req->flags & REQ_F_GROUP)) {
 		req->io_task_work.func = io_req_task_complete;
 		io_req_task_work_add(req);
 		return;
@@ -1398,6 +1516,25 @@ void io_queue_next(struct io_kiocb *req)
 		io_req_task_queue(nxt);
 }
 
+static bool io_group_complete(struct io_kiocb *req)
+{
+	struct io_kiocb *leader = NULL;
+	enum group_mem type = io_prep_free_group_req(req, &leader);
+
+	if (type == GROUP_LEADER) {
+		return true;
+	} else if (type == GROUP_LAST_MEMBER) {
+		/*
+		 * Link leader to current request's next, this way works
+		 * because the iterator always check the next node only.
+		 *
+		 * Be careful when you change the iterator in future
+		 */
+		wq_stack_add_head(&leader->comp_list, &req->comp_list);
+	}
+	return false;
+}
+
 static void io_free_batch_list(struct io_ring_ctx *ctx,
 			       struct io_wq_work_node *node)
 	__must_hold(&ctx->uring_lock)
@@ -1407,6 +1544,12 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
 						    comp_list);
 
 		if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
+			if (req->flags & REQ_F_GROUP) {
+				if (io_group_complete(req)) {
+					node = req->comp_list.next;
+					continue;
+				}
+			}
 			if (req->flags & REQ_F_REFCOUNT) {
 				node = req->comp_list.next;
 				if (!req_ref_put_and_test(req))
@@ -1446,8 +1589,16 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
 
-		if (!(req->flags & REQ_F_CQE_SKIP))
-			io_req_commit_cqe(ctx, req);
+		if (unlikely(req->flags & (REQ_F_CQE_SKIP | REQ_F_GROUP))) {
+			if (req->flags & REQ_F_GROUP) {
+				io_complete_group_req(req);
+				continue;
+			}
+
+			if (req->flags & REQ_F_CQE_SKIP)
+				continue;
+		}
+		io_req_commit_cqe(ctx, req);
 	}
 	__io_cq_unlock_post(ctx);
 
@@ -1657,8 +1808,12 @@ static u32 io_get_sequence(struct io_kiocb *req)
 	struct io_kiocb *cur;
 
 	/* need original cached_sq_head, but it was increased for each req */
-	io_for_each_link(cur, req)
-		seq--;
+	io_for_each_link(cur, req) {
+		if (req_is_group_leader(cur))
+			seq -= cur->grp_refs;
+		else
+			seq--;
+	}
 	return seq;
 }
 
@@ -2121,6 +2276,66 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return def->prep(req, sqe);
 }
 
+static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
+				     struct io_kiocb *req)
+{
+	/*
+	 * Group chain is similar with link chain: starts with 1st sqe with
+	 * REQ_F_GROUP, and ends with the 1st sqe without REQ_F_GROUP
+	 */
+	if (group->head) {
+		struct io_kiocb *lead = group->head;
+
+		/*
+		 * Members can't be in link chain, can't be drained, but
+		 * the whole group can be linked or drained by setting
+		 * flags on group leader.
+		 *
+		 * IOSQE_CQE_SKIP_SUCCESS can't be set for member
+		 * for the sake of simplicity
+		 */
+		if (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN |
+				REQ_F_CQE_SKIP))
+			req_fail_link_node(lead, -EINVAL);
+
+		lead->grp_refs += 1;
+		group->last->grp_link = req;
+		group->last = req;
+
+		if (req->flags & REQ_F_GROUP)
+			return NULL;
+
+		req->grp_link = NULL;
+		req->flags |= REQ_F_GROUP;
+		group->head = NULL;
+		return lead;
+	}
+
+	if (WARN_ON_ONCE(!(req->flags & REQ_F_GROUP)))
+		return req;
+	group->head = req;
+	group->last = req;
+	req->grp_refs = 1;
+	req->flags |= REQ_F_GROUP_LEADER;
+	return NULL;
+}
+
+static __cold struct io_kiocb *io_submit_fail_group(
+		struct io_submit_link *link, struct io_kiocb *req)
+{
+	struct io_kiocb *lead = link->head;
+
+	/*
+	 * Instead of failing eagerly, continue assembling the group link
+	 * if applicable and mark the leader with REQ_F_FAIL. The group
+	 * flushing code should find the flag and handle the rest
+	 */
+	if (lead && !(lead->flags & REQ_F_FAIL))
+		req_fail_link_node(lead, -ECANCELED);
+
+	return io_group_sqe(link, req);
+}
+
 static __cold int io_submit_fail_link(struct io_submit_link *link,
 				      struct io_kiocb *req, int ret)
 {
@@ -2159,11 +2374,18 @@ static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_link *group = &ctx->submit_state.group;
 
 	trace_io_uring_req_failed(sqe, req, ret);
 
 	req_fail_link_node(req, ret);
 
+	if (group->head || (req->flags & REQ_F_GROUP)) {
+		req = io_submit_fail_group(group, req);
+		if (!req)
+			return 0;
+	}
+
 	/* cover both linked and non-linked request */
 	return io_submit_fail_link(link, req, ret);
 }
@@ -2207,11 +2429,29 @@ static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
 	return req;
 }
 
+static inline bool io_group_assembling(const struct io_submit_state *state,
+				       const struct io_kiocb *req)
+{
+	if (state->group.head || req->flags & REQ_F_GROUP)
+		return true;
+	return false;
+}
+
+/* Failed request is covered too */
+static inline bool io_link_assembling(const struct io_submit_state *state,
+				      const struct io_kiocb *req)
+{
+	if (state->link.head ||
+	    (req->flags & (IO_REQ_LINK_FLAGS | REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
+		return true;
+	return false;
+}
+
 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			 const struct io_uring_sqe *sqe)
 	__must_hold(&ctx->uring_lock)
 {
-	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_state *state = &ctx->submit_state;
 	int ret;
 
 	ret = io_init_req(ctx, req, sqe);
@@ -2220,11 +2460,20 @@ static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 	trace_io_uring_submit_req(req);
 
-	if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
-				    REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
-		req = io_link_sqe(link, req);
-		if (!req)
-			return 0;
+	if (unlikely(io_link_assembling(state, req) ||
+		     io_group_assembling(state, req))) {
+		if (io_group_assembling(state, req)) {
+			req = io_group_sqe(&state->group, req);
+			if (!req)
+				return 0;
+		}
+
+		/* covers non-linked failed request too */
+		if (io_link_assembling(state, req)) {
+			req = io_link_sqe(&state->link, req);
+			if (!req)
+				return 0;
+		}
 	}
 	io_queue_sqe(req);
 	return 0;
@@ -2237,8 +2486,27 @@ static void io_submit_state_end(struct io_ring_ctx *ctx)
 {
 	struct io_submit_state *state = &ctx->submit_state;
 
-	if (unlikely(state->link.head))
-		io_queue_sqe_fallback(state->link.head);
+	if (unlikely(state->group.head || state->link.head)) {
+		/* the last member must set REQ_F_GROUP */
+		if (state->group.head) {
+			struct io_kiocb *lead = state->group.head;
+			struct io_kiocb *last = state->group.last;
+
+			/* fail group with single leader */
+			if (unlikely(last == lead))
+				req_fail_link_node(lead, -EINVAL);
+
+			last->grp_link = NULL;
+			if (state->link.head)
+				io_link_sqe(&state->link, lead);
+			else
+				io_queue_sqe_fallback(lead);
+		}
+
+		if (unlikely(state->link.head))
+			io_queue_sqe_fallback(state->link.head);
+	}
+
 	/* flush only after queuing links as they can generate completions */
 	io_submit_flush_completions(ctx);
 	if (state->plug_started)
@@ -2256,6 +2524,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 	state->submit_nr = max_ios;
 	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
+	state->group.head = NULL;
 }
 
 static void io_commit_sqring(struct io_ring_ctx *ctx)
@@ -3753,7 +4022,8 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
 			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
 			IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
 			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
-			IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_MIN_TIMEOUT;
+			IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_MIN_TIMEOUT |
+			IORING_FEAT_SQE_GROUP;
 
 	if (copy_to_user(params, p, sizeof(*p))) {
 		ret = -EFAULT;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index e3e6cb14de5d..52d15ac8d209 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -78,6 +78,7 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+void io_fail_group_members(struct io_kiocb *req);
 
 struct file *io_file_get_normal(struct io_kiocb *req, int fd);
 struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
@@ -357,6 +358,11 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
 	lockdep_assert_held(&ctx->uring_lock);
 }
 
+static inline bool req_is_group_leader(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_GROUP_LEADER;
+}
+
 /*
  * Don't complete immediately but use deferred completion infrastructure.
  * Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 9973876d91b0..ed6c74f1a475 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -149,6 +149,8 @@ static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts)
 			res = link->cqe.res;
 		link->link = NULL;
 		io_req_set_res(link, res, 0);
+		if (req_is_group_leader(link))
+			io_fail_group_members(link);
 		io_req_task_complete(link, ts);
 		link = nxt;
 	}
@@ -543,6 +545,10 @@ static int __io_timeout_prep(struct io_kiocb *req,
 	if (is_timeout_link) {
 		struct io_submit_link *link = &req->ctx->submit_state.link;
 
+		/* so far disallow IO group link timeout */
+		if (req->ctx->submit_state.group.head)
+			return -EINVAL;
+
 		if (!link->head)
 			return -EINVAL;
 		if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 4/7] io_uring: support SQE group
  2024-10-31 21:39     ` Jens Axboe
@ 2024-11-01  0:00       ` Jens Axboe
  0 siblings, 0 replies; 41+ messages in thread
From: Jens Axboe @ 2024-11-01  0:00 UTC (permalink / raw)
  To: Ming Lei, io-uring, Pavel Begunkov
  Cc: linux-block, Uday Shankar, Akilesh Kailash, Kevin Wolf

On 10/31/24 3:39 PM, Jens Axboe wrote:
> On 10/31/24 3:24 PM, Jens Axboe wrote:
>> On 10/25/24 6:22 AM, Ming Lei wrote:
>>> SQE group is defined as one chain of SQEs starting with the first SQE that
>>> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
>>> doesn't have it set, and it is similar with chain of linked SQEs.
>>>
>>> Not like linked SQEs, each sqe is issued after the previous one is
>>> completed. All SQEs in one group can be submitted in parallel. To simplify
>>> the implementation from beginning, all members are queued after the leader
>>> is completed, however, this way may be changed and leader and members may
>>> be issued concurrently in future.
>>>
>>> The 1st SQE is group leader, and the other SQEs are group member. The whole
>>> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
>>> the two flags can't be set for group members. For the sake of
>>> simplicity, IORING_OP_LINK_TIMEOUT is disallowed for SQE group now.
>>>
>>> When the group is in one link chain, this group isn't submitted until the
>>> previous SQE or group is completed. And the following SQE or group can't
>>> be started if this group isn't completed. Failure from any group member will
>>> fail the group leader, then the link chain can be terminated.
>>>
>>> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
>>> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
>>> group leader only, we respect IO_DRAIN by always completing group leader as
>>> the last one in the group. Meantime it is natural to post leader's CQE
>>> as the last one from application viewpoint.
>>>
>>> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
>>> support N:M dependency, such as:
>>>
>>> - group A is chained with group B together
>>> - group A has N SQEs
>>> - group B has M SQEs
>>>
>>> then M SQEs in group B depend on N SQEs in group A.
>>>
>>> N:M dependency can support some interesting use cases in efficient way:
>>>
>>> 1) read from multiple files, then write the read data into single file
>>>
>>> 2) read from single file, and write the read data into multiple files
>>>
>>> 3) write same data into multiple files, and read data from multiple files and
>>> compare if correct data is written
>>>
>>> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
>>> extend sqe->flags with io_uring context flag, such as use __pad3 for
>>> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
>>
>> Did you run the liburing tests with this? I rebased it on top of the
>> flags2 patch I just sent out, and it fails defer-taskrun and crashes
>> link_drain. Don't know if others fail too. I'll try the original one
>> too, but nothing between those two should make a difference. It passes
>> just fine with just the flags2 patch, so I'm a bit suspicious this patch
>> is the issue.
> 
> False alarm, it was my messup adding the group flag. Works just fine.
> I'm attaching the version I tested, on top of that flags2 patch.
> 
> Since we're on the topic - my original bundle patch used a bundle OP to
> define an sqe grouping, which didn't need to use an sqe flag. Any
> particular reason why you went with a flag for this one?
> 
> I do think it comes out nicer with a flag for certain things, like being
> able to link groups. Maybe that's the primary reason.

Various hickups, please just see the patches here, works now:

https://git.kernel.dk/cgit/linux/log/?h=io_uring-group

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF
  2024-10-31 13:16           ` Pavel Begunkov
@ 2024-11-01  1:04             ` Ming Lei
  0 siblings, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-11-01  1:04 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Jens Axboe, io-uring, linux-block, Uday Shankar, Akilesh Kailash

On Thu, Oct 31, 2024 at 01:16:07PM +0000, Pavel Begunkov wrote:
> On 10/30/24 02:04, Ming Lei wrote:
> > On Wed, Oct 30, 2024 at 01:25:33AM +0000, Pavel Begunkov wrote:
> > > On 10/30/24 00:45, Ming Lei wrote:
> > > > On Tue, Oct 29, 2024 at 04:47:59PM +0000, Pavel Begunkov wrote:
> > > > > On 10/25/24 13:22, Ming Lei wrote:
> > > > > ...
> > > > > > diff --git a/io_uring/rw.c b/io_uring/rw.c
> > > > > > index 4bc0d762627d..5a2025d48804 100644
> > > > > > --- a/io_uring/rw.c
> > > > > > +++ b/io_uring/rw.c
> > > > > > @@ -245,7 +245,8 @@ static int io_prep_rw_setup(struct io_kiocb *req, int ddir, bool do_import)
> > > > > >     	if (io_rw_alloc_async(req))
> > > > > >     		return -ENOMEM;
> > > > > > -	if (!do_import || io_do_buffer_select(req))
> > > > > > +	if (!do_import || io_do_buffer_select(req) ||
> > > > > > +	    io_use_leased_grp_kbuf(req))
> > > > > >     		return 0;
> > > > > >     	rw = req->async_data;
> > > > > > @@ -489,6 +490,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
> > > > > >     		}
> > > > > >     		req_set_fail(req);
> > > > > >     		req->cqe.res = res;
> > > > > > +		if (io_use_leased_grp_kbuf(req)) {
> > > > > 
> > > > > That's what I'm talking about, we're pushing more and
> > > > > into the generic paths (or patching every single hot opcode
> > > > > there is). You said it's fine for ublk the way it was, i.e.
> > > > > without tracking, so let's then pretend it's a ublk specific
> > > > > feature, kill that addition and settle at that if that's the
> > > > > way to go.
> > > > 
> > > > As I mentioned before, it isn't ublk specific, zeroing is required
> > > > because the buffer is kernel buffer, that is all. Any other approach
> > > > needs this kind of handling too. The coming fuse zc need it.
> > > > 
> > > > And it can't be done in driver side, because driver has no idea how
> > > > to consume the kernel buffer.
> > > > 
> > > > Also it is only required in case of short read/recv, and it isn't
> > > > hot path, not mention it is just one check on request flag.
> > > 
> > > I agree, it's not hot, it's a failure path, and the recv side
> > > is of medium hotness, but the main concern is that the feature
> > > is too actively leaking into other requests.
> > The point is that if you'd like to support kernel buffer. If yes, this
> > kind of change can't be avoided.
> 
> There is no guarantee with the patchset that there will be any IO done
> with that buffer, e.g. place a nop into the group, and even then you

Yes, here it depends on user. In case of ublk, the application has to be
trusted, and the situation is same with other user-emulated storage, such
as qemu.

> have offsets and length, so it's not clear what the zeroying is supposed
> to achieve.

The buffer may bee one page cache page, if it isn't initialized
completely, kernel data may be leaked to userspace via mmap.

> Either the buffer comes fully "initialised", i.e. free of
> kernel private data, or we need to track what parts of the buffer were
> used.

That is why the only workable way is to zero the remainder in
consumer of OP, imo.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf
  2024-10-31 13:35                       ` Jens Axboe
  2024-10-31 15:07                         ` Jens Axboe
@ 2024-11-01  1:39                         ` Ming Lei
  1 sibling, 0 replies; 41+ messages in thread
From: Ming Lei @ 2024-11-01  1:39 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Pavel Begunkov, io-uring, linux-block, Uday Shankar,
	Akilesh Kailash, ming.lei

On Thu, Oct 31, 2024 at 07:35:35AM -0600, Jens Axboe wrote:
> On 10/30/24 8:53 PM, Ming Lei wrote:
> > On Wed, Oct 30, 2024 at 07:20:48AM -0600, Jens Axboe wrote:
> >> On 10/29/24 10:11 PM, Ming Lei wrote:
> >>> On Wed, Oct 30, 2024 at 11:08:16AM +0800, Ming Lei wrote:
> >>>> On Tue, Oct 29, 2024 at 08:43:39PM -0600, Jens Axboe wrote:
> >>>
> >>> ...
> >>>
> >>>>> You could avoid the OP dependency with just a flag, if you really wanted
> >>>>> to. But I'm not sure it makes a lot of sense. And it's a hell of a lot
> >>>>
> >>>> Yes, IO_LINK won't work for submitting multiple IOs concurrently, extra
> >>>> syscall makes application too complicated, and IO latency is increased.
> >>>>
> >>>>> simpler than the sqe group scheme, which I'm a bit worried about as it's
> >>>>> a bit complicated in how deep it needs to go in the code. This one
> >>>>> stands alone, so I'd strongly encourage we pursue this a bit further and
> >>>>> iron out the kinks. Maybe it won't work in the end, I don't know, but it
> >>>>> seems pretty promising and it's soooo much simpler.
> >>>>
> >>>> If buffer register and lookup are always done in ->prep(), OP dependency
> >>>> may be avoided.
> >>>
> >>> Even all buffer register and lookup are done in ->prep(), OP dependency
> >>> still can't be avoided completely, such as:
> >>>
> >>> 1) two local buffers for sending to two sockets
> >>>
> >>> 2) group 1: IORING_OP_LOCAL_KBUF1 & [send(sock1), send(sock2)]  
> >>>
> >>> 3) group 2: IORING_OP_LOCAL_KBUF2 & [send(sock1), send(sock2)]
> >>>
> >>> group 1 and group 2 needs to be linked, but inside each group, the two
> >>> sends may be submitted in parallel.
> >>
> >> That is where groups of course work, in that you can submit 2 groups and
> >> have each member inside each group run independently. But I do think we
> >> need to decouple the local buffer and group concepts entirely. For the
> >> first step, getting local buffers working with zero copy would be ideal,
> >> and then just live with the fact that group 1 needs to be submitted
> >> first and group 2 once the first ones are done.
> > 
> > IMHO, it is one _kernel_ zero copy(_performance_) feature, which often
> > imply:
> > 
> > - better performance expectation
> > - no big change on existed application for using this feature
> 
> For #2, really depends on what it is. But ideally, yes, agree.
> 
> > Application developer is less interested in sort of crippled or
> > immature feature, especially need big change on existed code
> > logic(then two code paths need to maintain), with potential
> > performance regression.
> > 
> > With sqe group and REQ_F_GROUP_KBUF, application just needs lines of
> > code change for using the feature, and it is pretty easy to evaluate
> > the feature since no any extra logic change & no extra syscall/wait
> > introduced. The whole patchset has been mature enough, unfortunately
> > blocked without obvious reasons.
> 
> Let me tell you where I'm coming from. If you might recall, I originated
> the whole grouping idea. Didn't complete it, but it's essentially the
> same concept as REQ_F_GROUP_KBUF in that you have some dependents on a
> leader, and the dependents can run in parallel rather than being
> serialized by links. I'm obviously in favor of this concept, but I want
> to see it being done in such a way that it's actually something we can
> reason about and maintain. You want it for zero copy, which makes sense,
> but I also want to ensure it's a CLEAN implementation that doesn't have
> tangles in places it doesn't need to.
> 
> You seem to be very hard to convince of making ANY changes at all. In
> your mind the whole thing is done, and it's being "blocked without
> obvious reason". It's not being blocked at all, I've been diligently
> trying to work with you recently on getting this done. I'm at least as
> interested as you in getting this work done. But I want you to work with
> me a bit on some items so we can get it into a shape where I'm happy
> with it, and I can maintain it going forward.
> 
> So, please, rather than dig your heels in all the time, have an open
> mind on how we can accomplish some of these things.
> 
> 
> >> Once local buffers are done, we can look at doing the sqe grouping in a
> >> nice way. I do think it's a potentially powerful concept, but we're
> >> going to make a lot more progress on this issue if we carefully separate
> >> dependencies and get each of them done separately.
> > 
> > One fundamental difference between local buffer and REQ_F_GROUP_KBUF is
> > 
> > - local buffer has to be provided and used in ->prep()
> > - REQ_F_GROUP_KBUF needs to be provided in ->issue() instead of ->prep()
> 
> It does not - the POC certainly did it in ->prep(), but all it really
> cares about is having the ring locked. ->prep() always has that,
> ->issue() _normally_ has that, unless it ends up in an io-wq punt.
> 
> You can certainly do it in ->issue() and still have it be per-submit,
> the latter which I care about for safety reasons. This just means it has
> to be provided in the _first_ issue, and that IOSQE_ASYNC must not be
> set on the request. I think that restriction is fine, nobody should
> really be using IOSQE_ASYNC anyway.

Yes, IOSQE_ASYNC can't work, and IO_LINK can't work too.

The restriction on IO_LINK is too strong, because it needs big
application change.

> 
> I think the original POC maybe did more harm than good in that it was
> too simplistic, and you seem too focused on the limits of that. So let

I am actually trying to thinking how to code local buffer, that is why I
puts these questions first.

> me detail what it actually could look like. We have io_submit_state in
> io_ring_ctx. This is per-submit private data, it's initialized and
> flushed for each io_uring_enter(2) that submits requests.
> 
> We have a registered file and buffer table, file_table and buf_table.
> These have life times that are dependent on the ring and
> registration/unregistration. We could have a local_table. This one
> should be setup by some register command, eg reserving X slots for that.
> At the end of submit, we'd flush this table, putting nodes in there.
> Requests can look at the table in either prep or issue, and find buffer
> nodes. If a request uses one of these, it grabs a ref and hence has it
> available until it puts it at IO completion time. When a single submit
> context is done, local_table is iterated (if any entries exist) and
> existing nodes cleared and put.
> 
> That provides a similar table to buf_table, but with a lifetime of a
> submit. Just like local buf. Yes it would not be private to a single
> group, it'd be private to a submit which has potentially bigger scope,
> but that should not matter.
> 
> That should give you exactly what you need, if you use
> IORING_RSRC_KBUFFER in the local_table. But it could even be used for
> IORING_RSRC_BUFFER as well, providing buffers for a single submit cycle
> as well.
> 
> Rather than do something completely on the side with
> io_uring_kernel_buf, we can use io_rsrc_node and io_mapped_ubuf for
> this. Which goes back to my initial rant in this email - use EXISTING
> infrastructure for these things. A big part of why this isn't making
> progress is that a lot of things are done on the side rather than being
> integrated. Then you need extra io_kiocb members, where it really should
> just be using io_rsrc_node and get everything else for free. No need to
> do special checking and putting seperately, it's a resource node just
> like any other resource node we already support.
> 
> > The only common code could be one buffer abstraction for OP to use, but
> > still used differently, ->prep() vs. ->issue().
> 
> With the prep vs issue thing not being an issue, then it sounds like we

IO_LINK is another blocker for prep vs issue thing.

> fully agree that a) it should be one buffer abstraction, and b) we

Yes, can't agree more.

> already have the infrastructure for this. We just need to add
> IORING_RSRC_KBUFFER, which I already posted some POC code for.

I am open to IORING_RSRC_KBUFFER if there isn't too strong limit for
application. Disallowing IOSQE_ASYNC is fine, but not allowing IO_LINK
does cause trouble for application, and need big change on app code.

> 
> > So it is hard to call it decouple, especially REQ_F_GROUP_KBUF has been
> > simple enough, and the main change is to import it in OP code.
> > 
> > Local buffer is one smart idea, but I hope the following things may be
> > settled first:
> > 
> > 1) is it generic enough to just allow to provide local buffer during
> > ->prep()?
> > 
> > - this way actually becomes sync & nowait IO, instead AIO, and has been
> >   one strong constraint from UAPI viewpoint.
> > 
> > - Driver may need to wait until some data comes, then return & provide
> > the buffer with data, and local buffer can't cover this case
> 
> This should be moot now with the above explanation.
> 
> > 2) is it allowed to call ->uring_cmd() from io_uring_cmd_prep()? If not,
> > any idea to call into driver for leasing the kernel buffer to io_uring?
> 
> Ditto
> 
> > 3) in OP code, how to differentiate normal userspace buffer select with
> > local buffer? And how does OP know if normal buffer select or local
> > kernel buffer should be used? Some OP may want to use normal buffer
> > select instead of local buffer, others may want to use local buffer.
> 
> Yes this is a key question we need to figure out. Right now using fixed
> buffers needs to set ->buf_index, and the OP needs to know aboout it.
> let's not confuse it with buffer select, IOSQE_BUFFER_SELECT, as that's
> for provided buffers.

Indeed, here what matters is fixed buffer.

> 
> > 4) arbitrary numbers of local buffer needs to be supported, since IO
> > often comes at batch, it shouldn't be hard to support it by adding xarray
> > to submission state, what do you think of this added complexity? Without
> > supporting arbitrary number of local buffers, performance can be just
> > bad, it doesn't make sense as zc viewpoint. Meantime as number of local
> > buffer is increased, more rsrc_node & imu allocation is introduced, this
> > still may degrade perf a bit.
> 
> That's fine, we just need to reserve space for them upfront. I don't
> like the xarray idea, as:
> 
> 1) xarray does internal locking, which we don't need here
> 2) The existing io_rsrc_data table is what is being used for
>    io_rsrc_node management now. This would introduce another method for
>    that.
> 
> I do want to ensure that io_submit_state_finish() is still low overhead,
> and using an xarray would be more expensive than just doing:
> 
> if (ctx->local_table.nr)
> 	flush_nodes();
> 
> as you'd need to always setup an iterator. But this isn't really THAT
> important. The benefit of using an xarray would be that we'd get
> flexible storing of members without needing pre-registration, obviously.

The main trouble could be buffer table pre-allocation if xarray isn't
used.

> 
> > 5) io_rsrc_node becomes part of interface between io_uring and driver
> > for releasing the leased buffer, so extra data has to be
> > added to `io_rsrc_node` for driver use.
> 
> That's fine imho. The KBUFFER addition already adds the callback, we can
> add a data thing too. The kernel you based your code on has an
> io_rsrc_node that is 48 bytes in size, and my current tree has one where
> it's 32 bytes in size after the table rework. If we have to add 2x8b to
> support this, that's NOT a big deal and we just end up with a node
> that's the same size as before.
> 
> And we get rid of this odd intermediate io_uring_kernel_buf struct,
> which is WAY bigger anyway, and requires TWO allocations where the
> existing io_mapped_ubuf embeds the bvec. I'd argue two vs one allocs is
> a much bigger deal for performance reasons.

The structure is actually preallocated in ublk, so it is zero allocation
in my patchset in case of non bio merge.

> 
> As a final note, one thing I mentioned in an earlier email is that part
> of the issue here is that there are several things that need ironing
> out, and they are actually totally separate. One is the buffer side,
> which this email mostly deals with, the other one is the grouping
> concept.
> 
> For the sqe grouping, one sticking point has been using that last
> sqe->flags bit. I was thinking about this last night, and what if we got
> away from using a flag entirely? At some point io_uring needs to deal
> with this flag limitation, but it's arguably a large effort, and I'd
> greatly prefer not having to paper over it to shove in grouped SQEs.
> 
> So... what if we simply added a new OP, IORING_OP_GROUP_START, or
> something like that. Hence instead of having a designated group leader
> bit for an OP, eg:
> 
> sqe = io_uring_get_sqe(ring);
> io_uring_prep_read(sqe, ...);
> sqe->flags |= IOSQE_GROUP_BIT;
> 
> you'd do:
> 
> sqe = io_uring_get_sqe(ring);
> io_uring_prep_group_start(sqe, ...);
> sqe->flags |= IOSQE_IO_LINK;

One problem is how to map IORING_OP_GROUP_START to uring_cmd.

IOSQE_IO_LINK isn't another trouble, because it becomes not possible
to model dependency among groups.

> 
> sqe = io_uring_get_sqe(ring);
> io_uring_prep_read(sqe, ...);
> 
> which would be the equivalent transformation - the read would be the
> group leader as it's the first member of that chain. The read should set
> IOSQE_IO_LINK for as long as it has members. The members in that group
> would NOT be serialized. They would use IOSQE_IO_LINK purely to be part
> of that group, but IOSQE_IO_LINK would not cause them to be serialized.
> Hence the link just implies membership, not ordering within the group.
> 
> This removes the flag issue, with the sligth caveat that IOSQE_IO_LINK
> has a different meaning inside the group. Maybe you'd need a GROUP_END

No, for group leader IOSQE_IO_LINK works same as any other normal SQE.


Thanks, 
Ming


^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2024-11-01  1:39 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-10-25 12:22 [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Ming Lei
2024-10-25 12:22 ` [PATCH V8 1/7] io_uring: add io_link_req() helper Ming Lei
2024-10-25 12:22 ` [PATCH V8 2/7] io_uring: add io_submit_fail_link() helper Ming Lei
2024-10-25 12:22 ` [PATCH V8 3/7] io_uring: add helper of io_req_commit_cqe() Ming Lei
2024-10-25 12:22 ` [PATCH V8 4/7] io_uring: support SQE group Ming Lei
2024-10-29  0:12   ` Jens Axboe
2024-10-29  1:50     ` Ming Lei
2024-10-29 16:38       ` Pavel Begunkov
2024-10-31 21:24   ` Jens Axboe
2024-10-31 21:39     ` Jens Axboe
2024-11-01  0:00       ` Jens Axboe
2024-10-25 12:22 ` [PATCH V8 5/7] io_uring: support leased group buffer with REQ_F_GROUP_KBUF Ming Lei
2024-10-29 16:47   ` Pavel Begunkov
2024-10-30  0:45     ` Ming Lei
2024-10-30  1:25       ` Pavel Begunkov
2024-10-30  2:04         ` Ming Lei
2024-10-31 13:16           ` Pavel Begunkov
2024-11-01  1:04             ` Ming Lei
2024-10-25 12:22 ` [PATCH V8 6/7] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
2024-10-25 12:22 ` [PATCH V8 7/7] ublk: support leasing io " Ming Lei
2024-10-29 17:01 ` [PATCH V8 0/8] io_uring: support sqe group and leased group kbuf Pavel Begunkov
2024-10-29 17:04   ` Jens Axboe
2024-10-29 19:18     ` Jens Axboe
2024-10-29 20:06       ` Jens Axboe
2024-10-29 21:26         ` Jens Axboe
2024-10-30  2:03           ` Ming Lei
2024-10-30  2:43             ` Jens Axboe
2024-10-30  3:08               ` Ming Lei
2024-10-30  4:11                 ` Ming Lei
2024-10-30 13:20                   ` Jens Axboe
2024-10-31  2:53                     ` Ming Lei
2024-10-31 13:35                       ` Jens Axboe
2024-10-31 15:07                         ` Jens Axboe
2024-11-01  1:39                         ` Ming Lei
2024-10-31 13:42                       ` Pavel Begunkov
2024-10-30 13:18                 ` Jens Axboe
2024-10-31 13:25               ` Pavel Begunkov
2024-10-31 14:29                 ` Jens Axboe
2024-10-31 15:25                   ` Pavel Begunkov
2024-10-31 15:42                     ` Jens Axboe
2024-10-31 16:29                       ` Pavel Begunkov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox