* [PATCH V10 01/12] io_uring/rsrc: pass 'struct io_ring_ctx' reference to rsrc helpers
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
@ 2024-11-07 11:01 ` Ming Lei
2024-11-07 11:01 ` [PATCH V10 02/12] io_uring/rsrc: remove '->ctx_ptr' of 'struct io_rsrc_node' Ming Lei
` (11 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
An `io_rsrc_node` instance is never shared across io_uring contexts, and the
ctx a node is allocated from is always the same as its user's ctx, so it is
safe to pass the user's 'ctx' reference to the rsrc helpers. Even in
io_clone_buffers(), the `io_rsrc_node` instance is actually allocated for the
destination io_uring context.
Then io_rsrc_node_ctx() can be removed, and the 8-byte `ctx` pointer can be
dropped from `io_rsrc_node` in the following patch.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/filetable.c | 13 +++++++------
io_uring/filetable.h | 4 ++--
io_uring/rsrc.c | 24 +++++++++++-------------
io_uring/rsrc.h | 22 +++++++++-------------
io_uring/splice.c | 2 +-
5 files changed, 30 insertions(+), 35 deletions(-)
diff --git a/io_uring/filetable.c b/io_uring/filetable.c
index 45f005f5db42..a21660e3145a 100644
--- a/io_uring/filetable.c
+++ b/io_uring/filetable.c
@@ -36,20 +36,21 @@ static int io_file_bitmap_get(struct io_ring_ctx *ctx)
return -ENFILE;
}
-bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
+bool io_alloc_file_tables(struct io_ring_ctx *ctx, struct io_file_table *table,
+ unsigned nr_files)
{
if (io_rsrc_data_alloc(&table->data, nr_files))
return false;
table->bitmap = bitmap_zalloc(nr_files, GFP_KERNEL_ACCOUNT);
if (table->bitmap)
return true;
- io_rsrc_data_free(&table->data);
+ io_rsrc_data_free(ctx, &table->data);
return false;
}
-void io_free_file_tables(struct io_file_table *table)
+void io_free_file_tables(struct io_ring_ctx *ctx, struct io_file_table *table)
{
- io_rsrc_data_free(&table->data);
+ io_rsrc_data_free(ctx, &table->data);
bitmap_free(table->bitmap);
table->bitmap = NULL;
}
@@ -71,7 +72,7 @@ static int io_install_fixed_file(struct io_ring_ctx *ctx, struct file *file,
if (!node)
return -ENOMEM;
- if (!io_reset_rsrc_node(&ctx->file_table.data, slot_index))
+ if (!io_reset_rsrc_node(ctx, &ctx->file_table.data, slot_index))
io_file_bitmap_set(&ctx->file_table, slot_index);
ctx->file_table.data.nodes[slot_index] = node;
@@ -130,7 +131,7 @@ int io_fixed_fd_remove(struct io_ring_ctx *ctx, unsigned int offset)
node = io_rsrc_node_lookup(&ctx->file_table.data, offset);
if (!node)
return -EBADF;
- io_reset_rsrc_node(&ctx->file_table.data, offset);
+ io_reset_rsrc_node(ctx, &ctx->file_table.data, offset);
io_file_bitmap_clear(&ctx->file_table, offset);
return 0;
}
diff --git a/io_uring/filetable.h b/io_uring/filetable.h
index bfacadb8d089..7717ea9efd0e 100644
--- a/io_uring/filetable.h
+++ b/io_uring/filetable.h
@@ -6,8 +6,8 @@
#include <linux/io_uring_types.h>
#include "rsrc.h"
-bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files);
-void io_free_file_tables(struct io_file_table *table);
+bool io_alloc_file_tables(struct io_ring_ctx *ctx, struct io_file_table *table, unsigned nr_files);
+void io_free_file_tables(struct io_ring_ctx *ctx, struct io_file_table *table);
int io_fixed_fd_install(struct io_kiocb *req, unsigned int issue_flags,
struct file *file, unsigned int file_slot);
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 2fb1791d7255..d7db36a2c66e 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -130,13 +130,13 @@ struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type)
return node;
}
-__cold void io_rsrc_data_free(struct io_rsrc_data *data)
+__cold void io_rsrc_data_free(struct io_ring_ctx *ctx, struct io_rsrc_data *data)
{
if (!data->nr)
return;
while (data->nr--) {
if (data->nodes[data->nr])
- io_put_rsrc_node(data->nodes[data->nr]);
+ io_put_rsrc_node(ctx, data->nodes[data->nr]);
}
kvfree(data->nodes);
data->nodes = NULL;
@@ -184,7 +184,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
continue;
i = up->offset + done;
- if (io_reset_rsrc_node(&ctx->file_table.data, i))
+ if (io_reset_rsrc_node(ctx, &ctx->file_table.data, i))
io_file_bitmap_clear(&ctx->file_table, i);
if (fd != -1) {
@@ -266,7 +266,7 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
node->tag = tag;
}
i = array_index_nospec(up->offset + done, ctx->buf_table.nr);
- io_reset_rsrc_node(&ctx->buf_table, i);
+ io_reset_rsrc_node(ctx, &ctx->buf_table, i);
ctx->buf_table.nodes[i] = node;
if (ctx->compat)
user_data += sizeof(struct compat_iovec);
@@ -442,10 +442,8 @@ int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
return IOU_OK;
}
-void io_free_rsrc_node(struct io_rsrc_node *node)
+void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
{
- struct io_ring_ctx *ctx = io_rsrc_node_ctx(node);
-
lockdep_assert_held(&ctx->uring_lock);
if (node->tag)
@@ -473,7 +471,7 @@ int io_sqe_files_unregister(struct io_ring_ctx *ctx)
if (!ctx->file_table.data.nr)
return -ENXIO;
- io_free_file_tables(&ctx->file_table);
+ io_free_file_tables(ctx, &ctx->file_table);
io_file_table_set_alloc_range(ctx, 0, 0);
return 0;
}
@@ -494,7 +492,7 @@ int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
return -EMFILE;
if (nr_args > rlimit(RLIMIT_NOFILE))
return -EMFILE;
- if (!io_alloc_file_tables(&ctx->file_table, nr_args))
+ if (!io_alloc_file_tables(ctx, &ctx->file_table, nr_args))
return -ENOMEM;
for (i = 0; i < nr_args; i++) {
@@ -551,7 +549,7 @@ int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
{
if (!ctx->buf_table.nr)
return -ENXIO;
- io_rsrc_data_free(&ctx->buf_table);
+ io_rsrc_data_free(ctx, &ctx->buf_table);
return 0;
}
@@ -788,7 +786,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
if (ret) {
kvfree(imu);
if (node)
- io_put_rsrc_node(node);
+ io_put_rsrc_node(ctx, node);
node = ERR_PTR(ret);
}
kvfree(pages);
@@ -1018,7 +1016,7 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
* old and new nodes at this point.
*/
if (arg->flags & IORING_REGISTER_DST_REPLACE)
- io_rsrc_data_free(&ctx->buf_table);
+ io_rsrc_data_free(ctx, &ctx->buf_table);
/*
* ctx->buf_table should be empty now - either the contents are being
@@ -1042,7 +1040,7 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
kfree(data.nodes[i]);
}
out_unlock:
- io_rsrc_data_free(&data);
+ io_rsrc_data_free(ctx, &data);
mutex_unlock(&src_ctx->uring_lock);
mutex_lock(&ctx->uring_lock);
return ret;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index bc3a863b14bb..c9057f7a06f5 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -45,8 +45,8 @@ struct io_imu_folio_data {
};
struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type);
-void io_free_rsrc_node(struct io_rsrc_node *node);
-void io_rsrc_data_free(struct io_rsrc_data *data);
+void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node);
+void io_rsrc_data_free(struct io_ring_ctx *ctx, struct io_rsrc_data *data);
int io_rsrc_data_alloc(struct io_rsrc_data *data, unsigned nr);
int io_import_fixed(int ddir, struct iov_iter *iter,
@@ -76,19 +76,20 @@ static inline struct io_rsrc_node *io_rsrc_node_lookup(struct io_rsrc_data *data
return NULL;
}
-static inline void io_put_rsrc_node(struct io_rsrc_node *node)
+static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
{
if (node && !--node->refs)
- io_free_rsrc_node(node);
+ io_free_rsrc_node(ctx, node);
}
-static inline bool io_reset_rsrc_node(struct io_rsrc_data *data, int index)
+static inline bool io_reset_rsrc_node(struct io_ring_ctx *ctx,
+ struct io_rsrc_data *data, int index)
{
struct io_rsrc_node *node = data->nodes[index];
if (!node)
return false;
- io_put_rsrc_node(node);
+ io_put_rsrc_node(ctx, node);
data->nodes[index] = NULL;
return true;
}
@@ -96,20 +97,15 @@ static inline bool io_reset_rsrc_node(struct io_rsrc_data *data, int index)
static inline void io_req_put_rsrc_nodes(struct io_kiocb *req)
{
if (req->file_node) {
- io_put_rsrc_node(req->file_node);
+ io_put_rsrc_node(req->ctx, req->file_node);
req->file_node = NULL;
}
if (req->flags & REQ_F_BUF_NODE) {
- io_put_rsrc_node(req->buf_node);
+ io_put_rsrc_node(req->ctx, req->buf_node);
req->buf_node = NULL;
}
}
-static inline struct io_ring_ctx *io_rsrc_node_ctx(struct io_rsrc_node *node)
-{
- return (struct io_ring_ctx *) (node->ctx_ptr & ~IORING_RSRC_TYPE_MASK);
-}
-
static inline int io_rsrc_node_type(struct io_rsrc_node *node)
{
return node->ctx_ptr & IORING_RSRC_TYPE_MASK;
diff --git a/io_uring/splice.c b/io_uring/splice.c
index e8ed15f4ea1a..5b84f1630611 100644
--- a/io_uring/splice.c
+++ b/io_uring/splice.c
@@ -51,7 +51,7 @@ void io_splice_cleanup(struct io_kiocb *req)
{
struct io_splice *sp = io_kiocb_to_cmd(req, struct io_splice);
- io_put_rsrc_node(sp->rsrc_node);
+ io_put_rsrc_node(req->ctx, sp->rsrc_node);
}
static struct file *io_splice_get_file(struct io_kiocb *req,
--
2.47.0
* [PATCH V10 02/12] io_uring/rsrc: remove '->ctx_ptr' of 'struct io_rsrc_node'
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Remove '->ctx_ptr' from 'struct io_rsrc_node', add a plain 'type' field in
its place, and remove the now-unneeded io_rsrc_node_type() helper.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/rsrc.c | 4 ++--
io_uring/rsrc.h | 9 +--------
2 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index d7db36a2c66e..adaae8630932 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -124,7 +124,7 @@ struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type)
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (node) {
- node->ctx_ptr = (unsigned long) ctx | type;
+ node->type = type;
node->refs = 1;
}
return node;
@@ -449,7 +449,7 @@ void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
if (node->tag)
io_post_aux_cqe(ctx, node->tag, 0, 0);
- switch (io_rsrc_node_type(node)) {
+ switch (node->type) {
case IORING_RSRC_FILE:
if (io_slot_file(node))
fput(io_slot_file(node));
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index c9057f7a06f5..c8a64a9ed5b9 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -11,12 +11,10 @@
enum {
IORING_RSRC_FILE = 0,
IORING_RSRC_BUFFER = 1,
-
- IORING_RSRC_TYPE_MASK = 0x3UL,
};
struct io_rsrc_node {
- unsigned long ctx_ptr;
+ unsigned char type;
int refs;
u64 tag;
@@ -106,11 +104,6 @@ static inline void io_req_put_rsrc_nodes(struct io_kiocb *req)
}
}
-static inline int io_rsrc_node_type(struct io_rsrc_node *node)
-{
- return node->ctx_ptr & IORING_RSRC_TYPE_MASK;
-}
-
static inline void io_req_assign_rsrc_node(struct io_rsrc_node **dst_node,
struct io_rsrc_node *node)
{
--
2.47.0
* [PATCH V10 03/12] io_uring/rsrc: add & apply io_req_assign_buf_node()
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
The following pattern appears more and more often:
+ io_req_assign_rsrc_node(&req->buf_node, node);
+ req->flags |= REQ_F_BUF_NODE;
so make it a helper, which is less fragile than open-coding the two steps;
for example, setting the BUF_NODE flag is currently missed in
io_uring_cmd_prep().
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/net.c | 3 +--
io_uring/nop.c | 3 +--
io_uring/rsrc.h | 7 +++++++
io_uring/rw.c | 3 +--
io_uring/uring_cmd.c | 2 +-
5 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 2ccc2b409431..df1f7dc6f1c8 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1348,8 +1348,7 @@ static int io_send_zc_import(struct io_kiocb *req, unsigned int issue_flags)
io_ring_submit_lock(ctx, issue_flags);
node = io_rsrc_node_lookup(&ctx->buf_table, sr->buf_index);
if (node) {
- io_req_assign_rsrc_node(&sr->notif->buf_node, node);
- sr->notif->flags |= REQ_F_BUF_NODE;
+ io_req_assign_buf_node(sr->notif, node);
ret = 0;
}
io_ring_submit_unlock(ctx, issue_flags);
diff --git a/io_uring/nop.c b/io_uring/nop.c
index bc22bcc739f3..6d470d4251ee 100644
--- a/io_uring/nop.c
+++ b/io_uring/nop.c
@@ -67,8 +67,7 @@ int io_nop(struct io_kiocb *req, unsigned int issue_flags)
io_ring_submit_lock(ctx, issue_flags);
node = io_rsrc_node_lookup(&ctx->buf_table, nop->buffer);
if (node) {
- io_req_assign_rsrc_node(&req->buf_node, node);
- req->flags |= REQ_F_BUF_NODE;
+ io_req_assign_buf_node(req, node);
ret = 0;
}
io_ring_submit_unlock(ctx, issue_flags);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index c8a64a9ed5b9..7a4668deaa1a 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -111,6 +111,13 @@ static inline void io_req_assign_rsrc_node(struct io_rsrc_node **dst_node,
*dst_node = node;
}
+static inline void io_req_assign_buf_node(struct io_kiocb *req,
+ struct io_rsrc_node *node)
+{
+ io_req_assign_rsrc_node(&req->buf_node, node);
+ req->flags |= REQ_F_BUF_NODE;
+}
+
int io_files_update(struct io_kiocb *req, unsigned int issue_flags);
int io_files_update_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index e368b9afde03..b62cdb5fc936 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -341,8 +341,7 @@ static int io_prep_rw_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe
node = io_rsrc_node_lookup(&ctx->buf_table, req->buf_index);
if (!node)
return -EFAULT;
- io_req_assign_rsrc_node(&req->buf_node, node);
- req->flags |= REQ_F_BUF_NODE;
+ io_req_assign_buf_node(req, node);
io = req->async_data;
ret = io_import_fixed(ddir, &io->iter, node->buf, rw->addr, rw->len);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 40b8b777ba12..b62965f58f30 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -219,7 +219,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
* being called. This prevents destruction of the mapped buffer
* we'll need at actual import time.
*/
- io_req_assign_rsrc_node(&req->buf_node, node);
+ io_req_assign_buf_node(req, node);
}
ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
--
2.47.0
* [PATCH V10 04/12] io_uring/rsrc: prepare for supporting external 'io_rsrc_node'
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Now an 'io_rsrc_node' describes just one buffer, and that buffer may be
provided by another subsystem, such as the coming group kernel buffer.
So add the IORING_RSRC_F_NEED_FREE flag, set only on nodes allocated by
io_uring itself, so that an external 'io_rsrc_node' won't be freed here.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/rsrc.c | 12 +++++++-----
io_uring/rsrc.h | 15 ++++++++++++++-
2 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index adaae8630932..db5d917081b1 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -123,10 +123,9 @@ struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type)
struct io_rsrc_node *node;
node = kzalloc(sizeof(*node), GFP_KERNEL);
- if (node) {
- node->type = type;
- node->refs = 1;
- }
+ if (node)
+ io_rsrc_node_init(node, type, IORING_RSRC_F_NEED_FREE);
+
return node;
}
@@ -444,6 +443,8 @@ int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
{
+ bool need_free = node->flags & IORING_RSRC_F_NEED_FREE;
+
lockdep_assert_held(&ctx->uring_lock);
if (node->tag)
@@ -463,7 +464,8 @@ void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
break;
}
- kfree(node);
+ if (need_free)
+ kfree(node);
}
int io_sqe_files_unregister(struct io_ring_ctx *ctx)
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 7a4668deaa1a..582a69adfdc9 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -10,11 +10,15 @@
enum {
IORING_RSRC_FILE = 0,
- IORING_RSRC_BUFFER = 1,
+ IORING_RSRC_BUFFER,
+ __IORING_RSRC_LAST_TYPE,
+
+ IORING_RSRC_F_NEED_FREE = 1 << 0,
};
struct io_rsrc_node {
unsigned char type;
+ unsigned char flags;
int refs;
u64 tag;
@@ -66,6 +70,15 @@ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
unsigned int size, unsigned int type);
+static inline void io_rsrc_node_init(struct io_rsrc_node *node, int type,
+ unsigned char flags)
+{
+ WARN_ON_ONCE(type >= __IORING_RSRC_LAST_TYPE);
+
+ node->type = type;
+ node->refs = 1;
+ node->flags = flags;
+}
static inline struct io_rsrc_node *io_rsrc_node_lookup(struct io_rsrc_data *data,
int index)
{
--
2.47.0
* [PATCH V10 05/12] io_uring: rename io_mapped_ubuf as io_mapped_buf
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Rename io_mapped_ubuf as io_mapped_buf so that the same structure can be
used for describing kernel buffers as well.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/fdinfo.c | 2 +-
io_uring/rsrc.c | 10 +++++-----
io_uring/rsrc.h | 6 +++---
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index b214e5a407b5..9ca95f877312 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -218,7 +218,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
}
seq_printf(m, "UserBufs:\t%u\n", ctx->buf_table.nr);
for (i = 0; has_lock && i < ctx->buf_table.nr; i++) {
- struct io_mapped_ubuf *buf = NULL;
+ struct io_mapped_buf *buf = NULL;
if (ctx->buf_table.nodes[i])
buf = ctx->buf_table.nodes[i]->buf;
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index db5d917081b1..a4a553bbbbfa 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -106,7 +106,7 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
unsigned int i;
if (node->buf) {
- struct io_mapped_ubuf *imu = node->buf;
+ struct io_mapped_buf *imu = node->buf;
if (!refcount_dec_and_test(&imu->refs))
return;
@@ -580,7 +580,7 @@ static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
/* check previously registered pages */
for (i = 0; i < ctx->buf_table.nr; i++) {
struct io_rsrc_node *node = ctx->buf_table.nodes[i];
- struct io_mapped_ubuf *imu;
+ struct io_mapped_buf *imu;
if (!node)
continue;
@@ -597,7 +597,7 @@ static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
}
static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
- int nr_pages, struct io_mapped_ubuf *imu,
+ int nr_pages, struct io_mapped_buf *imu,
struct page **last_hpage)
{
int i, ret;
@@ -724,7 +724,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
struct iovec *iov,
struct page **last_hpage)
{
- struct io_mapped_ubuf *imu = NULL;
+ struct io_mapped_buf *imu = NULL;
struct page **pages = NULL;
struct io_rsrc_node *node;
unsigned long off;
@@ -866,7 +866,7 @@ int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
}
int io_import_fixed(int ddir, struct iov_iter *iter,
- struct io_mapped_ubuf *imu,
+ struct io_mapped_buf *imu,
u64 buf_addr, size_t len)
{
u64 buf_end;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 582a69adfdc9..0867dc304f4f 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -24,11 +24,11 @@ struct io_rsrc_node {
u64 tag;
union {
unsigned long file_ptr;
- struct io_mapped_ubuf *buf;
+ struct io_mapped_buf *buf;
};
};
-struct io_mapped_ubuf {
+struct io_mapped_buf {
u64 ubuf;
unsigned int len;
unsigned int nr_bvecs;
@@ -52,7 +52,7 @@ void io_rsrc_data_free(struct io_ring_ctx *ctx, struct io_rsrc_data *data);
int io_rsrc_data_alloc(struct io_rsrc_data *data, unsigned nr);
int io_import_fixed(int ddir, struct iov_iter *iter,
- struct io_mapped_ubuf *imu,
+ struct io_mapped_buf *imu,
u64 buf_addr, size_t len);
int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
--
2.47.0
* [PATCH V10 06/12] io_uring: rename io_mapped_buf->ubuf as io_mapped_buf->addr
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
->ubuf of `io_mapped_buf` stores the start address of the userspace fixed
buffer. `io_mapped_buf` will be extended to cover kernel buffers, so rename
->ubuf to the more generic ->addr.
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/fdinfo.c | 2 +-
io_uring/rsrc.c | 6 +++---
io_uring/rsrc.h | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index 9ca95f877312..1fd05e78ce15 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -223,7 +223,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
if (ctx->buf_table.nodes[i])
buf = ctx->buf_table.nodes[i]->buf;
if (buf)
- seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, buf->len);
+ seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->addr, buf->len);
else
seq_printf(m, "%5u: <none>\n", i);
}
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index a4a553bbbbfa..f57c4d295f09 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -765,7 +765,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
size = iov->iov_len;
/* store original address for later verification */
- imu->ubuf = (unsigned long) iov->iov_base;
+ imu->addr = (unsigned long) iov->iov_base;
imu->len = iov->iov_len;
imu->nr_bvecs = nr_pages;
imu->folio_shift = PAGE_SHIFT;
@@ -877,14 +877,14 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
return -EFAULT;
/* not inside the mapped region */
- if (unlikely(buf_addr < imu->ubuf || buf_end > (imu->ubuf + imu->len)))
+ if (unlikely(buf_addr < imu->addr || buf_end > (imu->addr + imu->len)))
return -EFAULT;
/*
* Might not be a start of buffer, set size appropriately
* and advance us to the beginning.
*/
- offset = buf_addr - imu->ubuf;
+ offset = buf_addr - imu->addr;
iov_iter_bvec(iter, ddir, imu->bvec, imu->nr_bvecs, offset + len);
if (offset) {
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 0867dc304f4f..c8a4db4721ca 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -29,7 +29,7 @@ struct io_rsrc_node {
};
struct io_mapped_buf {
- u64 ubuf;
+ u64 addr;
unsigned int len;
unsigned int nr_bvecs;
unsigned int folio_shift;
--
2.47.0
* [PATCH V10 07/12] io_uring: shrink io_mapped_buf
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
`struct io_mapped_buf` will be extended to cover kernel buffers, which may
sit in the fast IO path, and the structure needs to be allocated per-IO.
So shrink sizeof(struct io_mapped_buf) in the following ways:
- folio_shift is always < 64, so 6 bits are enough to hold it; the remaining
bits can be used for the coming kernel buffer support
- define `acct_pages` as 'unsigned int', which is large enough for
accounting pages in the buffer
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/rsrc.c | 2 ++
io_uring/rsrc.h | 6 +++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index f57c4d295f09..99ff2797e6ec 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -685,6 +685,8 @@ static bool io_try_coalesce_buffer(struct page ***pages, int *nr_pages,
return false;
data->folio_shift = folio_shift(folio);
+ WARN_ON_ONCE(data->folio_shift >= 64);
+
/*
* Check if pages are contiguous inside a folio, and all folios have
* the same page count except for the head and tail.
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index c8a4db4721ca..bf0824b4beb6 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -32,9 +32,9 @@ struct io_mapped_buf {
u64 addr;
unsigned int len;
unsigned int nr_bvecs;
- unsigned int folio_shift;
refcount_t refs;
- unsigned long acct_pages;
+ unsigned int acct_pages;
+ unsigned int folio_shift:6;
struct bio_vec bvec[] __counted_by(nr_bvecs);
};
@@ -43,7 +43,7 @@ struct io_imu_folio_data {
unsigned int nr_pages_head;
/* For non-head/tail folios, has to be fully included */
unsigned int nr_pages_mid;
- unsigned int folio_shift;
+ unsigned char folio_shift;
};
struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type);
--
2.47.0
* [PATCH V10 08/12] io_uring: reuse io_mapped_buf for kernel buffer
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Prepare for supporting kernel buffers in the io group case, in which the
group leader leases a kernel buffer to io_uring, to be consumed by io_uring
OPs.
So reuse io_mapped_buf for the group kernel buffer; unfortunately
io_import_fixed() can't be reused, since a userspace fixed buffer is
virt-contiguous while a kernel buffer need not be.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 19 +++++++++++++++++++
io_uring/kbuf.c | 34 ++++++++++++++++++++++++++++++++++
io_uring/kbuf.h | 3 +++
io_uring/rsrc.c | 1 +
io_uring/rsrc.h | 10 ----------
5 files changed, 57 insertions(+), 10 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index d060ce5e6145..03abaeef4a67 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -2,6 +2,7 @@
#define IO_URING_TYPES_H
#include <linux/blkdev.h>
+#include <linux/bvec.h>
#include <linux/hashtable.h>
#include <linux/task_work.h>
#include <linux/bitmap.h>
@@ -39,6 +40,24 @@ enum io_uring_cmd_flags {
IO_URING_F_COMPAT = (1 << 12),
};
+struct io_mapped_buf {
+ u64 addr;
+ unsigned int len;
+ unsigned int nr_bvecs;
+ refcount_t refs;
+ union {
+ /* for userspace buffer only */
+ unsigned int acct_pages;
+ /* offset in the 1st bvec, for kbuf only */
+ unsigned int offset;
+ };
+ const struct bio_vec *pbvec; /* pbvec is only for kbuf */
+ unsigned int folio_shift:6;
+ unsigned int dir:1; /* ITER_DEST or ITER_SOURCE */
+ unsigned int kbuf:1; /* kernel buffer or not */
+ struct bio_vec bvec[] __counted_by(nr_bvecs);
+};
+
struct io_wq_work_node {
struct io_wq_work_node *next;
};
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index d407576ddfb7..c4a776860cb4 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -838,3 +838,37 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
io_put_bl(ctx, bl);
return ret;
}
+
+/*
+ * kernel buffer is built over generic bvec, and can't be always
+ * virt-contiguous, which is different with userspace fixed buffer,
+ * so we can't reuse io_import_fixed() here
+ *
+ * Also kernel buffer lifetime is bound with request, and we needn't
+ * to use rsrc_node to track its lifetime
+ */
+int io_import_kbuf(int ddir, struct iov_iter *iter,
+ const struct io_mapped_buf *kbuf,
+ u64 buf_off, size_t len)
+{
+ unsigned long offset = kbuf->offset;
+
+ WARN_ON_ONCE(!kbuf->kbuf);
+
+ if (ddir != kbuf->dir)
+ return -EINVAL;
+
+ if (unlikely(buf_off > kbuf->len))
+ return -EFAULT;
+
+ if (unlikely(len > kbuf->len - buf_off))
+ return -EFAULT;
+
+ offset += buf_off;
+ iov_iter_bvec(iter, ddir, kbuf->pbvec, kbuf->nr_bvecs, offset + len);
+
+ if (offset)
+ iov_iter_advance(iter, offset);
+
+ return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 36aadfe5ac00..04ccd52dd0ad 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -88,6 +88,9 @@ void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl);
struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
unsigned long bgid);
int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
+int io_import_kbuf(int ddir, struct iov_iter *iter,
+ const struct io_mapped_buf *kbuf,
+ u64 buf_off, size_t len);
static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
{
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 99ff2797e6ec..b0b60ae0456a 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -771,6 +771,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
imu->len = iov->iov_len;
imu->nr_bvecs = nr_pages;
imu->folio_shift = PAGE_SHIFT;
+ imu->kbuf = 0;
if (coalesced)
imu->folio_shift = data.folio_shift;
refcount_set(&imu->refs, 1);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index bf0824b4beb6..3bc3a484fbba 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -28,16 +28,6 @@ struct io_rsrc_node {
};
};
-struct io_mapped_buf {
- u64 addr;
- unsigned int len;
- unsigned int nr_bvecs;
- refcount_t refs;
- unsigned int acct_pages;
- unsigned int folio_shift:6;
- struct bio_vec bvec[] __counted_by(nr_bvecs);
-};
-
struct io_imu_folio_data {
/* Head folio can be partially included in the fixed buf */
unsigned int nr_pages_head;
--
2.47.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
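The two -EFAULT checks in io_import_kbuf() above are written so the offset/length arithmetic cannot wrap. A minimal user-space sketch of the same validation (illustrative names, not kernel API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Model of the range validation in io_import_kbuf(): a request may
 * consume any sub-range [buf_off, buf_off + len) of the leased kernel
 * buffer, but must never step past kbuf_len.  The second test is
 * written as "len > kbuf_len - buf_off" rather than
 * "buf_off + len > kbuf_len" so the addition cannot overflow.
 */
static int check_kbuf_range(uint64_t buf_off, size_t len, size_t kbuf_len)
{
	if (buf_off > kbuf_len)
		return -1;		/* -EFAULT in the kernel code */
	if (len > kbuf_len - buf_off)
		return -1;		/* -EFAULT */
	return 0;
}
```

Note that a zero-length import at exactly buf_off == kbuf_len is still accepted, matching the kernel checks.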
* [PATCH V10 09/12] io_uring: add callback to 'io_mapped_buffer' for giving back kernel buffer
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
` (7 preceding siblings ...)
2024-11-07 11:01 ` [PATCH V10 08/12] io_uring: reuse io_mapped_buf for kernel buffer Ming Lei
@ 2024-11-07 11:01 ` Ming Lei
2024-11-07 11:01 ` [PATCH V10 10/12] io_uring: support leased group buffer with REQ_F_GROUP_BUF Ming Lei
` (3 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Add a callback to 'io_mapped_buf' for giving back the kernel buffer
after it has been used.
Meanwhile, move 'io_rsrc_node' into the public header, since it will
become part of the kernel buffer API.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 20 +++++++++++++++++++-
io_uring/rsrc.c | 14 +++++++++++++-
io_uring/rsrc.h | 13 +------------
3 files changed, 33 insertions(+), 14 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 03abaeef4a67..7de0d4c0ed6b 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -40,8 +40,26 @@ enum io_uring_cmd_flags {
IO_URING_F_COMPAT = (1 << 12),
};
+struct io_rsrc_node {
+ unsigned char type;
+ unsigned char flags;
+ int refs;
+
+ u64 tag;
+ union {
+ unsigned long file_ptr;
+ struct io_mapped_buf *buf;
+ };
+};
+
+typedef void (io_uring_kbuf_ack_t) (struct io_rsrc_node *);
+
struct io_mapped_buf {
- u64 addr;
+ /* 'addr' is always 0 for kernel buffer */
+ union {
+ u64 addr;
+ io_uring_kbuf_ack_t *kbuf_ack;
+ };
unsigned int len;
unsigned int nr_bvecs;
refcount_t refs;
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index b0b60ae0456a..327bc1a83e4b 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -441,6 +441,18 @@ int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
return IOU_OK;
}
+static void __io_free_buf_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
+{
+ struct io_mapped_buf *buf = node->buf;
+
+ if (node->flags & IORING_RSRC_F_BUF_KERNEL) {
+ if (buf->kbuf_ack)
+ buf->kbuf_ack(node);
+ } else {
+ io_buffer_unmap(ctx, node);
+ }
+}
+
void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
{
bool need_free = node->flags & IORING_RSRC_F_NEED_FREE;
@@ -457,7 +469,7 @@ void io_free_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
break;
case IORING_RSRC_BUFFER:
if (node->buf)
- io_buffer_unmap(ctx, node);
+ __io_free_buf_node(ctx, node);
break;
default:
WARN_ON_ONCE(1);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 3bc3a484fbba..f45a26c3b79d 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -14,18 +14,7 @@ enum {
__IORING_RSRC_LAST_TYPE,
IORING_RSRC_F_NEED_FREE = 1 << 0,
-};
-
-struct io_rsrc_node {
- unsigned char type;
- unsigned char flags;
- int refs;
-
- u64 tag;
- union {
- unsigned long file_ptr;
- struct io_mapped_buf *buf;
- };
+ IORING_RSRC_F_BUF_KERNEL = 1 << 1,
};
struct io_imu_folio_data {
--
2.47.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
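The union added above overlays the give-back callback with 'addr', using IORING_RSRC_F_BUF_KERNEL on the node as the discriminator. A self-contained sketch of that pattern with simplified types (not the real kernel structs):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define RSRC_F_BUF_KERNEL (1 << 1)	/* mirrors IORING_RSRC_F_BUF_KERNEL */

struct node;
typedef void (kbuf_ack_t)(struct node *);

struct mapped_buf {
	/*
	 * 'addr' is only meaningful for user buffers; kernel buffers
	 * reuse the same 8 bytes for the give-back callback.
	 */
	union {
		uint64_t addr;
		kbuf_ack_t *kbuf_ack;
	};
};

struct node {
	unsigned char flags;
	struct mapped_buf *buf;
	int acked;			/* test instrumentation only */
};

/*
 * Mirrors __io_free_buf_node(): kernel buffers are handed back via the
 * callback; user buffers would be unmapped instead (elided here).
 */
static void free_buf_node(struct node *node)
{
	if (node->flags & RSRC_F_BUF_KERNEL) {
		if (node->buf->kbuf_ack)
			node->buf->kbuf_ack(node);
	}
}

static void test_ack(struct node *node)
{
	node->acked = 1;
}
```

The flag, not the union contents, decides which member is live, so a kernel buffer with a NULL callback is simply skipped.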
* [PATCH V10 10/12] io_uring: support leased group buffer with REQ_F_GROUP_BUF
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
` (8 preceding siblings ...)
2024-11-07 11:01 ` [PATCH V10 09/12] io_uring: add callback to 'io_mapped_buffer' for giving back " Ming Lei
@ 2024-11-07 11:01 ` Ming Lei
2024-11-07 11:01 ` [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
` (2 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
An SQE group introduces a new mechanism for sharing a resource among a
group of requests: all member requests can efficiently consume, in
parallel, the resource leased by the group leader.
This patch uses the added SQE group to lease a kernel buffer from the
group leader (driver) to the members (io_uring) in the SQE group:
- this kernel buffer is owned by the kernel device (driver), and has a very
short lifetime; for example, it is often aligned with the block IO lifetime
- the group leader leases the kernel buffer from the driver to the member
requests of the io_uring subsystem
- member requests use the leased buffer to do FS or network IO; the
IOSQE_IO_DRAIN bit isn't used for group member IO, so it is remapped to
GROUP_BUF, and the actual use becomes very similar to buffer select
- this kernel buffer is returned after all member requests are
completed
io_uring's builtin provide/register buffer isn't a good match for this
use case:
- complicated dependency on add/remove buffer
the buffer has to be added to/removed from a global table by add/remove OPs,
and all consumer OPs have to sync with the add/remove OPs; either
consumer OPs have to be issued one by one with IO_LINK, or two extra
syscalls are added for each buffer lease & consumption; this slows down
ublk io handling, and may lose the zero copy value
- the application becomes more complicated
- the application may panic with the kernel buffer left in io_uring, which
complicates io_uring shutdown handling since returning the buffer
needs cooperation with the buffer owner
- a big change would be needed in io_uring provide/register buffer
- the requirement is just to lease the kernel buffer to the io_uring
subsystem for a very short time; it isn't necessary to move it into
io_uring and make it global
This looks a bit similar to the kernel's pipe/splice, but there are some
important differences:
- splice is for transferring data between two FDs via a pipe, and fd_out
can only read data from the pipe, not write to it; this feature leases a
buffer from the group leader (driver subsystem) to members (io_uring
subsystem), so a member request can write data to this buffer if the
buffer direction allows writing
- splice implements data transfer by moving pages between the subsystem and
the pipe, which means page ownership is transferred; this is one of the
most complicated aspects of splice. This patch supports scenarios in which
the buffer can't be transferred: the buffer is only borrowed by member
requests for consumption, and is returned after the member requests
consume it, so the buffer lifetime is aligned with the group leader's
lifetime and is simplified a lot. In particular, the buffer is
guaranteed to be returned
- splice basically can't run asynchronously
This can help implement generic zero copy between a device and related
operations, such as ublk and fuse.
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring_types.h | 6 ++++
io_uring/io_uring.c | 24 ++++++++++++---
io_uring/io_uring.h | 5 ++++
io_uring/kbuf.c | 54 ++++++++++++++++++++++++++++++++--
io_uring/kbuf.h | 30 +++++++++++++++++--
io_uring/net.c | 27 ++++++++++++++++-
io_uring/opdef.c | 4 +++
io_uring/opdef.h | 2 ++
io_uring/rsrc.h | 7 +++++
io_uring/rw.c | 37 +++++++++++++++++++----
10 files changed, 180 insertions(+), 16 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 7de0d4c0ed6b..b919ab62020c 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -518,6 +518,8 @@ enum {
REQ_F_BUFFERS_COMMIT_BIT,
REQ_F_BUF_NODE_BIT,
REQ_F_GROUP_LEADER_BIT,
+ REQ_F_GROUP_BUF_BIT,
+ REQ_F_BUF_IMPORTED_BIT,
/* not a real bit, just to check we're not overflowing the space */
__REQ_F_LAST_BIT,
@@ -602,6 +604,10 @@ enum {
REQ_F_BUF_NODE = IO_REQ_FLAG(REQ_F_BUF_NODE_BIT),
/* sqe group lead */
REQ_F_GROUP_LEADER = IO_REQ_FLAG(REQ_F_GROUP_LEADER_BIT),
+ /* Use group leader's buffer */
+ REQ_F_GROUP_BUF = IO_REQ_FLAG(REQ_F_GROUP_BUF_BIT),
+ /* used in case buffer has to be imported from ->issue() once */
+ REQ_F_BUF_IMPORTED = IO_REQ_FLAG(REQ_F_BUF_IMPORTED_BIT),
};
typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 076171977d5e..c0d8b3c34d71 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -925,14 +925,21 @@ static void io_queue_group_members(struct io_kiocb *req)
req->grp_link = NULL;
while (member) {
struct io_kiocb *next = member->grp_link;
+ bool grp_buf = member->flags & REQ_F_GROUP_BUF;
member->grp_leader = req;
if (unlikely(member->flags & REQ_F_FAIL))
io_req_task_queue_fail(member, member->cqe.res);
+ else if (unlikely(grp_buf && !(req->flags & REQ_F_BUF_NODE &&
+ io_issue_defs[member->opcode].group_buf)))
+ io_req_task_queue_fail(member, -EINVAL);
else if (unlikely(req->flags & REQ_F_FAIL))
io_req_task_queue_fail(member, -ECANCELED);
- else
+ else {
+ if (grp_buf)
+ io_req_assign_buf_node(member, req->buf_node);
io_req_task_queue(member);
+ }
member = next;
}
}
@@ -2196,9 +2203,18 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
ctx->drain_disabled = true;
if (sqe_flags & IOSQE_IO_DRAIN) {
- if (ctx->drain_disabled)
- return io_init_fail_req(req, -EOPNOTSUPP);
- io_init_req_drain(req);
+ /* IO_DRAIN is mapped to GROUP_BUF for group members */
+ if (ctx->submit_state.group.head) {
+ /* can't do buffer select */
+ if (sqe_flags & IOSQE_BUFFER_SELECT)
+ return io_init_fail_req(req, -EINVAL);
+ req->flags &= ~REQ_F_IO_DRAIN;
+ req->flags |= REQ_F_GROUP_BUF;
+ } else {
+ if (ctx->drain_disabled)
+ return io_init_fail_req(req, -EOPNOTSUPP);
+ io_init_req_drain(req);
+ }
}
}
if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index facd5c85ba8b..b14acb58b573 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -363,6 +363,11 @@ static inline bool req_is_group_leader(struct io_kiocb *req)
return req->flags & REQ_F_GROUP_LEADER;
}
+static inline bool req_is_group_member(struct io_kiocb *req)
+{
+ return (req->flags & REQ_F_GROUP) && !req_is_group_leader(req);
+}
+
/*
* Don't complete immediately but use deferred completion infrastructure.
* Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index c4a776860cb4..6b2f74daf135 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -847,9 +847,9 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
 * Also the kernel buffer's lifetime is bound to the request, so we
 * needn't use an rsrc_node to track it
*/
-int io_import_kbuf(int ddir, struct iov_iter *iter,
- const struct io_mapped_buf *kbuf,
- u64 buf_off, size_t len)
+static int io_import_kbuf(int ddir, struct iov_iter *iter,
+ const struct io_mapped_buf *kbuf,
+ u64 buf_off, size_t len)
{
unsigned long offset = kbuf->offset;
@@ -872,3 +872,51 @@ int io_import_kbuf(int ddir, struct iov_iter *iter,
return 0;
}
+
+int io_import_group_buf(struct io_kiocb *req, int dir, struct iov_iter *iter,
+ unsigned long buf_off, unsigned int len)
+{
+ int ret;
+
+ if (!req_is_group_member(req))
+ return -EINVAL;
+
+ if (!(req->flags & REQ_F_BUF_NODE))
+ return -EINVAL;
+
+ if (req->flags & REQ_F_BUF_IMPORTED)
+ return 0;
+
+ ret = io_import_kbuf(dir, iter, req->buf_node->buf, buf_off, len);
+ if (!ret)
+ req->flags |= REQ_F_BUF_IMPORTED;
+ return ret;
+}
+
+int io_lease_group_kbuf(struct io_kiocb *req,
+ struct io_rsrc_node *node)
+{
+ const struct io_mapped_buf *buf = node->buf;
+
+ if (!(req->flags & REQ_F_GROUP_LEADER))
+ return -EINVAL;
+
+ if (req->flags & (REQ_F_BUFFER_SELECT | REQ_F_BUF_NODE))
+ return -EINVAL;
+
+ if (!buf || !buf->kbuf_ack || !buf->pbvec || !buf->kbuf)
+ return -EINVAL;
+
+ /*
+ * Allow io_uring OPs to borrow this leased kbuf, which is returned
+ * by calling `kbuf_ack` when the group leader is freed.
+ *
+ * Unlike pipe/splice, this kernel buffer is always owned by the
+ * provider, and has to be returned.
+ */
+ io_rsrc_node_init(node, IORING_RSRC_BUFFER, IORING_RSRC_F_BUF_KERNEL);
+ req->buf_node = node;
+
+ req->flags |= REQ_F_GROUP_BUF | REQ_F_BUF_NODE;
+ return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 04ccd52dd0ad..2e47ac33aa60 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -3,6 +3,7 @@
#define IOU_KBUF_H
#include <uapi/linux/io_uring.h>
+#include "rsrc.h"
enum {
/* ring mapped provided buffers */
@@ -88,9 +89,10 @@ void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl);
struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
unsigned long bgid);
int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
-int io_import_kbuf(int ddir, struct iov_iter *iter,
- const struct io_mapped_buf *kbuf,
- u64 buf_off, size_t len);
+
+int io_import_group_buf(struct io_kiocb *req, int dir, struct iov_iter *iter,
+ unsigned long buf_off, unsigned int len);
+int io_lease_group_kbuf(struct io_kiocb *req, struct io_rsrc_node *node);
static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
{
@@ -223,4 +225,26 @@ static inline unsigned int io_put_kbufs(struct io_kiocb *req, int len,
{
return __io_put_kbufs(req, len, nbufs, issue_flags);
}
+
+static inline bool io_use_group_buf(struct io_kiocb *req)
+{
+ return req->flags & REQ_F_GROUP_BUF;
+}
+
+static inline bool io_use_group_kbuf(struct io_kiocb *req)
+{
+ if (io_use_group_buf(req))
+ return io_req_use_kernel_buf(req);
+ return false;
+}
+
+/* zero the remaining bytes of the kernel buffer to avoid leaking data */
+static inline void io_req_zero_remained(struct io_kiocb *req, struct iov_iter *iter)
+{
+ size_t left = iov_iter_count(iter);
+
+ if (iov_iter_rw(iter) == READ && left > 0)
+ iov_iter_zero(left, iter);
+}
+
#endif
diff --git a/io_uring/net.c b/io_uring/net.c
index df1f7dc6f1c8..855bf101d54f 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -88,6 +88,13 @@ struct io_sr_msg {
*/
#define MULTISHOT_MAX_RETRY 32
+#define user_ptr_to_u64(x) ( \
+{ \
+ typecheck(void __user *, (x)); \
+ (u64)(unsigned long)(x); \
+} \
+)
+
int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown);
@@ -384,7 +391,7 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
kmsg->msg.msg_name = &kmsg->addr;
kmsg->msg.msg_namelen = addr_len;
}
- if (!io_do_buffer_select(req)) {
+ if (!io_do_buffer_select(req) && !io_use_group_buf(req)) {
ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
&kmsg->msg.msg_iter);
if (unlikely(ret < 0))
@@ -599,6 +606,15 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
if (issue_flags & IO_URING_F_NONBLOCK)
flags |= MSG_DONTWAIT;
+ if (io_use_group_buf(req)) {
+ ret = io_import_group_buf(req, ITER_SOURCE,
+ &kmsg->msg.msg_iter,
+ user_ptr_to_u64(sr->buf),
+ sr->len);
+ if (unlikely(ret))
+ return ret;
+ }
+
retry_bundle:
if (io_do_buffer_select(req)) {
struct buf_sel_arg arg = {
@@ -889,6 +905,8 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
*ret = IOU_STOP_MULTISHOT;
else
*ret = IOU_OK;
+ if (io_use_group_kbuf(req))
+ io_req_zero_remained(req, &kmsg->msg.msg_iter);
io_req_msg_cleanup(req, issue_flags);
return true;
}
@@ -1161,6 +1179,13 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
goto out_free;
}
sr->buf = NULL;
+ } else if (io_use_group_buf(req)) {
+ ret = io_import_group_buf(req, ITER_DEST,
+ &kmsg->msg.msg_iter,
+ user_ptr_to_u64(sr->buf),
+ sr->len);
+ if (unlikely(ret))
+ goto out_free;
}
kmsg->msg.msg_flags = 0;
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 3de75eca1c92..4426e8e7a2f1 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -246,6 +246,7 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .group_buf = 1,
.async_size = sizeof(struct io_async_rw),
.prep = io_prep_read,
.issue = io_read,
@@ -260,6 +261,7 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .group_buf = 1,
.async_size = sizeof(struct io_async_rw),
.prep = io_prep_write,
.issue = io_write,
@@ -282,6 +284,7 @@ const struct io_issue_def io_issue_defs[] = {
.audit_skip = 1,
.ioprio = 1,
.buffer_select = 1,
+ .group_buf = 1,
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
.prep = io_sendmsg_prep,
@@ -297,6 +300,7 @@ const struct io_issue_def io_issue_defs[] = {
.buffer_select = 1,
.audit_skip = 1,
.ioprio = 1,
+ .group_buf = 1,
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
.prep = io_recvmsg_prep,
diff --git a/io_uring/opdef.h b/io_uring/opdef.h
index 14456436ff74..44597d45d7c6 100644
--- a/io_uring/opdef.h
+++ b/io_uring/opdef.h
@@ -27,6 +27,8 @@ struct io_issue_def {
unsigned iopoll_queue : 1;
/* vectored opcode, set if 1) vectored, and 2) handler needs to know */
unsigned vectored : 1;
+ /* support group buffer */
+ unsigned group_buf : 1;
/* size of async data needed, if any */
unsigned short async_size;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index f45a26c3b79d..9d001d72b65d 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -110,6 +110,13 @@ static inline void io_req_assign_buf_node(struct io_kiocb *req,
req->flags |= REQ_F_BUF_NODE;
}
+static inline bool io_req_use_kernel_buf(struct io_kiocb *req)
+{
+ if (req->flags & REQ_F_BUF_NODE)
+ return req->buf_node->flags & IORING_RSRC_F_BUF_KERNEL;
+ return false;
+}
+
int io_files_update(struct io_kiocb *req, unsigned int issue_flags);
int io_files_update_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index b62cdb5fc936..f0a4e4524188 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -487,6 +487,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
}
req_set_fail(req);
req->cqe.res = res;
+ if (io_use_group_kbuf(req)) {
+ struct io_async_rw *io = req->async_data;
+
+ io_req_zero_remained(req, &io->iter);
+ }
}
return false;
}
@@ -628,11 +633,15 @@ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
*/
static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
{
+ struct io_kiocb *req = cmd_to_io_kiocb(rw);
struct kiocb *kiocb = &rw->kiocb;
struct file *file = kiocb->ki_filp;
ssize_t ret = 0;
loff_t *ppos;
+ if (io_use_group_kbuf(req))
+ return -EOPNOTSUPP;
+
/*
* Don't support polled IO through this interface, and we can't
* support non-blocking either. For the latter, this just causes
@@ -831,20 +840,32 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
return 0;
}
+static int rw_import_group_buf(struct io_kiocb *req, int dir,
+ struct io_rw *rw, struct io_async_rw *io)
+{
+ int ret = io_import_group_buf(req, dir, &io->iter, rw->addr, rw->len);
+
+ if (!ret)
+ iov_iter_save_state(&io->iter, &io->iter_state);
+ return ret;
+}
+
static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
{
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
struct io_async_rw *io = req->async_data;
struct kiocb *kiocb = &rw->kiocb;
- ssize_t ret;
+ ssize_t ret = 0;
loff_t *ppos;
- if (io_do_buffer_select(req)) {
+ if (io_do_buffer_select(req))
ret = io_import_iovec(ITER_DEST, req, io, issue_flags);
- if (unlikely(ret < 0))
- return ret;
- }
+ else if (io_use_group_buf(req))
+ ret = rw_import_group_buf(req, ITER_DEST, rw, io);
+ if (unlikely(ret < 0))
+ return ret;
+
ret = io_rw_init_file(req, FMODE_READ, READ);
if (unlikely(ret))
return ret;
@@ -1027,6 +1048,12 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
ssize_t ret, ret2;
loff_t *ppos;
+ if (io_use_group_buf(req)) {
+ ret = rw_import_group_buf(req, ITER_SOURCE, rw, io);
+ if (unlikely(ret < 0))
+ return ret;
+ }
+
ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
if (unlikely(ret))
return ret;
--
2.47.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
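The io_req_zero_remained() helper added above exists because a leased kernel buffer may hold stale device data: if a READ completes short, the untouched tail must be zero-filled before the buffer is consumed further. A simplified standalone model of that behavior (flat buffer instead of an iov_iter):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/*
 * Model of io_req_zero_remained(): when a READ into a leased kernel
 * buffer completes short, the bytes the operation never wrote still
 * hold whatever the device put there earlier, so the remainder is
 * zero-filled to avoid leaking that data.
 */
static void zero_remained(unsigned char *buf, size_t buf_len, size_t done)
{
	if (done < buf_len)
		memset(buf + done, 0, buf_len - done);
}
```

In the real code the same is done through iov_iter_zero() on the remaining bytes of the iterator, and only for the READ direction.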
* [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
` (9 preceding siblings ...)
2024-11-07 11:01 ` [PATCH V10 10/12] io_uring: support leased group buffer with REQ_F_GROUP_BUF Ming Lei
@ 2024-11-07 11:01 ` Ming Lei
2024-11-07 18:27 ` kernel test robot
` (2 more replies)
2024-11-07 11:01 ` [PATCH V10 12/12] ublk: support leasing io " Ming Lei
2024-11-07 22:25 ` (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc Jens Axboe
12 siblings, 3 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Add the io_uring_cmd_lease_kbuf() API for a driver to lease its kernel
buffer to io_uring.
The leased buffer can only be consumed group-wide by io_uring OPs,
and the uring_cmd has to be the group leader.
This can support generic device zero copy over a device buffer in
userspace:
- create one SQE group
- lease one device buffer to io_uring via the uring_cmd group leader
- io_uring member OPs consume this kernel buffer by passing IOSQE_IO_DRAIN,
which isn't used for group members and is remapped to GROUP_BUF
- the kernel buffer is returned after all member OPs are completed
Signed-off-by: Ming Lei <[email protected]>
---
include/linux/io_uring/cmd.h | 7 +++++++
io_uring/uring_cmd.c | 10 ++++++++++
2 files changed, 17 insertions(+)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 578a3fdf5c71..0997ea247188 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -60,6 +60,8 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
/* Execute the request from a blocking context */
void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+ struct io_rsrc_node *node);
#else
static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
@@ -82,6 +84,11 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
{
}
+static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+ struct io_rsrc_node *node);
+{
+ return -EOPNOTSUPP;
+}
#endif
/*
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index b62965f58f30..e7723759cb23 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -15,6 +15,7 @@
#include "alloc_cache.h"
#include "rsrc.h"
#include "uring_cmd.h"
+#include "kbuf.h"
static struct uring_cache *io_uring_async_get(struct io_kiocb *req)
{
@@ -175,6 +176,15 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
}
EXPORT_SYMBOL_GPL(io_uring_cmd_done);
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+ struct io_rsrc_node *node)
+{
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+ return io_lease_group_kbuf(req, node);
+}
+EXPORT_SYMBOL_GPL(io_uring_cmd_lease_kbuf);
+
static int io_uring_cmd_prep_setup(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
--
2.47.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
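The lifetime described in the commit message — lease from the leader, concurrent member consumption, give-back after the last completion — can be modeled with a plain reference count. This is only an illustration; the real code ties this to io_rsrc_node refs and group leader teardown:

```c
#include <assert.h>

/*
 * Toy model of the leased-buffer lifetime: the leader holds the
 * initial reference, each member OP takes one while consuming, and
 * the provider's ack fires exactly once, when the last reference
 * (the leader's, dropped at group completion) goes away.
 */
struct lease {
	int refs;
	int acked;
};

static void lease_get(struct lease *l)
{
	l->refs++;
}

static void lease_put(struct lease *l)
{
	if (--l->refs == 0)
		l->acked++;	/* buffer returned to the driver */
}
```

Because the leader's reference is dropped last, member OPs can never observe the buffer after it has been returned, which is the guarantee the commit message relies on.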
* Re: [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
2024-11-07 11:01 ` [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
@ 2024-11-07 18:27 ` kernel test robot
2024-11-07 19:30 ` kernel test robot
2024-11-08 0:59 ` Ming Lei
2 siblings, 0 replies; 21+ messages in thread
From: kernel test robot @ 2024-11-07 18:27 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring, Pavel Begunkov
Cc: oe-kbuild-all, linux-block, Uday Shankar, Akilesh Kailash,
Ming Lei
Hi Ming,
kernel test robot noticed the following build errors:
[auto build test ERROR on axboe-block/for-next]
[also build test ERROR on next-20241107]
[cannot apply to linus/master v6.12-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ming-Lei/io_uring-rsrc-pass-struct-io_ring_ctx-reference-to-rsrc-helpers/20241107-190456
base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
patch link: https://lore.kernel.org/r/20241107110149.890530-12-ming.lei%40redhat.com
patch subject: [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
config: arc-randconfig-001-20241108 (https://download.01.org/0day-ci/archive/20241108/[email protected]/config)
compiler: arc-elf-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241108/[email protected]/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
All error/warnings (new ones prefixed by >>):
In file included from block/fops.c:20:
>> include/linux/io_uring/cmd.h:89:1: error: expected identifier or '(' before '{' token
89 | {
| ^
>> include/linux/io_uring/cmd.h:87:19: warning: 'io_uring_cmd_lease_kbuf' declared 'static' but never defined [-Wunused-function]
87 | static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
| ^~~~~~~~~~~~~~~~~~~~~~~
vim +89 include/linux/io_uring/cmd.h
62
63 int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
64 struct io_rsrc_node *node);
65 #else
66 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
67 struct iov_iter *iter, void *ioucmd)
68 {
69 return -EOPNOTSUPP;
70 }
71 static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret,
72 ssize_t ret2, unsigned issue_flags)
73 {
74 }
75 static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
76 void (*task_work_cb)(struct io_uring_cmd *, unsigned),
77 unsigned flags)
78 {
79 }
80 static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
81 unsigned int issue_flags)
82 {
83 }
84 static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
85 {
86 }
> 87 static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
88 struct io_rsrc_node *node);
> 89 {
90 return -EOPNOTSUPP;
91 }
92 #endif
93
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
2024-11-07 11:01 ` [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
2024-11-07 18:27 ` kernel test robot
@ 2024-11-07 19:30 ` kernel test robot
2024-11-08 0:59 ` Ming Lei
2 siblings, 0 replies; 21+ messages in thread
From: kernel test robot @ 2024-11-07 19:30 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, io-uring, Pavel Begunkov
Cc: llvm, oe-kbuild-all, linux-block, Uday Shankar, Akilesh Kailash,
Ming Lei
Hi Ming,
kernel test robot noticed the following build errors:
[auto build test ERROR on axboe-block/for-next]
[also build test ERROR on next-20241107]
[cannot apply to linus/master v6.12-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ming-Lei/io_uring-rsrc-pass-struct-io_ring_ctx-reference-to-rsrc-helpers/20241107-190456
base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
patch link: https://lore.kernel.org/r/20241107110149.890530-12-ming.lei%40redhat.com
patch subject: [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
config: arm64-randconfig-001-20241108 (https://download.01.org/0day-ci/archive/20241108/[email protected]/config)
compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 592c0fe55f6d9a811028b5f3507be91458ab2713)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241108/[email protected]/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
All errors (new ones prefixed by >>):
In file included from block/ioctl.c:4:
In file included from include/linux/blkdev.h:9:
In file included from include/linux/blk_types.h:10:
In file included from include/linux/bvec.h:10:
In file included from include/linux/highmem.h:8:
In file included from include/linux/cacheflush.h:5:
In file included from arch/arm64/include/asm/cacheflush.h:11:
In file included from include/linux/kgdb.h:19:
In file included from include/linux/kprobes.h:28:
In file included from include/linux/ftrace.h:13:
In file included from include/linux/kallsyms.h:13:
In file included from include/linux/mm.h:2213:
include/linux/vmstat.h:504:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
504 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
505 | item];
| ~~~~
include/linux/vmstat.h:511:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
511 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
512 | NR_VM_NUMA_EVENT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:518:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
518 | return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
| ~~~~~~~~~~~ ^ ~~~
In file included from block/ioctl.c:15:
>> include/linux/io_uring/cmd.h:89:1: error: expected identifier or '('
89 | {
| ^
3 warnings and 1 error generated.
vim +89 include/linux/io_uring/cmd.h
62
63 int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
64 struct io_rsrc_node *node);
65 #else
66 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
67 struct iov_iter *iter, void *ioucmd)
68 {
69 return -EOPNOTSUPP;
70 }
71 static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret,
72 ssize_t ret2, unsigned issue_flags)
73 {
74 }
75 static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
76 void (*task_work_cb)(struct io_uring_cmd *, unsigned),
77 unsigned flags)
78 {
79 }
80 static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
81 unsigned int issue_flags)
82 {
83 }
84 static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
85 {
86 }
87 static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
88 struct io_rsrc_node *node);
> 89 {
90 return -EOPNOTSUPP;
91 }
92 #endif
93
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
2024-11-07 11:01 ` [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
2024-11-07 18:27 ` kernel test robot
2024-11-07 19:30 ` kernel test robot
@ 2024-11-08 0:59 ` Ming Lei
2 siblings, 0 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-08 0:59 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash
On Thu, Nov 7, 2024 at 7:02 PM Ming Lei <[email protected]> wrote:
>
> Add API of io_uring_cmd_lease_kbuf() for driver to lease its kernel
> buffer to io_uring.
>
> The leased buffer can only be consumed by io_uring OPs in group wide,
> and the uring_cmd has to be one group leader.
>
> This way can support generic device zero copy over device buffer in
> userspace:
>
> - create one sqe group
> - lease one device buffer to io_uring by the group leader of uring_cmd
> - io_uring member OPs consume this kernel buffer by passing IOSQE_IO_DRAIN,
> which isn't used for group members and is mapped to GROUP_BUF.
> - the kernel buffer is returned after all member OPs are completed
>
> Signed-off-by: Ming Lei <[email protected]>
> ---
> include/linux/io_uring/cmd.h | 7 +++++++
> io_uring/uring_cmd.c | 10 ++++++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
> index 578a3fdf5c71..0997ea247188 100644
> --- a/include/linux/io_uring/cmd.h
> +++ b/include/linux/io_uring/cmd.h
> @@ -60,6 +60,8 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
> /* Execute the request from a blocking context */
> void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
>
> +int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
> + struct io_rsrc_node *node);
> #else
> static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
> struct iov_iter *iter, void *ioucmd)
> @@ -82,6 +84,11 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
> static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
> {
> }
> +static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
> + struct io_rsrc_node *node);
oops, the above ";" needs to be removed, :-(
> +{
> + return -EOPNOTSUPP;
> +}
> #endif
* [PATCH V10 12/12] ublk: support leasing io buffer to io_uring
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
` (10 preceding siblings ...)
2024-11-07 11:01 ` [PATCH V10 11/12] io_uring/uring_cmd: support leasing device kernel buffer to io_uring Ming Lei
@ 2024-11-07 11:01 ` Ming Lei
2024-11-07 22:25 ` (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc Jens Axboe
12 siblings, 0 replies; 21+ messages in thread
From: Ming Lei @ 2024-11-07 11:01 UTC (permalink / raw)
To: Jens Axboe, io-uring, Pavel Begunkov
Cc: linux-block, Uday Shankar, Akilesh Kailash, Ming Lei
Support leasing block IO buffers to userspace for running io_uring operations
(FS and network IO), so that ublk zero copy can be supported.
userspace code:
git clone https://github.com/ublk-org/ublksrv.git -b uring_group
Both loop and nbd zero copy (io_uring send and send zc) are covered.
The performance improvement is obvious in big block size tests; for example,
'loop --buffered_io' throughput is doubled in the 64KB block test ("loop/007"
vs "loop/009").
Signed-off-by: Ming Lei <[email protected]>
---
drivers/block/ublk_drv.c | 168 ++++++++++++++++++++++++++++++++--
include/uapi/linux/ublk_cmd.h | 11 ++-
2 files changed, 169 insertions(+), 10 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 6ba2c1dd1d87..d90f4cac8154 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -51,6 +51,8 @@
/* private ioctl command mirror */
#define UBLK_CMD_DEL_DEV_ASYNC _IOC_NR(UBLK_U_CMD_DEL_DEV_ASYNC)
+#define UBLK_IO_PROVIDE_IO_BUF _IOC_NR(UBLK_U_IO_PROVIDE_IO_BUF)
+
/* All UBLK_F_* have to be included into UBLK_F_ALL */
#define UBLK_F_ALL (UBLK_F_SUPPORT_ZERO_COPY \
| UBLK_F_URING_CMD_COMP_IN_TASK \
@@ -67,10 +69,18 @@
(UBLK_PARAM_TYPE_BASIC | UBLK_PARAM_TYPE_DISCARD | \
UBLK_PARAM_TYPE_DEVT | UBLK_PARAM_TYPE_ZONED)
+struct uring_kbuf {
+ struct io_rsrc_node node;
+ struct io_mapped_buf buf;
+};
+
struct ublk_rq_data {
struct llist_node node;
struct kref ref;
+
+ bool allocated_bvec;
+ struct uring_kbuf buf[0];
};
struct ublk_uring_cmd_pdu {
@@ -189,11 +199,15 @@ struct ublk_params_header {
__u32 types;
};
+static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+ struct ublk_queue *ubq, int tag, size_t offset);
static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
static inline unsigned int ublk_req_build_flags(struct request *req);
static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
int tag);
+static void ublk_io_buf_giveback_cb(struct io_rsrc_node *node);
+
static inline bool ublk_dev_is_user_copy(const struct ublk_device *ub)
{
return ub->dev_info.flags & UBLK_F_USER_COPY;
@@ -588,6 +602,11 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
return ublk_support_user_copy(ubq);
}
+static inline bool ublk_support_zc(const struct ublk_queue *ubq)
+{
+ return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
+}
+
static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
struct request *req)
{
@@ -851,6 +870,74 @@ static size_t ublk_copy_user_pages(const struct request *req,
return done;
}
+/*
+ * The built command buffer is immutable, so it is fine to feed it to
+ * concurrent io_uring provide buf commands
+ */
+static int ublk_init_zero_copy_buffer(struct request *req)
+{
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+ struct io_rsrc_node *node = &data->buf->node;
+ struct io_mapped_buf *imu = &data->buf->buf;
+ struct req_iterator rq_iter;
+ unsigned int nr_bvecs = 0;
+ struct bio_vec *bvec;
+ unsigned int offset;
+ struct bio_vec bv;
+
+ if (!ublk_rq_has_data(req))
+ goto exit;
+
+ rq_for_each_bvec(bv, req, rq_iter)
+ nr_bvecs++;
+
+ if (!nr_bvecs)
+ goto exit;
+
+ if (req->bio != req->biotail) {
+ int idx = 0;
+
+ bvec = kvmalloc_array(nr_bvecs, sizeof(struct bio_vec),
+ GFP_NOIO);
+ if (!bvec)
+ return -ENOMEM;
+
+ offset = 0;
+ rq_for_each_bvec(bv, req, rq_iter)
+ bvec[idx++] = bv;
+ data->allocated_bvec = true;
+ } else {
+ struct bio *bio = req->bio;
+
+ offset = bio->bi_iter.bi_bvec_done;
+ bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+ }
+ imu->kbuf = 1;
+ imu->pbvec = bvec;
+ imu->nr_bvecs = nr_bvecs;
+ imu->offset = offset;
+ imu->len = blk_rq_bytes(req);
+ imu->dir = req_op(req) == REQ_OP_READ ? ITER_DEST : ITER_SOURCE;
+ imu->kbuf_ack = ublk_io_buf_giveback_cb;
+ node->buf = imu;
+
+ return 0;
+exit:
+ imu->pbvec = NULL;
+ return 0;
+}
+
+static void ublk_deinit_zero_copy_buffer(struct request *req)
+{
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+ struct io_mapped_buf *imu = &data->buf->buf;
+
+ if (data->allocated_bvec) {
+ kvfree(imu->pbvec);
+ data->allocated_bvec = false;
+ }
+}
+
static inline bool ublk_need_map_req(const struct request *req)
{
return ublk_rq_has_data(req) && req_op(req) == REQ_OP_WRITE;
@@ -862,13 +949,25 @@ static inline bool ublk_need_unmap_req(const struct request *req)
(req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_DRV_IN);
}
-static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
+static int ublk_map_io(const struct ublk_queue *ubq, struct request *req,
struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
- if (ublk_support_user_copy(ubq))
+ if (ublk_support_user_copy(ubq)) {
+ if (ublk_support_zc(ubq)) {
+ int ret = ublk_init_zero_copy_buffer(req);
+
+ /*
+ * The only failure is -ENOMEM for allocating providing
+ * buffer command, return zero so that we can requeue
+ * this req.
+ */
+ if (unlikely(ret))
+ return 0;
+ }
return rq_bytes;
+ }
/*
* no zero copy, we delay copy WRITE request data into ublksrv
@@ -886,13 +985,16 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
}
static int ublk_unmap_io(const struct ublk_queue *ubq,
- const struct request *req,
+ struct request *req,
struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
- if (ublk_support_user_copy(ubq))
+ if (ublk_support_user_copy(ubq)) {
+ if (ublk_support_zc(ubq))
+ ublk_deinit_zero_copy_buffer(req);
return rq_bytes;
+ }
if (ublk_need_unmap_req(req)) {
struct iov_iter iter;
@@ -1038,6 +1140,7 @@ static inline void __ublk_complete_rq(struct request *req)
return;
exit:
+ ublk_deinit_zero_copy_buffer(req);
blk_mq_end_request(req, res);
}
@@ -1680,6 +1783,46 @@ static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
io_uring_cmd_mark_cancelable(cmd, issue_flags);
}
+static void ublk_io_buf_giveback_cb(struct io_rsrc_node *node)
+{
+ struct uring_kbuf *buf = (struct uring_kbuf *)node;
+ struct ublk_rq_data *data = container_of(buf, struct ublk_rq_data, buf[0]);
+ struct request *req = blk_mq_rq_from_pdu(data);
+ struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+ ublk_put_req_ref(ubq, req);
+}
+
+static int ublk_provide_io_buf(struct io_uring_cmd *cmd,
+ struct ublk_queue *ubq, int tag)
+{
+ struct ublk_device *ub = cmd->file->private_data;
+ struct ublk_rq_data *data;
+ struct request *req;
+
+ if (!ub)
+ return -EPERM;
+
+ req = __ublk_check_and_get_req(ub, ubq, tag, 0);
+ if (!req)
+ return -EINVAL;
+
+ pr_devel("%s: qid %d tag %u request bytes %u\n",
+ __func__, tag, ubq->q_id, blk_rq_bytes(req));
+
+ data = blk_mq_rq_to_pdu(req);
+
+ /*
+ * io_uring guarantees that the callback will be called after
+ * the provided buffer is consumed, and it is automatic removal
+ * before this uring command is freed.
+ *
+ * This request won't be completed unless the callback is called,
+ * so ublk module won't be unloaded too.
+ */
+ return io_uring_cmd_lease_kbuf(cmd, &data->buf->node);
+}
+
static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags,
const struct ublksrv_io_cmd *ub_cmd)
@@ -1731,6 +1874,10 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
ret = -EINVAL;
switch (_IOC_NR(cmd_op)) {
+ case UBLK_IO_PROVIDE_IO_BUF:
+ if (unlikely(!ublk_support_zc(ubq)))
+ goto out;
+ return ublk_provide_io_buf(cmd, ubq, tag);
case UBLK_IO_FETCH_REQ:
/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
if (ublk_queue_ready(ubq)) {
@@ -2149,11 +2296,14 @@ static void ublk_align_max_io_size(struct ublk_device *ub)
static int ublk_add_tag_set(struct ublk_device *ub)
{
+ int zc = !!(ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY);
+ struct ublk_rq_data *data;
+
ub->tag_set.ops = &ublk_mq_ops;
ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
ub->tag_set.queue_depth = ub->dev_info.queue_depth;
ub->tag_set.numa_node = NUMA_NO_NODE;
- ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
+ ub->tag_set.cmd_size = struct_size(data, buf, zc);
ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
ub->tag_set.driver_data = ub;
return blk_mq_alloc_tag_set(&ub->tag_set);
@@ -2458,8 +2608,12 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
goto out_free_dev_number;
}
- /* We are not ready to support zero copy */
- ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
+ /* zero copy depends on user copy */
+ if ((ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY) &&
+ !ublk_dev_is_user_copy(ub)) {
+ ret = -EINVAL;
+ goto out_free_dev_number;
+ }
ub->dev_info.nr_hw_queues = min_t(unsigned int,
ub->dev_info.nr_hw_queues, nr_cpu_ids);
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 12873639ea96..04d73b349709 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -94,6 +94,8 @@
_IOWR('u', UBLK_IO_COMMIT_AND_FETCH_REQ, struct ublksrv_io_cmd)
#define UBLK_U_IO_NEED_GET_DATA \
_IOWR('u', UBLK_IO_NEED_GET_DATA, struct ublksrv_io_cmd)
+#define UBLK_U_IO_PROVIDE_IO_BUF \
+ _IOWR('u', 0x23, struct ublksrv_io_cmd)
/* only ABORT means that no re-fetch */
#define UBLK_IO_RES_OK 0
@@ -127,9 +129,12 @@
#define UBLKSRV_IO_BUF_TOTAL_SIZE (1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)
/*
- * zero copy requires 4k block size, and can remap ublk driver's io
- * request into ublksrv's vm space
- */
+ * io_uring provide kbuf command based zero copy
+ *
+ * Not available for UBLK_F_UNPRIVILEGED_DEV, because we rely on ublk
+ * server to fill up request buffer for READ IO, and ublk server can't
+ * be trusted in case of UBLK_F_UNPRIVILEGED_DEV.
+*/
#define UBLK_F_SUPPORT_ZERO_COPY (1ULL << 0)
/*
--
2.47.0
* Re: (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc
2024-11-07 11:01 [PATCH V10 0/12] io_uring: support group buffer & ublk zc Ming Lei
` (11 preceding siblings ...)
2024-11-07 11:01 ` [PATCH V10 12/12] ublk: support leasing io " Ming Lei
@ 2024-11-07 22:25 ` Jens Axboe
2024-11-07 22:25 ` Jens Axboe
12 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2024-11-07 22:25 UTC (permalink / raw)
To: io-uring, Pavel Begunkov, Ming Lei
Cc: linux-block, Uday Shankar, Akilesh Kailash
On Thu, 07 Nov 2024 19:01:33 +0800, Ming Lei wrote:
> Patch 1~3 cleans rsrc code.
>
> Patch 4~9 prepares for supporting kernel buffer.
>
> The 10th patch supports group buffer, so far only kernel buffer is
> supported, but it is pretty easy to extend for userspace group buffer.
>
> [...]
Applied, thanks!
[01/12] io_uring/rsrc: pass 'struct io_ring_ctx' reference to rsrc helpers
commit: 0d98c509086837a8cf5a32f82f2a58f39a539192
[02/12] io_uring/rsrc: remove '->ctx_ptr' of 'struct io_rsrc_node'
commit: 4f219fcce5e4366cc121fc98270beb1fbbb3df2b
[03/12] io_uring/rsrc: add & apply io_req_assign_buf_node()
commit: 039c878db7add23c1c9ea18424c442cce76670f9
Best regards,
--
Jens Axboe
* Re: (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc
2024-11-07 22:25 ` (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc Jens Axboe
@ 2024-11-07 22:25 ` Jens Axboe
2024-11-12 0:53 ` Ming Lei
0 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2024-11-07 22:25 UTC (permalink / raw)
To: io-uring, Pavel Begunkov, Ming Lei
Cc: linux-block, Uday Shankar, Akilesh Kailash
On 11/7/24 3:25 PM, Jens Axboe wrote:
>
> On Thu, 07 Nov 2024 19:01:33 +0800, Ming Lei wrote:
>> Patch 1~3 cleans rsrc code.
>>
>> Patch 4~9 prepares for supporting kernel buffer.
>>
>> The 10th patch supports group buffer, so far only kernel buffer is
>> supported, but it is pretty easy to extend for userspace group buffer.
>>
>> [...]
>
> Applied, thanks!
>
> [01/12] io_uring/rsrc: pass 'struct io_ring_ctx' reference to rsrc helpers
> commit: 0d98c509086837a8cf5a32f82f2a58f39a539192
> [02/12] io_uring/rsrc: remove '->ctx_ptr' of 'struct io_rsrc_node'
> commit: 4f219fcce5e4366cc121fc98270beb1fbbb3df2b
> [03/12] io_uring/rsrc: add & apply io_req_assign_buf_node()
> commit: 039c878db7add23c1c9ea18424c442cce76670f9
Applied the first three as they stand alone quite nicely. I did ponder
making patch 1 skip having eg io_alloc_file_tables() take both the ctx
and &ctx->file_table, but we may as well keep it symmetric.
I'll take a look at the rest of the series tomorrow.
--
Jens Axboe
* Re: (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc
2024-11-07 22:25 ` Jens Axboe
@ 2024-11-12 0:53 ` Ming Lei
2024-11-13 13:43 ` Pavel Begunkov
0 siblings, 1 reply; 21+ messages in thread
From: Ming Lei @ 2024-11-12 0:53 UTC (permalink / raw)
To: Jens Axboe
Cc: io-uring, Pavel Begunkov, linux-block, Uday Shankar,
Akilesh Kailash
On Thu, Nov 07, 2024 at 03:25:59PM -0700, Jens Axboe wrote:
> On 11/7/24 3:25 PM, Jens Axboe wrote:
> >
> > On Thu, 07 Nov 2024 19:01:33 +0800, Ming Lei wrote:
> >> Patch 1~3 cleans rsrc code.
> >>
> >> Patch 4~9 prepares for supporting kernel buffer.
> >>
> >> The 10th patch supports group buffer, so far only kernel buffer is
> >> supported, but it is pretty easy to extend for userspace group buffer.
> >>
> >> [...]
> >
> > Applied, thanks!
> >
> > [01/12] io_uring/rsrc: pass 'struct io_ring_ctx' reference to rsrc helpers
> > commit: 0d98c509086837a8cf5a32f82f2a58f39a539192
> > [02/12] io_uring/rsrc: remove '->ctx_ptr' of 'struct io_rsrc_node'
> > commit: 4f219fcce5e4366cc121fc98270beb1fbbb3df2b
> > [03/12] io_uring/rsrc: add & apply io_req_assign_buf_node()
> > commit: 039c878db7add23c1c9ea18424c442cce76670f9
>
> Applied the first three as they stand alone quite nicely. I did ponder
> on patch 1 to skip the make eg io_alloc_file_tables() not take both
> the ctx and &ctx->file_table, but we may as well keep it symmetric.
>
> I'll take a look at the rest of the series tomorrow.
Hi Jens,
Any comment on the rest of the series?
thanks,
Ming
* Re: (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc
2024-11-12 0:53 ` Ming Lei
@ 2024-11-13 13:43 ` Pavel Begunkov
2024-11-13 14:56 ` Jens Axboe
0 siblings, 1 reply; 21+ messages in thread
From: Pavel Begunkov @ 2024-11-13 13:43 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash
[-- Attachment #1: Type: text/plain, Size: 1097 bytes --]
On 11/12/24 00:53, Ming Lei wrote:
> On Thu, Nov 07, 2024 at 03:25:59PM -0700, Jens Axboe wrote:
>> On 11/7/24 3:25 PM, Jens Axboe wrote:
...
> Hi Jens,
>
> Any comment on the rest of the series?
Ming, it's dragging on because it's overcomplicated. I very much want
it to get to some conclusion, get it merged and move on, and I strongly
believe Jens shares the sentiment on getting the thing done.
Please, take the patches attached, adjust them to your needs and put
ublk on top. Or tell us if there is a strong reason why it doesn't work.
The implementation is very simple and doesn't need almost anything
from io_uring, it's low risk and we can merge in no time.
If you can't cache the allocation in ublk, io_uring can add a cache.
If ublk needs more space and cannot embed the structure, we can add
a "private" pointer into io_mapped_ubuf. If it needs to check the IO
direction, we can add that as well (though I have doubts you really need
it; read-only might make sense, write-only not so much). We'll also
merge Jens' patch allowing to remove a buffer with a request.
--
Pavel Begunkov
[-- Attachment #2: io_uring-leased-buffers.patch --]
[-- Type: text/x-patch, Size: 6484 bytes --]
From 78a9c8a3b9d59e7465d6c158283a531a221fa3b2 Mon Sep 17 00:00:00 2001
Date: Tue, 12 Nov 2024 22:58:18 +0000
Subject: [PATCH 1/4] io_uring: export io_mapped_ubuf definition
---
include/linux/io_uring/kbuf.h | 19 +++++++++++++++++++
io_uring/rsrc.h | 12 ++----------
2 files changed, 21 insertions(+), 10 deletions(-)
create mode 100644 include/linux/io_uring/kbuf.h
diff --git a/include/linux/io_uring/kbuf.h b/include/linux/io_uring/kbuf.h
new file mode 100644
index 000000000000..a32578df3d8e
--- /dev/null
+++ b/include/linux/io_uring/kbuf.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _LINUX_IO_URING_KBUF_H
+#define _LINUX_IO_URING_KBUF_H
+
+#include <uapi/linux/io_uring.h>
+#include <linux/io_uring_types.h>
+#include <linux/bvec.h>
+
+struct io_mapped_ubuf {
+ u64 ubuf;
+ unsigned int len;
+ unsigned int nr_bvecs;
+ unsigned int folio_shift;
+ refcount_t refs;
+ unsigned long acct_pages;
+ struct bio_vec bvec[] __counted_by(nr_bvecs);
+};
+
+#endif
\ No newline at end of file
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 7a4668deaa1a..885ccecade08 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -2,6 +2,8 @@
#ifndef IOU_RSRC_H
#define IOU_RSRC_H
+#include <linux/io_uring/kbuf.h>
+
#define IO_NODE_ALLOC_CACHE_MAX 32
#define IO_RSRC_TAG_TABLE_SHIFT (PAGE_SHIFT - 3)
@@ -24,16 +26,6 @@ struct io_rsrc_node {
};
};
-struct io_mapped_ubuf {
- u64 ubuf;
- unsigned int len;
- unsigned int nr_bvecs;
- unsigned int folio_shift;
- refcount_t refs;
- unsigned long acct_pages;
- struct bio_vec bvec[] __counted_by(nr_bvecs);
-};
-
struct io_imu_folio_data {
/* Head folio can be partially included in the fixed buf */
unsigned int nr_pages_head;
--
2.46.0
From 6839ca1ca94a89ec11362f32af22e2c0cfdfaa81 Mon Sep 17 00:00:00 2001
Date: Tue, 12 Nov 2024 23:12:15 +0000
Subject: [PATCH 2/4] io_uring: add io_mapped_ubuf release callback
---
include/linux/io_uring/kbuf.h | 10 ++++++++++
io_uring/rsrc.c | 6 +++++-
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/linux/io_uring/kbuf.h b/include/linux/io_uring/kbuf.h
index a32578df3d8e..aa3eeaa1ac25 100644
--- a/include/linux/io_uring/kbuf.h
+++ b/include/linux/io_uring/kbuf.h
@@ -13,7 +13,17 @@ struct io_mapped_ubuf {
unsigned int folio_shift;
refcount_t refs;
unsigned long acct_pages;
+ void (*release)(struct io_mapped_ubuf *);
struct bio_vec bvec[] __counted_by(nr_bvecs);
};
+static inline void iou_init_kbuf(struct io_mapped_ubuf *buf,
+ void (*release)(struct io_mapped_ubuf *))
+{
+ refcount_set(&buf->refs, 1);
+ buf->acct_pages = 0;
+ buf->ubuf = 0;
+ buf->release = release;
+}
+
#endif
\ No newline at end of file
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index adaae8630932..84ea5a480058 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -110,6 +110,10 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
if (!refcount_dec_and_test(&imu->refs))
return;
+ if (imu->release) {
+ imu->release(imu);
+ return;
+ }
for (i = 0; i < imu->nr_bvecs; i++)
unpin_user_page(imu->bvec[i].bv_page);
if (imu->acct_pages)
@@ -762,6 +766,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
}
size = iov->iov_len;
+ iou_init_kbuf(imu, NULL);
/* store original address for later verification */
imu->ubuf = (unsigned long) iov->iov_base;
imu->len = iov->iov_len;
@@ -769,7 +774,6 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
imu->folio_shift = PAGE_SHIFT;
if (coalesced)
imu->folio_shift = data.folio_shift;
- refcount_set(&imu->refs, 1);
off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1);
node->buf = imu;
ret = 0;
--
2.46.0
From 55b56ed8c5cb58727c7daabd1e56e6f194749e7f Mon Sep 17 00:00:00 2001
Date: Tue, 12 Nov 2024 23:33:24 +0000
Subject: [PATCH 3/4] io_uring: add a helper for leasing a buffer
---
include/linux/io_uring/kbuf.h | 23 +++++++++++++++++++++++
io_uring/rsrc.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 55 insertions(+)
diff --git a/include/linux/io_uring/kbuf.h b/include/linux/io_uring/kbuf.h
index aa3eeaa1ac25..91cfcdc685cc 100644
--- a/include/linux/io_uring/kbuf.h
+++ b/include/linux/io_uring/kbuf.h
@@ -4,6 +4,7 @@
#include <uapi/linux/io_uring.h>
#include <linux/io_uring_types.h>
+#include <linux/io_uring/cmd.h>
#include <linux/bvec.h>
struct io_mapped_ubuf {
@@ -26,4 +27,26 @@ static inline void iou_init_kbuf(struct io_mapped_ubuf *buf,
buf->release = release;
}
+#if defined(CONFIG_IO_URING)
+int iou_export_kbuf(struct io_ring_ctx *ctx, unsigned issue_flags,
+ struct io_mapped_ubuf *buf, unsigned index);
+#else
+static inline int iou_export_kbuf(struct io_ring_ctx *ctx,
+ unsigned issue_flags,
+ struct io_mapped_ubuf *buf, unsigned index)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
+static inline int io_uring_cmd_export_kbuf(struct io_uring_cmd *cmd,
+ unsigned issue_flags,
+ struct io_mapped_ubuf *buf,
+ unsigned index)
+{
+ struct io_ring_ctx *ctx = cmd_to_io_kiocb(cmd)->ctx;
+
+ return iou_export_kbuf(ctx, issue_flags, buf, index);
+}
+
#endif
\ No newline at end of file
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 84ea5a480058..07842a6a8020 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -797,6 +797,38 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
return node;
}
+static int __iou_export_kbuf(struct io_ring_ctx *ctx, unsigned issue_flags,
+ struct io_mapped_ubuf *buf, unsigned idx)
+{
+ struct io_rsrc_node *node;
+
+ if (unlikely(idx >= ctx->buf_table.nr)) {
+ if (!ctx->buf_table.nr)
+ return -ENXIO;
+ return -EINVAL;
+ }
+ idx = array_index_nospec(idx, ctx->buf_table.nr);
+
+ node = io_rsrc_node_alloc(ctx, IORING_RSRC_BUFFER);
+ if (!node)
+ return -ENOMEM;
+ node->buf = buf;
+ io_reset_rsrc_node(ctx, &ctx->buf_table, idx);
+ ctx->buf_table.nodes[idx] = node;
+ return 0;
+}
+
+int iou_export_kbuf(struct io_ring_ctx *ctx, unsigned issue_flags,
+ struct io_mapped_ubuf *buf, unsigned idx)
+{
+ int ret;
+
+ io_ring_submit_lock(ctx, issue_flags);
+ ret = __iou_export_kbuf(ctx, issue_flags, buf, idx);
+ io_ring_submit_unlock(ctx, issue_flags);
+ return ret;
+}
+
int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
unsigned int nr_args, u64 __user *tags)
{
--
2.46.0
[-- Attachment #3: leased-buffer-test.c --]
[-- Type: text/x-csrc, Size: 2124 bytes --]
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <assert.h>
#include "liburing.h"
#include "test.h"

#define BUF_SIZE 4096

static char buf[BUF_SIZE];
static char buf_tmp[BUF_SIZE];

static void io_uring_prep_my_cmd_lease(struct io_uring_sqe *sqe, int fd,
				       unsigned slot)
{
	io_uring_prep_rw(IORING_OP_URING_CMD, sqe, fd, 0, 0, 0);
	/* TODO: encode the device-specific lease command and 'slot' */
	...
}

static void do_submit_wait1(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int ret;

	ret = io_uring_submit(ring);
	assert(ret == 1);
	ret = io_uring_wait_cqe(ring, &cqe);
	assert(ret >= 0);

	printf("cqe data %i res %i\n", (int)cqe->user_data, cqe->res);
	io_uring_cqe_seen(ring, cqe);
}

int main(int argc, char *argv[])
{
	unsigned long buf_offset = 0;
	unsigned slot = 0; /* reg buffer table index */
	struct io_uring_sqe *sqe;
	int pipe1[2], pipe2[2];
	struct io_uring ring;
	int ret, i;

	for (i = 0; i < BUF_SIZE; i++)
		buf[i] = (char)i;

	ret = pipe(pipe1);
	assert(ret == 0);
	ret = pipe(pipe2);
	assert(ret == 0);

	ret = io_uring_queue_init(8, &ring, 0);
	assert(ret == 0);
	ret = io_uring_register_buffers_sparse(&ring, 16);
	assert(ret >= 0);

	ret = write(pipe1[1], buf, BUF_SIZE);
	assert(ret == BUF_SIZE);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_my_cmd_lease(sqe, bdev_fd, slot);
	sqe->user_data = 1;
	do_submit_wait1(&ring);

	// read data into the leased buffer
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read_fixed(sqe, pipe1[0], (void *)buf_offset, BUF_SIZE, 0, slot);
	sqe->user_data = 2;
	do_submit_wait1(&ring);

	// write from the leased buffer into a pipe
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write_fixed(sqe, pipe2[1], (void *)buf_offset, BUF_SIZE, 0, slot);
	sqe->user_data = 3;
	do_submit_wait1(&ring);

	// check the right data is in the pipe
	ret = read(pipe2[0], buf_tmp, BUF_SIZE);
	assert(ret == BUF_SIZE);
	for (i = 0; i < BUF_SIZE; i++) {
		assert(buf[i] == buf_tmp[i]);
	}

	struct iovec iovec = {};

	ret = io_uring_register_buffers_update_tag(&ring, 0, &iovec, NULL, 1);
	assert(ret >= 0);

	io_uring_queue_exit(&ring);
	return 0;
}
* Re: (subset) [PATCH V10 0/12] io_uring: support group buffer & ublk zc
2024-11-13 13:43 ` Pavel Begunkov
@ 2024-11-13 14:56 ` Jens Axboe
0 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2024-11-13 14:56 UTC (permalink / raw)
To: Pavel Begunkov, Ming Lei
Cc: io-uring, linux-block, Uday Shankar, Akilesh Kailash
On 11/13/24 6:43 AM, Pavel Begunkov wrote:
> On 11/12/24 00:53, Ming Lei wrote:
>> On Thu, Nov 07, 2024 at 03:25:59PM -0700, Jens Axboe wrote:
>>> On 11/7/24 3:25 PM, Jens Axboe wrote:
> ...
>> Hi Jens,
>>
>> Any comment on the rest of the series?
>
> Ming, it's dragging on because it's over complicated. I very much want
> it to get to some conclusion, get it merged and move on, and I strongly
> believe Jens shares the sentiment on getting the thing done.
>
> Please, take the patches attached, adjust them to your needs and put
> ublk on top. Or tell if there is a strong reason why it doesn't work.
> The implementation is very simple and doesn't need almost anything
> from io_uring, it's low risk and we can merge in no time.
Indeed, nobody would love to get this moving forward more than me!
Pavel, can you set up a branch with the required patches? Should be your
two and then the buf update and mapping bits I did earlier. I can do it
too. Ming, would you mind if we try and set up a base that we can do this
on more trivially? I had merged the sqe grouping, but the more I think
about it, the less I like the added complexity, and the limitations we
had to put in there because relationships weren't fully understandable.
With the #1 goal of getting leased/borrowed buffers working asap, here's
what I suggest:
1) Pavel/I provide a base for doing the bare minimum, which is having an
ephemeral buffer that zc can use.
2) You do the ublk bits on top
Yes this won't have grouping, so buf update will have to be done
separately. The downside here is that it'll add a (tiny) bit of overhead
as there's an extra sqe involved, but I don't really think that's an
issue, not even a minor one. The main objective here is NOT copying the
data, which will dwarf any other tiny extra overhead added.
This avoids introducing sqe grouping as a concept as a requirement for
zero copy ublk, which I did mention earlier is part of the complication
here. I'm a BIG fan of keeping things simple initially, particularly
when it adds major dependency complexity to the core code.
The goal here is doing the simple buffer leasing in such a way that it's
trivially understandable, and doesn't depend on grouping. With that, we
can easily get this done for the 6.14 kernel and finally ship it. I wish
6.13 was a week more away because then I think we could get it in for
6.13, but we only really have a few days at this point, so it's a bit
late. Unfortunately!
Ming, what do you think? Let's get this sorted so we can move on to
actually being able to use zc ublk, which is the goal we all share here.
> If you can't cache the allocation in ublk, io_uring can add a cache.
> If ublk needs more space and cannot embed the structure, we can add
> a "private" pointer into io_mapped_ubuf. If it needs to check the IO
> direction, we can add that as well (though I have doubts you really need
> it, read-only might makes sense, write-only not so much). We'll also
> merge Jens' patch allowing to remove a buffer with a request.
Right, we can always improve the little things as we go forward and make
it more efficient. I do think it's diminishing returns at that point,
but that doesn't mean we should not do it and make it better. But first,
let's get the concept working.
That's just violent agreement btw, I think we all share that objective
:-)
--
Jens Axboe