* [PATCHSET v4 0/16] Add support for ring mapped provided buffers
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence
Hi,
This series builds up to adding support for a different way of doing
provided buffers. The interesting bits are in patch 16, which also has
some performance numbers and an explanation of the feature.
Patches 1..6 are cleanups that should just be applied separately; I
think they clean up the existing code quite nicely.
Patch 7 switches provided buffers from the hashed list approach to
using an array (for up to 64 groups), with an xarray for the larger
sparse space.
Patches 8..13 are just cleanups and generic optimizations.
Patch 14 adds NOP support for provided buffers, just so that we can
benchmark the last change.
Patch 15 just abstracts out the pinning code.
Patch 16 finally adds the feature.
This passes the full liburing test suite, and various test cases I
adapted to use ring provided buffers.
v4: - Shrink io_kiocb compared to before this series (-8 bytes)
- Save some space in io_buffer_list
- Add patch moving provided buffers to array + xarray
- Add comments
- Unify cflags handling for classic/ring buffers
- Fix bid/bgid types
Can also be found in my git repo, for-5.19/io_uring-pbuf branch:
https://git.kernel.dk/cgit/linux-block/log/?h=for-5.19/io_uring-pbuf
fs/io_uring.c | 599 ++++++++++++++++++++++++----------
include/uapi/linux/io_uring.h | 28 ++
2 files changed, 462 insertions(+), 165 deletions(-)
--
Jens Axboe
* [PATCH 01/16] io_uring: kill io_recv_buffer_select() wrapper
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
It's just a thin wrapper around io_buffer_select(); get rid of it.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index dfebbf3a272a..12f61ce429dc 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5897,14 +5897,6 @@ static int io_recvmsg_copy_hdr(struct io_kiocb *req,
return __io_recvmsg_copy_hdr(req, iomsg);
}
-static struct io_buffer *io_recv_buffer_select(struct io_kiocb *req,
- unsigned int issue_flags)
-{
- struct io_sr_msg *sr = &req->sr_msg;
-
- return io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
-}
-
static int io_recvmsg_prep_async(struct io_kiocb *req)
{
int ret;
@@ -5961,7 +5953,7 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
}
if (req->flags & REQ_F_BUFFER_SELECT) {
- kbuf = io_recv_buffer_select(req, issue_flags);
+ kbuf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
@@ -6022,7 +6014,7 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
return -ENOTSOCK;
if (req->flags & REQ_F_BUFFER_SELECT) {
- kbuf = io_recv_buffer_select(req, issue_flags);
+ kbuf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
buf = u64_to_user_ptr(kbuf->addr);
--
2.35.1
* [PATCH 02/16] io_uring: use 'sr' vs 'req->sr_msg' consistently
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
For all of send/sendmsg and recv/recvmsg we have the local 'sr' variable,
yet some cases still use req->sr_msg which sr points to. Use 'sr'
consistently.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 12f61ce429dc..38bd5dfb4160 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5725,7 +5725,7 @@ static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
kmsg = &iomsg;
}
- flags = req->sr_msg.msg_flags;
+ flags = sr->msg_flags;
if (issue_flags & IO_URING_F_NONBLOCK)
flags |= MSG_DONTWAIT;
if (flags & MSG_WAITALL)
@@ -5780,7 +5780,7 @@ static int io_send(struct io_kiocb *req, unsigned int issue_flags)
msg.msg_controllen = 0;
msg.msg_namelen = 0;
- flags = req->sr_msg.msg_flags;
+ flags = sr->msg_flags;
if (issue_flags & IO_URING_F_NONBLOCK)
flags |= MSG_DONTWAIT;
if (flags & MSG_WAITALL)
@@ -5957,19 +5957,18 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
- kmsg->fast_iov[0].iov_len = req->sr_msg.len;
- iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov,
- 1, req->sr_msg.len);
+ kmsg->fast_iov[0].iov_len = sr->len;
+ iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov, 1,
+ sr->len);
}
- flags = req->sr_msg.msg_flags;
+ flags = sr->msg_flags;
if (force_nonblock)
flags |= MSG_DONTWAIT;
if (flags & MSG_WAITALL)
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
- ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
- kmsg->uaddr, flags);
+ ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg, kmsg->uaddr, flags);
if (ret < min_ret) {
if (ret == -EAGAIN && force_nonblock)
return io_setup_async_msg(req, kmsg);
@@ -6031,7 +6030,7 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
msg.msg_iocb = NULL;
msg.msg_flags = 0;
- flags = req->sr_msg.msg_flags;
+ flags = sr->msg_flags;
if (force_nonblock)
flags |= MSG_DONTWAIT;
if (flags & MSG_WAITALL)
--
2.35.1
* [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
There's no point in having callers provide a kbuf; we're just returning
the address anyway.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 42 ++++++++++++++++++------------------------
1 file changed, 18 insertions(+), 24 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 38bd5dfb4160..d9c9eb5e4bab 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3571,15 +3571,15 @@ static void io_buffer_add_list(struct io_ring_ctx *ctx,
list_add(&bl->list, list);
}
-static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
- int bgid, unsigned int issue_flags)
+static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+ int bgid, unsigned int issue_flags)
{
struct io_buffer *kbuf = req->kbuf;
struct io_ring_ctx *ctx = req->ctx;
struct io_buffer_list *bl;
if (req->flags & REQ_F_BUFFER_SELECTED)
- return kbuf;
+ return u64_to_user_ptr(kbuf->addr);
io_ring_submit_lock(req->ctx, issue_flags);
@@ -3591,25 +3591,18 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
*len = kbuf->len;
req->flags |= REQ_F_BUFFER_SELECTED;
req->kbuf = kbuf;
- } else {
- kbuf = ERR_PTR(-ENOBUFS);
+ io_ring_submit_unlock(req->ctx, issue_flags);
+ return u64_to_user_ptr(kbuf->addr);
}
io_ring_submit_unlock(req->ctx, issue_flags);
- return kbuf;
+ return ERR_PTR(-ENOBUFS);
}
static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
unsigned int issue_flags)
{
- struct io_buffer *kbuf;
- u16 bgid;
-
- bgid = req->buf_index;
- kbuf = io_buffer_select(req, len, bgid, issue_flags);
- if (IS_ERR(kbuf))
- return kbuf;
- return u64_to_user_ptr(kbuf->addr);
+ return io_buffer_select(req, len, req->buf_index, issue_flags);
}
#ifdef CONFIG_COMPAT
@@ -5934,7 +5927,6 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
struct io_async_msghdr iomsg, *kmsg;
struct io_sr_msg *sr = &req->sr_msg;
struct socket *sock;
- struct io_buffer *kbuf;
unsigned flags;
int ret, min_ret = 0;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
@@ -5953,10 +5945,12 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
}
if (req->flags & REQ_F_BUFFER_SELECT) {
- kbuf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
- if (IS_ERR(kbuf))
- return PTR_ERR(kbuf);
- kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
+ void __user *buf;
+
+ buf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+ kmsg->fast_iov[0].iov_base = buf;
kmsg->fast_iov[0].iov_len = sr->len;
iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov, 1,
sr->len);
@@ -5998,7 +5992,6 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
{
- struct io_buffer *kbuf;
struct io_sr_msg *sr = &req->sr_msg;
struct msghdr msg;
void __user *buf = sr->buf;
@@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
return -ENOTSOCK;
if (req->flags & REQ_F_BUFFER_SELECT) {
- kbuf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
- if (IS_ERR(kbuf))
- return PTR_ERR(kbuf);
- buf = u64_to_user_ptr(kbuf->addr);
+ void __user *buf;
+
+ buf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
}
ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
--
2.35.1
* [PATCH 04/16] io_uring: kill io_rw_buffer_select() wrapper
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
After the recent changes, this is just a direct call to io_buffer_select()
anyway. With this change, there are no wrappers left for provided
buffer selection.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index d9c9eb5e4bab..fc8755f5ff86 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3599,12 +3599,6 @@ static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
return ERR_PTR(-ENOBUFS);
}
-static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
- unsigned int issue_flags)
-{
- return io_buffer_select(req, len, req->buf_index, issue_flags);
-}
-
#ifdef CONFIG_COMPAT
static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
unsigned int issue_flags)
@@ -3612,7 +3606,7 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
struct compat_iovec __user *uiov;
compat_ssize_t clen;
void __user *buf;
- ssize_t len;
+ size_t len;
uiov = u64_to_user_ptr(req->rw.addr);
if (!access_ok(uiov, sizeof(*uiov)))
@@ -3623,7 +3617,7 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
return -EINVAL;
len = clen;
- buf = io_rw_buffer_select(req, &len, issue_flags);
+ buf = io_buffer_select(req, &len, req->buf_index, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
iov[0].iov_base = buf;
@@ -3645,7 +3639,7 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
len = iov[0].iov_len;
if (len < 0)
return -EINVAL;
- buf = io_rw_buffer_select(req, &len, issue_flags);
+ buf = io_buffer_select(req, &len, req->buf_index, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
iov[0].iov_base = buf;
@@ -3701,7 +3695,8 @@ static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
if (req->flags & REQ_F_BUFFER_SELECT) {
- buf = io_rw_buffer_select(req, &sqe_len, issue_flags);
+ buf = io_buffer_select(req, &sqe_len, req->buf_index,
+ issue_flags);
if (IS_ERR(buf))
return ERR_CAST(buf);
req->rw.len = sqe_len;
--
2.35.1
* [PATCH 05/16] io_uring: ignore ->buf_index if REQ_F_BUFFER_SELECT isn't set
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
There's no point in validity checking buf_index if the request doesn't
have REQ_F_BUFFER_SELECT set, as we will never use it for that case.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index fc8755f5ff86..baa1b5426bfc 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3686,10 +3686,6 @@ static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
return NULL;
}
- /* buffer index only valid with fixed read/write, or buffer select */
- if (unlikely(req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT)))
- return ERR_PTR(-EINVAL);
-
buf = u64_to_user_ptr(req->rw.addr);
sqe_len = req->rw.len;
--
2.35.1
* [PATCH 06/16] io_uring: always use req->buf_index for the provided buffer group
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
The read/write opcodes use it already, but recv/recvmsg do not. If
we switch them over and read and validate this at init time while we're
checking if the opcode supports it anyway, then we can do it in one spot
and we don't have to pass in a separate group ID for io_buffer_select().
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index baa1b5426bfc..eba18685a705 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -644,7 +644,6 @@ struct io_sr_msg {
void __user *buf;
};
int msg_flags;
- int bgid;
size_t len;
size_t done_io;
};
@@ -3412,6 +3411,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
req->rw.addr = READ_ONCE(sqe->addr);
req->rw.len = READ_ONCE(sqe->len);
req->rw.flags = READ_ONCE(sqe->rw_flags);
+ /* used for fixed read/write too - just read unconditionally */
req->buf_index = READ_ONCE(sqe->buf_index);
return 0;
}
@@ -3572,7 +3572,7 @@ static void io_buffer_add_list(struct io_ring_ctx *ctx,
}
static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
- int bgid, unsigned int issue_flags)
+ unsigned int issue_flags)
{
struct io_buffer *kbuf = req->kbuf;
struct io_ring_ctx *ctx = req->ctx;
@@ -3583,7 +3583,7 @@ static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
io_ring_submit_lock(req->ctx, issue_flags);
- bl = io_buffer_get_list(ctx, bgid);
+ bl = io_buffer_get_list(ctx, req->buf_index);
if (bl && !list_empty(&bl->buf_list)) {
kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
list_del(&kbuf->list);
@@ -3617,7 +3617,7 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
return -EINVAL;
len = clen;
- buf = io_buffer_select(req, &len, req->buf_index, issue_flags);
+ buf = io_buffer_select(req, &len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
iov[0].iov_base = buf;
@@ -3639,7 +3639,7 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
len = iov[0].iov_len;
if (len < 0)
return -EINVAL;
- buf = io_buffer_select(req, &len, req->buf_index, issue_flags);
+ buf = io_buffer_select(req, &len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
iov[0].iov_base = buf;
@@ -3691,8 +3691,7 @@ static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
if (req->flags & REQ_F_BUFFER_SELECT) {
- buf = io_buffer_select(req, &sqe_len, req->buf_index,
- issue_flags);
+ buf = io_buffer_select(req, &sqe_len, issue_flags);
if (IS_ERR(buf))
return ERR_CAST(buf);
req->rw.len = sqe_len;
@@ -5900,7 +5899,6 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
sr->len = READ_ONCE(sqe->len);
- sr->bgid = READ_ONCE(sqe->buf_group);
sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
if (sr->msg_flags & MSG_DONTWAIT)
req->flags |= REQ_F_NOWAIT;
@@ -5938,7 +5936,7 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
if (req->flags & REQ_F_BUFFER_SELECT) {
void __user *buf;
- buf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
+ buf = io_buffer_select(req, &sr->len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
kmsg->fast_iov[0].iov_base = buf;
@@ -5999,7 +5997,7 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
if (req->flags & REQ_F_BUFFER_SELECT) {
void __user *buf;
- buf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
+ buf = io_buffer_select(req, &sr->len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
}
@@ -8272,9 +8270,11 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
/* enforce forwards compatibility on users */
if (sqe_flags & ~SQE_VALID_FLAGS)
return -EINVAL;
- if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
- !io_op_defs[opcode].buffer_select)
- return -EOPNOTSUPP;
+ if (sqe_flags & IOSQE_BUFFER_SELECT) {
+ if (!io_op_defs[opcode].buffer_select)
+ return -EOPNOTSUPP;
+ req->buf_index = READ_ONCE(sqe->buf_group);
+ }
if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
ctx->drain_disabled = true;
if (sqe_flags & IOSQE_IO_DRAIN) {
--
2.35.1
* [PATCH 07/16] io_uring: get rid of hashed provided buffer groups
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
Use a plain array for any group ID that's less than 64, and punt
anything beyond that to an xarray. 64 entries fit in a page even for
4KB page sizes, even with the planned additions.
This makes the expected group usage faster by avoiding a hash and lookup
to find our list, and it uses less memory upfront by not allocating any
memory for provided buffers unless they are actually being used.
Suggested-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 97 ++++++++++++++++++++++++++++++---------------------
1 file changed, 58 insertions(+), 39 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index eba18685a705..7efe2de5ce81 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -283,7 +283,6 @@ struct io_rsrc_data {
};
struct io_buffer_list {
- struct list_head list;
struct list_head buf_list;
__u16 bgid;
};
@@ -358,7 +357,7 @@ struct io_ev_fd {
struct rcu_head rcu;
};
-#define IO_BUFFERS_HASH_BITS 5
+#define BGID_ARRAY 64
struct io_ring_ctx {
/* const or read-mostly hot data */
@@ -414,7 +413,8 @@ struct io_ring_ctx {
struct list_head timeout_list;
struct list_head ltimeout_list;
struct list_head cq_overflow_list;
- struct list_head *io_buffers;
+ struct io_buffer_list *io_bl;
+ struct xarray io_bl_xa;
struct list_head io_buffers_cache;
struct list_head apoll_cache;
struct xarray personalities;
@@ -1613,15 +1613,10 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req,
static struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx *ctx,
unsigned int bgid)
{
- struct list_head *hash_list;
- struct io_buffer_list *bl;
-
- hash_list = &ctx->io_buffers[hash_32(bgid, IO_BUFFERS_HASH_BITS)];
- list_for_each_entry(bl, hash_list, list)
- if (bl->bgid == bgid || bgid == -1U)
- return bl;
+ if (ctx->io_bl && bgid < BGID_ARRAY)
+ return &ctx->io_bl[bgid];
- return NULL;
+ return xa_load(&ctx->io_bl_xa, bgid);
}
static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
@@ -1727,12 +1722,14 @@ static __cold void io_fallback_req_func(struct work_struct *work)
static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
{
struct io_ring_ctx *ctx;
- int i, hash_bits;
+ int hash_bits;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return NULL;
+ xa_init(&ctx->io_bl_xa);
+
/*
* Use 5 bits less than the max cq entries, that should give us around
* 32 entries per hash list if totally full and uniformly spread.
@@ -1754,13 +1751,6 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
/* set invalid range, so io_import_fixed() fails meeting it */
ctx->dummy_ubuf->ubuf = -1UL;
- ctx->io_buffers = kcalloc(1U << IO_BUFFERS_HASH_BITS,
- sizeof(struct list_head), GFP_KERNEL);
- if (!ctx->io_buffers)
- goto err;
- for (i = 0; i < (1U << IO_BUFFERS_HASH_BITS); i++)
- INIT_LIST_HEAD(&ctx->io_buffers[i]);
-
if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
goto err;
@@ -1796,7 +1786,8 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
err:
kfree(ctx->dummy_ubuf);
kfree(ctx->cancel_hash);
- kfree(ctx->io_buffers);
+ kfree(ctx->io_bl);
+ xa_destroy(&ctx->io_bl_xa);
kfree(ctx);
return NULL;
}
@@ -3560,15 +3551,14 @@ static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
return __io_import_fixed(req, rw, iter, imu);
}
-static void io_buffer_add_list(struct io_ring_ctx *ctx,
- struct io_buffer_list *bl, unsigned int bgid)
+static int io_buffer_add_list(struct io_ring_ctx *ctx,
+ struct io_buffer_list *bl, unsigned int bgid)
{
- struct list_head *list;
-
- list = &ctx->io_buffers[hash_32(bgid, IO_BUFFERS_HASH_BITS)];
- INIT_LIST_HEAD(&bl->buf_list);
bl->bgid = bgid;
- list_add(&bl->list, list);
+ if (bgid < BGID_ARRAY)
+ return 0;
+
+ return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
}
static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
@@ -5318,6 +5308,23 @@ static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
return i ? 0 : -ENOMEM;
}
+static __cold int io_init_bl_list(struct io_ring_ctx *ctx)
+{
+ int i;
+
+ ctx->io_bl = kcalloc(BGID_ARRAY, sizeof(struct io_buffer_list),
+ GFP_KERNEL);
+ if (!ctx->io_bl)
+ return -ENOMEM;
+
+ for (i = 0; i < BGID_ARRAY; i++) {
+ INIT_LIST_HEAD(&ctx->io_bl[i].buf_list);
+ ctx->io_bl[i].bgid = i;
+ }
+
+ return 0;
+}
+
static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_provide_buf *p = &req->pbuf;
@@ -5327,6 +5334,12 @@ static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
io_ring_submit_lock(ctx, issue_flags);
+ if (unlikely(p->bgid < BGID_ARRAY && !ctx->io_bl)) {
+ ret = io_init_bl_list(ctx);
+ if (ret)
+ goto err;
+ }
+
bl = io_buffer_get_list(ctx, p->bgid);
if (unlikely(!bl)) {
bl = kmalloc(sizeof(*bl), GFP_KERNEL);
@@ -5334,7 +5347,11 @@ static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
ret = -ENOMEM;
goto err;
}
- io_buffer_add_list(ctx, bl, p->bgid);
+ ret = io_buffer_add_list(ctx, bl, p->bgid);
+ if (ret) {
+ kfree(bl);
+ goto err;
+ }
}
ret = io_add_buffers(ctx, p, bl);
@@ -10437,19 +10454,19 @@ static int io_eventfd_unregister(struct io_ring_ctx *ctx)
static void io_destroy_buffers(struct io_ring_ctx *ctx)
{
+ struct io_buffer_list *bl;
+ unsigned long index;
int i;
- for (i = 0; i < (1U << IO_BUFFERS_HASH_BITS); i++) {
- struct list_head *list = &ctx->io_buffers[i];
-
- while (!list_empty(list)) {
- struct io_buffer_list *bl;
+ for (i = 0; i < BGID_ARRAY; i++) {
+ if (!ctx->io_bl)
+ break;
+ __io_remove_buffers(ctx, &ctx->io_bl[i], -1U);
+ }
- bl = list_first_entry(list, struct io_buffer_list, list);
- __io_remove_buffers(ctx, bl, -1U);
- list_del(&bl->list);
- kfree(bl);
- }
+ xa_for_each(&ctx->io_bl_xa, index, bl) {
+ xa_erase(&ctx->io_bl_xa, bl->bgid);
+ __io_remove_buffers(ctx, bl, -1U);
}
while (!list_empty(&ctx->io_buffers_pages)) {
@@ -10558,7 +10575,8 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
io_wq_put_hash(ctx->hash_map);
kfree(ctx->cancel_hash);
kfree(ctx->dummy_ubuf);
- kfree(ctx->io_buffers);
+ kfree(ctx->io_bl);
+ xa_destroy(&ctx->io_bl_xa);
kfree(ctx);
}
@@ -12467,6 +12485,7 @@ static int __init io_uring_init(void)
/* ->buf_index is u16 */
BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
+ BUILD_BUG_ON(BGID_ARRAY * sizeof(struct io_buffer_list) > PAGE_SIZE);
/* should fit into one byte */
BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
--
2.35.1
* [PATCH 08/16] io_uring: never call io_buffer_select() for a buffer re-select
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
Callers already have room to store the addr and length information;
clean it up by having the caller just assign the previously provided
data.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7efe2de5ce81..b4bcfd5c4c3d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3568,9 +3568,6 @@ static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
struct io_ring_ctx *ctx = req->ctx;
struct io_buffer_list *bl;
- if (req->flags & REQ_F_BUFFER_SELECTED)
- return u64_to_user_ptr(kbuf->addr);
-
io_ring_submit_lock(req->ctx, issue_flags);
bl = io_buffer_get_list(ctx, req->buf_index);
@@ -3610,8 +3607,9 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
buf = io_buffer_select(req, &len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
+ req->rw.addr = (unsigned long) buf;
iov[0].iov_base = buf;
- iov[0].iov_len = (compat_size_t) len;
+ req->rw.len = iov[0].iov_len = (compat_size_t) len;
return 0;
}
#endif
@@ -3632,8 +3630,9 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
buf = io_buffer_select(req, &len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
+ req->rw.addr = (unsigned long) buf;
iov[0].iov_base = buf;
- iov[0].iov_len = len;
+ req->rw.len = iov[0].iov_len = len;
return 0;
}
@@ -3641,10 +3640,8 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
unsigned int issue_flags)
{
if (req->flags & REQ_F_BUFFER_SELECTED) {
- struct io_buffer *kbuf = req->kbuf;
-
- iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
- iov[0].iov_len = kbuf->len;
+ iov[0].iov_base = u64_to_user_ptr(req->rw.addr);
+ iov[0].iov_len = req->rw.len;
return 0;
}
if (req->rw.len != 1)
@@ -3658,6 +3655,13 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
return __io_iov_buffer_select(req, iov, issue_flags);
}
+static inline bool io_do_buffer_select(struct io_kiocb *req)
+{
+ if (!(req->flags & REQ_F_BUFFER_SELECT))
+ return false;
+ return !(req->flags & REQ_F_BUFFER_SELECTED);
+}
+
static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
struct io_rw_state *s,
unsigned int issue_flags)
@@ -3680,10 +3684,11 @@ static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
sqe_len = req->rw.len;
if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
- if (req->flags & REQ_F_BUFFER_SELECT) {
+ if (io_do_buffer_select(req)) {
buf = io_buffer_select(req, &sqe_len, issue_flags);
if (IS_ERR(buf))
return ERR_CAST(buf);
+ req->rw.addr = (unsigned long) buf;
req->rw.len = sqe_len;
}
@@ -5950,7 +5955,7 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
kmsg = &iomsg;
}
- if (req->flags & REQ_F_BUFFER_SELECT) {
+ if (io_do_buffer_select(req)) {
void __user *buf;
buf = io_buffer_select(req, &sr->len, issue_flags);
@@ -6011,12 +6016,13 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(!sock))
return -ENOTSOCK;
- if (req->flags & REQ_F_BUFFER_SELECT) {
+ if (io_do_buffer_select(req)) {
void __user *buf;
buf = io_buffer_select(req, &sr->len, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
+ sr->buf = buf;
}
ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
--
2.35.1
* [PATCH 09/16] io_uring: abstract out provided buffer list selection
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
In preparation for providing another way to select a buffer, move the
existing logic into a helper.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 34 +++++++++++++++++++++++-----------
1 file changed, 23 insertions(+), 11 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b4bcfd5c4c3d..2f83c366e35b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3561,29 +3561,41 @@ static int io_buffer_add_list(struct io_ring_ctx *ctx,
return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
}
+static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
+ struct io_buffer_list *bl,
+ unsigned int issue_flags)
+{
+ struct io_buffer *kbuf;
+
+ if (list_empty(&bl->buf_list))
+ return ERR_PTR(-ENOBUFS);
+
+ kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
+ list_del(&kbuf->list);
+ if (*len > kbuf->len)
+ *len = kbuf->len;
+ req->flags |= REQ_F_BUFFER_SELECTED;
+ req->kbuf = kbuf;
+ io_ring_submit_unlock(req->ctx, issue_flags);
+ return u64_to_user_ptr(kbuf->addr);
+}
+
static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
unsigned int issue_flags)
{
- struct io_buffer *kbuf = req->kbuf;
struct io_ring_ctx *ctx = req->ctx;
struct io_buffer_list *bl;
io_ring_submit_lock(req->ctx, issue_flags);
bl = io_buffer_get_list(ctx, req->buf_index);
- if (bl && !list_empty(&bl->buf_list)) {
- kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
- list_del(&kbuf->list);
- if (*len > kbuf->len)
- *len = kbuf->len;
- req->flags |= REQ_F_BUFFER_SELECTED;
- req->kbuf = kbuf;
+ if (unlikely(!bl)) {
io_ring_submit_unlock(req->ctx, issue_flags);
- return u64_to_user_ptr(kbuf->addr);
+ return ERR_PTR(-ENOBUFS);
}
- io_ring_submit_unlock(req->ctx, issue_flags);
- return ERR_PTR(-ENOBUFS);
+ /* selection helpers drop the submit lock again, if needed */
+ return io_provided_buffer_select(req, len, bl, issue_flags);
}
#ifdef CONFIG_COMPAT
--
2.35.1
* [PATCH 10/16] io_uring: move provided and fixed buffers into the same io_kiocb area
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
These are mutually exclusive - if you use provided buffers, then you
cannot use fixed buffers and vice versa. Move them into the same spot
in the io_kiocb, which is also advantageous for provided buffers, as
they then sit close to the submit side hot cacheline.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2f83c366e35b..84b867cff785 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -975,8 +975,14 @@ struct io_kiocb {
struct task_struct *task;
struct io_rsrc_node *rsrc_node;
- /* store used ubuf, so we can prevent reloading */
- struct io_mapped_ubuf *imu;
+
+ union {
+ /* store used ubuf, so we can prevent reloading */
+ struct io_mapped_ubuf *imu;
+
+ /* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
+ struct io_buffer *kbuf;
+ };
union {
/* used by request caches, completion batching and iopoll */
@@ -993,8 +999,6 @@ struct io_kiocb {
struct async_poll *apoll;
/* opcode allocated if it needs to store data for async defer */
void *async_data;
- /* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
- struct io_buffer *kbuf;
/* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
struct io_kiocb *link;
/* custom credentials, valid IFF REQ_F_CREDS is set */
--
2.35.1
* [PATCH 11/16] io_uring: move provided buffer state closer to submit state
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
The timeout and other items that follow are less hot, so let's move the
provided buffer state above that.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 84b867cff785..23de92f5934f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -410,12 +410,14 @@ struct io_ring_ctx {
struct io_mapped_ubuf **user_bufs;
struct io_submit_state submit_state;
- struct list_head timeout_list;
- struct list_head ltimeout_list;
- struct list_head cq_overflow_list;
+
struct io_buffer_list *io_bl;
struct xarray io_bl_xa;
struct list_head io_buffers_cache;
+
+ struct list_head timeout_list;
+ struct list_head ltimeout_list;
+ struct list_head cq_overflow_list;
struct list_head apoll_cache;
struct xarray personalities;
u32 pers_next;
--
2.35.1
* [PATCH 12/16] io_uring: eliminate the need to track provided buffer ID separately
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
We have io_kiocb->buf_index, which is used for either fixed buffers or
provided buffers. For the latter, it's used to hold the buffer group
ID for buffer selection. Post selection, req->kbuf->bid is used to get
the buffer ID.
Store the buffer ID, when selected, in req->buf_index. If we do end up
recycling the buffer, reset it back to the buffer group ID.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 23de92f5934f..ff3b803cf749 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -968,6 +968,11 @@ struct io_kiocb {
u8 opcode;
/* polled IO has completed */
u8 iopoll_completed;
+ /*
+ * Can be either a fixed buffer index, or used with provided buffers.
+ * For the latter, before issue it points to the buffer group ID,
+ * and after selection it points to the buffer ID itself.
+ */
u16 buf_index;
unsigned int flags;
@@ -1562,14 +1567,11 @@ static inline void io_req_set_rsrc_node(struct io_kiocb *req,
static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
{
- struct io_buffer *kbuf = req->kbuf;
- unsigned int cflags;
-
- cflags = IORING_CQE_F_BUFFER | (kbuf->bid << IORING_CQE_BUFFER_SHIFT);
req->flags &= ~REQ_F_BUFFER_SELECTED;
- list_add(&kbuf->list, list);
+ list_add(&req->kbuf->list, list);
req->kbuf = NULL;
- return cflags;
+
+ return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
}
static inline unsigned int io_put_kbuf_comp(struct io_kiocb *req)
@@ -1643,6 +1645,7 @@ static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
bl = io_buffer_get_list(ctx, buf->bgid);
list_add(&buf->list, &bl->buf_list);
req->flags &= ~REQ_F_BUFFER_SELECTED;
+ req->buf_index = buf->bgid;
req->kbuf = NULL;
io_ring_submit_unlock(ctx, issue_flags);
@@ -3582,6 +3585,7 @@ static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
*len = kbuf->len;
req->flags |= REQ_F_BUFFER_SELECTED;
req->kbuf = kbuf;
+ req->buf_index = kbuf->bid;
io_ring_submit_unlock(req->ctx, issue_flags);
return u64_to_user_ptr(kbuf->addr);
}
--
2.35.1
* [PATCH 13/16] io_uring: don't clear req->kbuf when buffer selection is done
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe, Dylan Yudaken
It's not needed as the REQ_F_BUFFER_SELECTED flag tracks the state of
whether or not kbuf is valid, so just drop it.
Suggested-by: Dylan Yudaken <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ff3b803cf749..cc6b5173d886 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1569,7 +1569,6 @@ static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
{
req->flags &= ~REQ_F_BUFFER_SELECTED;
list_add(&req->kbuf->list, list);
- req->kbuf = NULL;
return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
}
--
2.35.1
* [PATCH 14/16] io_uring: add buffer selection support to IORING_OP_NOP
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
Obviously not really useful since it's not transferring data, but it
is helpful in benchmarking the overhead of provided buffers.
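For reference, driving this path from userspace could look roughly like
the below (a liburing based sketch, not part of this patch; it assumes
'ring' was set up with io_uring_queue_init() and that buffers have
already been provided to group 0):

	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	unsigned bid;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);
	/* ask the kernel to pick a buffer from group 0 for this NOP */
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = 0;

	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	if (cqe->flags & IORING_CQE_F_BUFFER) {
		/* ID of the buffer that was consumed for this request */
		bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
	}
	io_uring_cqe_seen(&ring, cqe);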
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index cc6b5173d886..850125c02c9d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1061,7 +1061,9 @@ struct io_op_def {
};
static const struct io_op_def io_op_defs[] = {
- [IORING_OP_NOP] = {},
+ [IORING_OP_NOP] = {
+ .buffer_select = 1,
+ },
[IORING_OP_READV] = {
.needs_file = 1,
.unbound_nonreg_file = 1,
@@ -4911,11 +4913,20 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_ring_ctx *ctx = req->ctx;
+ void __user *buf;
if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
- __io_req_complete(req, issue_flags, 0, 0);
+ if (req->flags & REQ_F_BUFFER_SELECT) {
+ size_t len = 1;
+
+ buf = io_buffer_select(req, &len, issue_flags);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+ }
+
+ __io_req_complete(req, issue_flags, 0, io_put_kbuf(req, issue_flags));
return 0;
}
--
2.35.1
* [PATCH 15/16] io_uring: add io_pin_pages() helper
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
Abstract this out from io_sqe_buffer_register() so we can use it
elsewhere too without duplicating this code.
No intended functional changes in this patch.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 77 +++++++++++++++++++++++++++++++++------------------
1 file changed, 50 insertions(+), 27 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 850125c02c9d..505c1d8cad30 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -10193,30 +10193,18 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
return ret;
}
-static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
- struct io_mapped_ubuf **pimu,
- struct page **last_hpage)
+static struct page **io_pin_pages(unsigned long ubuf, unsigned long len,
+ int *npages)
{
- struct io_mapped_ubuf *imu = NULL;
+ unsigned long start, end, nr_pages;
struct vm_area_struct **vmas = NULL;
struct page **pages = NULL;
- unsigned long off, start, end, ubuf;
- size_t size;
- int ret, pret, nr_pages, i;
-
- if (!iov->iov_base) {
- *pimu = ctx->dummy_ubuf;
- return 0;
- }
+ int i, pret, ret = -ENOMEM;
- ubuf = (unsigned long) iov->iov_base;
- end = (ubuf + iov->iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
start = ubuf >> PAGE_SHIFT;
nr_pages = end - start;
- *pimu = NULL;
- ret = -ENOMEM;
-
pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
if (!pages)
goto done;
@@ -10226,10 +10214,6 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
if (!vmas)
goto done;
- imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
- if (!imu)
- goto done;
-
ret = 0;
mmap_read_lock(current->mm);
pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
@@ -10247,6 +10231,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
break;
}
}
+ *npages = nr_pages;
} else {
ret = pret < 0 ? pret : -EFAULT;
}
@@ -10260,14 +10245,53 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
unpin_user_pages(pages, pret);
goto done;
}
+ ret = 0;
+done:
+ kvfree(vmas);
+ if (ret < 0) {
+ kvfree(pages);
+ pages = ERR_PTR(ret);
+ }
+ return pages;
+}
- ret = io_buffer_account_pin(ctx, pages, pret, imu, last_hpage);
+static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
+ struct io_mapped_ubuf **pimu,
+ struct page **last_hpage)
+{
+ struct io_mapped_ubuf *imu = NULL;
+ struct page **pages = NULL;
+ unsigned long off;
+ size_t size;
+ int ret, nr_pages, i;
+
+ if (!iov->iov_base) {
+ *pimu = ctx->dummy_ubuf;
+ return 0;
+ }
+
+ *pimu = NULL;
+ ret = -ENOMEM;
+
+ pages = io_pin_pages((unsigned long) iov->iov_base, iov->iov_len,
+ &nr_pages);
+ if (IS_ERR(pages)) {
+ ret = PTR_ERR(pages);
+ pages = NULL;
+ goto done;
+ }
+
+ imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
+ if (!imu)
+ goto done;
+
+ ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
if (ret) {
- unpin_user_pages(pages, pret);
+ unpin_user_pages(pages, nr_pages);
goto done;
}
- off = ubuf & ~PAGE_MASK;
+ off = (unsigned long) iov->iov_base & ~PAGE_MASK;
size = iov->iov_len;
for (i = 0; i < nr_pages; i++) {
size_t vec_len;
@@ -10280,8 +10304,8 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
size -= vec_len;
}
/* store original address for later verification */
- imu->ubuf = ubuf;
- imu->ubuf_end = ubuf + iov->iov_len;
+ imu->ubuf = (unsigned long) iov->iov_base;
+ imu->ubuf_end = imu->ubuf + iov->iov_len;
imu->nr_bvecs = nr_pages;
*pimu = imu;
ret = 0;
@@ -10289,7 +10313,6 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
if (ret)
kvfree(imu);
kvfree(pages);
- kvfree(vmas);
return ret;
}
--
2.35.1
* [PATCH 16/16] io_uring: add support for ring mapped supplied buffers
From: Jens Axboe @ 2022-05-01 20:56 UTC (permalink / raw)
To: io-uring; +Cc: asml.silence, Jens Axboe
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real-world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time they are needed. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to set up a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring shares the head
with the application, while the tail remains private to the kernel.
Provided buffers set up with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries from the
ring; they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge the overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how many
buffers are provided back at a time after they have been consumed:
Test                    Replenish       NOPs/sec
================================================================
No provided buffers     NA              ~30M
Provided buffers        32              ~16M
Provided buffers        1               ~10M
Ring buffers            32              ~27M
Ring buffers            1               ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care whether you provide 1 or more back
at the same time. This means applications can just replenish as they go,
rather than needing to batch and compact, further reducing the overhead
in the application. The NOP benchmark above doesn't need to do any
compaction, so that overhead isn't even reflected in the test.
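Purely as illustration (not part of this patch), the application side
could look roughly like the sketch below. It uses the uapi names this
patch adds (IORING_REGISTER_PBUF_RING, struct io_uring_buf_reg,
struct io_uring_buf_ring, struct io_uring_buf); the exact struct layout,
the nr_args convention for io_uring_register(2) and the memory ordering
helper are assumptions here and should be checked against the final
header and liburing support:

	/*
	 * Illustrative only: register a buffer ring for group 'bgid' and
	 * hand one buffer back to the kernel. Assumes <unistd.h>,
	 * <sys/syscall.h> and the io_uring uapi header are included, and
	 * that 'ring_mem' is page aligned and sized for 'ring_entries'
	 * entries.
	 */
	static int pbuf_ring_example(int ring_fd, void *ring_mem,
				     unsigned ring_entries, unsigned short bgid,
				     void *addr, unsigned len, unsigned short bid)
	{
		struct io_uring_buf_ring *br = ring_mem;
		struct io_uring_buf_reg reg = {
			.ring_addr	= (unsigned long) ring_mem,
			.ring_entries	= ring_entries,	/* must be a power of 2 */
			.bgid		= bgid,
		};
		unsigned head = 0;

		if (syscall(__NR_io_uring_register, ring_fd,
			    IORING_REGISTER_PBUF_RING, &reg, 1) < 0)
			return -1;

		/* fill the next free slot, then publish it by bumping the head */
		struct io_uring_buf *buf = &br->bufs[head & (ring_entries - 1)];

		buf->addr = (unsigned long) addr;
		buf->len = len;
		buf->bid = bid;
		/* the kernel pairs this with an acquire load of ->head */
		__atomic_store_n(&br->head, head + 1, __ATOMIC_RELEASE);
		return 0;
	}

Replenishing after a completion is just the last few lines again: refill
the slot at the current head and do another release store of head + 1.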
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 237 ++++++++++++++++++++++++++++++++--
include/uapi/linux/io_uring.h | 28 ++++
2 files changed, 253 insertions(+), 12 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 505c1d8cad30..6127cc020dc3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -283,8 +283,25 @@ struct io_rsrc_data {
};
struct io_buffer_list {
- struct list_head buf_list;
+ /*
+ * If ->buf_nr_pages is set, then buf_pages/buf_ring are used. If not,
+ * then these are classic provided buffers and ->buf_list is used.
+ */
+ union {
+ struct list_head buf_list;
+ struct {
+ struct page **buf_pages;
+ struct io_uring_buf_ring *buf_ring;
+ };
+ };
__u16 bgid;
+
+ /* below is for ring provided buffers */
+ __u16 buf_nr_pages;
+ __u16 nr_entries;
+ __u16 buf_per_page;
+ __u32 tail;
+ __u32 mask;
};
struct io_buffer {
@@ -815,6 +832,7 @@ enum {
REQ_F_NEED_CLEANUP_BIT,
REQ_F_POLLED_BIT,
REQ_F_BUFFER_SELECTED_BIT,
+ REQ_F_BUFFER_RING_BIT,
REQ_F_COMPLETE_INLINE_BIT,
REQ_F_REISSUE_BIT,
REQ_F_CREDS_BIT,
@@ -865,6 +883,8 @@ enum {
REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),
/* buffer already selected */
REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
+ /* buffer selected from ring, needs commit */
+ REQ_F_BUFFER_RING = BIT(REQ_F_BUFFER_RING_BIT),
/* completion is deferred through io_comp_state */
REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
/* caller should reissue async */
@@ -989,6 +1009,12 @@ struct io_kiocb {
/* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
struct io_buffer *kbuf;
+
+ /*
+ * stores buffer ID for ring provided buffers, valid IFF
+ * REQ_F_BUFFER_RING is set.
+ */
+ struct io_buffer_list *buf_list;
};
union {
@@ -1569,8 +1595,14 @@ static inline void io_req_set_rsrc_node(struct io_kiocb *req,
static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
{
- req->flags &= ~REQ_F_BUFFER_SELECTED;
- list_add(&req->kbuf->list, list);
+ if (req->flags & REQ_F_BUFFER_RING) {
+ if (req->buf_list)
+ req->buf_list->tail++;
+ req->flags &= ~REQ_F_BUFFER_RING;
+ } else {
+ list_add(&req->kbuf->list, list);
+ req->flags &= ~REQ_F_BUFFER_SELECTED;
+ }
return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
}
@@ -1579,7 +1611,7 @@ static inline unsigned int io_put_kbuf_comp(struct io_kiocb *req)
{
lockdep_assert_held(&req->ctx->completion_lock);
- if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
+ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
return 0;
return __io_put_kbuf(req, &req->ctx->io_buffers_comp);
}
@@ -1589,7 +1621,7 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req,
{
unsigned int cflags;
- if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
+ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
return 0;
/*
@@ -1604,7 +1636,10 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req,
* We migrate buffers from the comp_list to the issue cache list
* when we need one.
*/
- if (issue_flags & IO_URING_F_UNLOCKED) {
+ if (req->flags & REQ_F_BUFFER_RING) {
+ /* no buffers to recycle for this case */
+ cflags = __io_put_kbuf(req, NULL);
+ } else if (issue_flags & IO_URING_F_UNLOCKED) {
struct io_ring_ctx *ctx = req->ctx;
spin_lock(&ctx->completion_lock);
@@ -1634,11 +1669,23 @@ static void io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
struct io_buffer_list *bl;
struct io_buffer *buf;
- if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
+ if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
return;
/* don't recycle if we already did IO to this buffer */
if (req->flags & REQ_F_PARTIAL_IO)
return;
+ /*
+ * We don't need to recycle for REQ_F_BUFFER_RING, we can just clear
+ * the flag and hence ensure that bl->tail doesn't get incremented.
+ * If the tail has already been incremented, hang on to it.
+ */
+ if (req->flags & REQ_F_BUFFER_RING) {
+ if (req->buf_list) {
+ req->buf_index = req->buf_list->bgid;
+ req->flags &= ~REQ_F_BUFFER_RING;
+ }
+ return;
+ }
io_ring_submit_lock(ctx, issue_flags);
@@ -3591,6 +3638,53 @@ static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
return u64_to_user_ptr(kbuf->addr);
}
+static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ struct io_buffer_list *bl,
+ unsigned int issue_flags)
+{
+ struct io_uring_buf_ring *br = bl->buf_ring;
+ struct io_uring_buf *buf = &br->bufs[0];
+ __u32 tail = bl->tail;
+
+ if (unlikely(smp_load_acquire(&br->head) == tail))
+ return ERR_PTR(-ENOBUFS);
+
+ tail &= bl->mask;
+ if (tail < bl->buf_per_page) {
+ buf = &br->bufs[tail];
+ } else {
+ int index = tail - bl->buf_per_page;
+ int off = index & bl->buf_per_page;
+
+ index = (index >> (PAGE_SHIFT - 4)) + 1;
+ buf = page_address(bl->buf_pages[index]);
+ buf += off;
+ }
+ if (*len > buf->len)
+ *len = buf->len;
+ req->flags |= REQ_F_BUFFER_RING;
+ req->buf_list = bl;
+ req->buf_index = buf->bid;
+
+ if (!(issue_flags & IO_URING_F_UNLOCKED))
+ return u64_to_user_ptr(buf->addr);
+
+ /*
+ * If we came in unlocked, we have no choice but to
+ * consume the buffer here. This does mean it'll be
+ * pinned until the IO completes. But coming in
+ * unlocked means we're in io-wq context, hence there
+ * should be no further retry. For the locked case, the
+ * caller must ensure to call the commit when the
+ * transfer completes (or if we get -EAGAIN and must
+ * poll or retry).
+ */
+ req->buf_list = NULL;
+ bl->tail++;
+ io_ring_submit_unlock(req->ctx, issue_flags);
+ return u64_to_user_ptr(buf->addr);
+}
+
static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
unsigned int issue_flags)
{
@@ -3606,6 +3700,9 @@ static void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
}
/* selection helpers drop the submit lock again, if needed */
+ if (bl->buf_nr_pages)
+ return io_ring_buffer_select(req, len, bl, issue_flags);
+
return io_provided_buffer_select(req, len, bl, issue_flags);
}
@@ -3662,7 +3759,7 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
unsigned int issue_flags)
{
- if (req->flags & REQ_F_BUFFER_SELECTED) {
+ if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)) {
iov[0].iov_base = u64_to_user_ptr(req->rw.addr);
iov[0].iov_len = req->rw.len;
return 0;
@@ -3682,7 +3779,7 @@ static inline bool io_do_buffer_select(struct io_kiocb *req)
{
if (!(req->flags & REQ_F_BUFFER_SELECT))
return false;
- return !(req->flags & REQ_F_BUFFER_SELECTED);
+ return !(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING));
}
static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
@@ -5204,6 +5301,19 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
if (!nbufs)
return 0;
+ if (bl->buf_nr_pages) {
+ int j;
+
+ if (WARN_ON_ONCE(nbufs != -1U))
+ return -EINVAL;
+ for (j = 0; j < bl->buf_nr_pages; j++)
+ unpin_user_page(bl->buf_pages[j]);
+ kvfree(bl->buf_pages);
+ bl->buf_pages = NULL;
+ bl->buf_nr_pages = 0;
+ return bl->buf_ring->head - bl->tail;
+ }
+
/* the head kbuf is the list itself */
while (!list_empty(&bl->buf_list)) {
struct io_buffer *nxt;
@@ -5230,8 +5340,12 @@ static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
ret = -ENOENT;
bl = io_buffer_get_list(ctx, p->bgid);
- if (bl)
- ret = __io_remove_buffers(ctx, bl, p->nbufs);
+ if (bl) {
+ ret = -EINVAL;
+ /* can't use provide/remove buffers command on mapped buffers */
+ if (!bl->buf_nr_pages)
+ ret = __io_remove_buffers(ctx, bl, p->nbufs);
+ }
if (ret < 0)
req_set_fail(req);
@@ -5379,7 +5493,7 @@ static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
bl = io_buffer_get_list(ctx, p->bgid);
if (unlikely(!bl)) {
- bl = kmalloc(sizeof(*bl), GFP_KERNEL);
+ bl = kzalloc(sizeof(*bl), GFP_KERNEL);
if (!bl) {
ret = -ENOMEM;
goto err;
@@ -5390,6 +5504,11 @@ static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
goto err;
}
}
+ /* can't add buffers via this command for a mapped buffer ring */
+ if (bl->buf_nr_pages) {
+ ret = -EINVAL;
+ goto err;
+ }
ret = io_add_buffers(ctx, p, bl);
err:
@@ -12333,6 +12452,87 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
return ret;
}
+static int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+{
+ struct io_uring_buf_ring *br;
+ struct io_uring_buf_reg reg;
+ struct io_buffer_list *bl;
+ struct page **pages;
+ int nr_pages;
+
+ if (copy_from_user(&reg, arg, sizeof(reg)))
+ return -EFAULT;
+
+ if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
+ return -EINVAL;
+ if (!reg.ring_addr)
+ return -EFAULT;
+ if (reg.ring_addr & ~PAGE_MASK)
+ return -EINVAL;
+ if (!is_power_of_2(reg.ring_entries))
+ return -EINVAL;
+
+ if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {
+ int ret = io_init_bl_list(ctx);
+ if (ret)
+ return ret;
+ }
+
+ bl = io_buffer_get_list(ctx, reg.bgid);
+ if (bl && bl->buf_nr_pages)
+ return -EEXIST;
+ if (!bl) {
+ bl = kzalloc(sizeof(*bl), GFP_KERNEL);
+ if (!bl)
+ return -ENOMEM;
+ }
+
+ pages = io_pin_pages(reg.ring_addr,
+ struct_size(br, bufs, reg.ring_entries),
+ &nr_pages);
+ if (IS_ERR(pages)) {
+ kfree(bl);
+ return PTR_ERR(pages);
+ }
+
+ br = page_address(pages[0]);
+ br->head = 0;
+ bl->buf_pages = pages;
+ bl->buf_nr_pages = nr_pages;
+ bl->nr_entries = reg.ring_entries;
+ BUILD_BUG_ON(sizeof(struct io_uring_buf) != 16);
+ bl->buf_per_page = (PAGE_SIZE - sizeof(struct io_uring_buf)) /
+ sizeof(struct io_uring_buf);
+ bl->buf_ring = br;
+ bl->mask = reg.ring_entries - 1;
+ io_buffer_add_list(ctx, bl, reg.bgid);
+ return 0;
+}
+
+static int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+{
+ struct io_uring_buf_reg reg;
+ struct io_buffer_list *bl;
+
+ if (copy_from_user(&reg, arg, sizeof(reg)))
+ return -EFAULT;
+ if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
+ return -EINVAL;
+
+ bl = io_buffer_get_list(ctx, reg.bgid);
+ if (!bl)
+ return -ENOENT;
+ if (!bl->buf_nr_pages)
+ return -EINVAL;
+
+ __io_remove_buffers(ctx, bl, -1U);
+ if (bl->bgid >= BGID_ARRAY) {
+ xa_erase(&ctx->io_bl_xa, bl->bgid);
+ kfree(bl);
+ }
+ return 0;
+}
+
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
void __user *arg, unsigned nr_args)
__releases(ctx->uring_lock)
@@ -12461,6 +12661,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
case IORING_UNREGISTER_RING_FDS:
ret = io_ringfd_unregister(ctx, arg, nr_args);
break;
+ case IORING_REGISTER_PBUF_RING:
+ ret = -EINVAL;
+ if (!arg || nr_args != 1)
+ break;
+ ret = io_register_pbuf_ring(ctx, arg);
+ break;
+ case IORING_UNREGISTER_PBUF_RING:
+ ret = -EINVAL;
+ if (!arg || nr_args != 1)
+ break;
+ ret = io_unregister_pbuf_ring(ctx, arg);
+ break;
default:
ret = -EINVAL;
break;
@@ -12547,6 +12759,7 @@ static int __init io_uring_init(void)
/* ->buf_index is u16 */
BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
BUILD_BUG_ON(BGID_ARRAY * sizeof(struct io_buffer_list) > PAGE_SIZE);
+ BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 16);
/* should fit into one byte */
BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 49d1f3994f8d..8c21068d31a6 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -352,6 +352,10 @@ enum {
IORING_REGISTER_RING_FDS = 20,
IORING_UNREGISTER_RING_FDS = 21,
+ /* register ring based provide buffer group */
+ IORING_REGISTER_PBUF_RING = 22,
+ IORING_UNREGISTER_PBUF_RING = 23,
+
/* this goes last */
IORING_REGISTER_LAST
};
@@ -423,6 +427,30 @@ struct io_uring_restriction {
__u32 resv2[3];
};
+struct io_uring_buf {
+ __u64 addr;
+ __u32 len;
+ __u16 bid;
+ __u16 resv;
+};
+
+struct io_uring_buf_ring {
+ union {
+ __u32 head;
+ struct io_uring_buf pad;
+ };
+ struct io_uring_buf bufs[];
+};
+
+/* argument for IORING_(UN)REGISTER_PBUF_RING */
+struct io_uring_buf_reg {
+ __u64 ring_addr;
+ __u32 ring_entries;
+ __u16 bgid;
+ __u16 pad;
+ __u64 resv[3];
+};
+
/*
* io_uring_restriction->opcode values
*/
--
2.35.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
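As an aside for readers of the new uapi above: below is a rough userspace sketch of registering and filling a ring of provided buffers, assuming the v4 semantics where the application produces entries at ->head and the kernel consumes them at its own private tail. The helper names (buf_ring_register(), buf_ring_add()), ENTRIES and the raw syscall usage are illustrative assumptions, not part of the series.

/*
 * Sketch only: needs headers that contain the IORING_REGISTER_PBUF_RING
 * additions from this series.
 */
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

#define ENTRIES	256	/* must be a power of two */

static struct io_uring_buf_ring *buf_ring_register(int ring_fd, unsigned short bgid)
{
	size_t size = sizeof(struct io_uring_buf_ring) +
		      ENTRIES * sizeof(struct io_uring_buf);
	struct io_uring_buf_reg reg;
	struct io_uring_buf_ring *br;

	/* ring memory must be page aligned; the kernel pins it at registration */
	br = mmap(NULL, size, PROT_READ | PROT_WRITE,
		  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (br == MAP_FAILED)
		return NULL;

	memset(&reg, 0, sizeof(reg));	/* pad and resv[] must be zero */
	reg.ring_addr = (unsigned long) br;
	reg.ring_entries = ENTRIES;
	reg.bgid = bgid;

	if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_PBUF_RING,
		    &reg, 1) < 0) {
		munmap(br, size);
		return NULL;
	}
	return br;
}

/* publish one buffer: fill the slot at head, then advance head with a release */
static void buf_ring_add(struct io_uring_buf_ring *br, void *addr,
			 unsigned int len, unsigned short bid)
{
	unsigned int head = br->head;
	struct io_uring_buf *buf = &br->bufs[head & (ENTRIES - 1)];

	buf->addr = (unsigned long) addr;
	buf->len = len;
	buf->bid = bid;
	__atomic_store_n(&br->head, head + 1, __ATOMIC_RELEASE);
}

Requests would then opt in exactly as with classic provided buffers, via IOSQE_BUFFER_SELECT and the matching buffer group ID, just without a per-buffer IORING_OP_PROVIDE_BUFFERS step.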
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-01 20:56 ` [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly Jens Axboe
@ 2022-05-09 12:06 ` Dylan Yudaken
2022-05-09 12:12 ` Dylan Yudaken
2022-05-09 12:21 ` Jens Axboe
0 siblings, 2 replies; 23+ messages in thread
From: Dylan Yudaken @ 2022-05-09 12:06 UTC (permalink / raw)
To: [email protected], [email protected]; +Cc: [email protected]
On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
> There's no point in having callers provide a kbuf, we're just
> returning
> the address anyway.
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> fs/io_uring.c | 42 ++++++++++++++++++------------------------
> 1 file changed, 18 insertions(+), 24 deletions(-)
>
...
> @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb *req,
> unsigned int issue_flags)
> return -ENOTSOCK;
>
> if (req->flags & REQ_F_BUFFER_SELECT) {
> - kbuf = io_buffer_select(req, &sr->len, sr->bgid,
> issue_flags);
> - if (IS_ERR(kbuf))
> - return PTR_ERR(kbuf);
> - buf = u64_to_user_ptr(kbuf->addr);
> + void __user *buf;
this now shadows the outer buf, and so does not work at all as the buf
value is lost.
A bit surprised this did not show up in any tests.
> +
> + buf = io_buffer_select(req, &sr->len, sr->bgid,
> issue_flags);
> + if (IS_ERR(buf))
> + return PTR_ERR(buf);
> }
>
> ret = import_single_range(READ, buf, sr->len, &iov,
> &msg.msg_iter);
^ permalink raw reply [flat|nested] 23+ messages in thread
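For readers less familiar with C scoping, here is a minimal standalone example (plain C, not kernel code) of why the shadowing pointed out above loses the selected address:

#include <stdio.h>

int main(void)
{
	int buf = 1;			/* outer variable, like the outer buf */

	if (1) {
		int buf;		/* inner declaration shadows the outer buf */

		buf = 2;		/* only the inner buf is assigned */
	}				/* inner buf goes out of scope here */

	printf("%d\n", buf);		/* prints 1: the assignment above is lost */
	return 0;
}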
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-09 12:06 ` Dylan Yudaken
@ 2022-05-09 12:12 ` Dylan Yudaken
2022-05-09 12:28 ` Jens Axboe
2022-05-09 12:21 ` Jens Axboe
1 sibling, 1 reply; 23+ messages in thread
From: Dylan Yudaken @ 2022-05-09 12:12 UTC (permalink / raw)
To: [email protected], [email protected]; +Cc: [email protected]
On Mon, 2022-05-09 at 12:06 +0000, Dylan Yudaken wrote:
> On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
> > There's no point in having callers provide a kbuf, we're just
> > returning
> > the address anyway.
> >
> > Signed-off-by: Jens Axboe <[email protected]>
> > ---
> > fs/io_uring.c | 42 ++++++++++++++++++------------------------
> > 1 file changed, 18 insertions(+), 24 deletions(-)
> >
>
> ...
>
> > @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb *req,
> > unsigned int issue_flags)
> > return -ENOTSOCK;
> >
> > if (req->flags & REQ_F_BUFFER_SELECT) {
> > - kbuf = io_buffer_select(req, &sr->len, sr->bgid,
> > issue_flags);
> > - if (IS_ERR(kbuf))
> > - return PTR_ERR(kbuf);
> > - buf = u64_to_user_ptr(kbuf->addr);
> > + void __user *buf;
>
> this now shadows the outer buf, and so does not work at all as the buf
> value is lost.
> A bit surprised this did not show up in any tests.
>
> > +
> > + buf = io_buffer_select(req, &sr->len, sr->bgid,
> > issue_flags);
> > + if (IS_ERR(buf))
> > + return PTR_ERR(buf);
> > }
> >
> > ret = import_single_range(READ, buf, sr->len, &iov,
> > &msg.msg_iter);
>
The following seems to fix it for me. I can submit it separately if you
like.
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b6d491c9a25f..22699cb359e9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5630,7 +5630,6 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_sr_msg *sr = &req->sr_msg;
struct msghdr msg;
- void __user *buf = sr->buf;
struct socket *sock;
struct iovec iov;
unsigned flags;
@@ -5654,7 +5653,7 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
sr->buf = buf;
}
- ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
+ ret = import_single_range(READ, sr->buf, sr->len, &iov, &msg.msg_iter);
if (unlikely(ret))
goto out_free;
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-09 12:06 ` Dylan Yudaken
2022-05-09 12:12 ` Dylan Yudaken
@ 2022-05-09 12:21 ` Jens Axboe
1 sibling, 0 replies; 23+ messages in thread
From: Jens Axboe @ 2022-05-09 12:21 UTC (permalink / raw)
To: Dylan Yudaken, [email protected]; +Cc: [email protected]
On 5/9/22 6:06 AM, Dylan Yudaken wrote:
> On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
>> There's no point in having callers provide a kbuf, we're just
>> returning
>> the address anyway.
>>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> fs/io_uring.c | 42 ++++++++++++++++++------------------------
>> 1 file changed, 18 insertions(+), 24 deletions(-)
>>
>
> ...
>
>> @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb *req,
>> unsigned int issue_flags)
>> return -ENOTSOCK;
>>
>> if (req->flags & REQ_F_BUFFER_SELECT) {
>> - kbuf = io_buffer_select(req, &sr->len, sr->bgid,
>> issue_flags);
>> - if (IS_ERR(kbuf))
>> - return PTR_ERR(kbuf);
>> - buf = u64_to_user_ptr(kbuf->addr);
>> + void __user *buf;
>
> this now shadows the outer buf, and so does not work at all as the buf
> value is lost. A bit surprised this did not show up in any tests.
Hmm indeed, that is odd! Please do submit your patch separately, thanks.
--
Jens Axboe
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-09 12:12 ` Dylan Yudaken
@ 2022-05-09 12:28 ` Jens Axboe
2022-05-09 12:43 ` Dylan Yudaken
0 siblings, 1 reply; 23+ messages in thread
From: Jens Axboe @ 2022-05-09 12:28 UTC (permalink / raw)
To: Dylan Yudaken, [email protected]; +Cc: [email protected]
On 5/9/22 6:12 AM, Dylan Yudaken wrote:
> On Mon, 2022-05-09 at 12:06 +0000, Dylan Yudaken wrote:
>> On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
>>> There's no point in having callers provide a kbuf, we're just
>>> returning
>>> the address anyway.
>>>
>>> Signed-off-by: Jens Axboe <[email protected]>
>>> ---
>>> fs/io_uring.c | 42 ++++++++++++++++++------------------------
>>> 1 file changed, 18 insertions(+), 24 deletions(-)
>>>
>>
>> ...
>>
>>> @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb *req,
>>> unsigned int issue_flags)
>>> return -ENOTSOCK;
>>>
>>> if (req->flags & REQ_F_BUFFER_SELECT) {
>>> - kbuf = io_buffer_select(req, &sr->len, sr->bgid,
>>> issue_flags);
>>> - if (IS_ERR(kbuf))
>>> - return PTR_ERR(kbuf);
>>> - buf = u64_to_user_ptr(kbuf->addr);
>>> + void __user *buf;
>>
>> this now shadows the outer buf, and so does not work at all as the buf
>> value is lost.
>> A bit surprised this did not show up in any tests.
>>
>>> +
>>> + buf = io_buffer_select(req, &sr->len, sr->bgid,
>>> issue_flags);
>>> + if (IS_ERR(buf))
>>> + return PTR_ERR(buf);
>>> }
>>>
>>> ret = import_single_range(READ, buf, sr->len, &iov,
>>> &msg.msg_iter);
>>
>
> The following seems to fix it for me. I can submit it separately if you
> like.
I think you want something like this:
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 19dd3ba92486..2b87c89d2375 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5599,7 +5599,6 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_sr_msg *sr = &req->sr_msg;
struct msghdr msg;
- void __user *buf = sr->buf;
struct socket *sock;
struct iovec iov;
unsigned flags;
@@ -5620,9 +5619,10 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
buf = io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
if (IS_ERR(buf))
return PTR_ERR(buf);
+ sr->buf = buf;
}
- ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
+ ret = import_single_range(READ, sr->buf, sr->len, &iov, &msg.msg_iter);
if (unlikely(ret))
goto out_free;
--
Jens Axboe
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-09 12:28 ` Jens Axboe
@ 2022-05-09 12:43 ` Dylan Yudaken
2022-05-09 12:46 ` Jens Axboe
0 siblings, 1 reply; 23+ messages in thread
From: Dylan Yudaken @ 2022-05-09 12:43 UTC (permalink / raw)
To: [email protected], [email protected]; +Cc: [email protected]
On Mon, 2022-05-09 at 06:28 -0600, Jens Axboe wrote:
> On 5/9/22 6:12 AM, Dylan Yudaken wrote:
> > On Mon, 2022-05-09 at 12:06 +0000, Dylan Yudaken wrote:
> > > On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
> > > > There's no point in having callers provide a kbuf, we're just
> > > > returning
> > > > the address anyway.
> > > >
> > > > Signed-off-by: Jens Axboe <[email protected]>
> > > > ---
> > > > fs/io_uring.c | 42 ++++++++++++++++++------------------------
> > > > 1 file changed, 18 insertions(+), 24 deletions(-)
> > > >
> > >
> > > ...
> > >
> > > > @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb
> > > > *req,
> > > > unsigned int issue_flags)
> > > > return -ENOTSOCK;
> > > >
> > > > if (req->flags & REQ_F_BUFFER_SELECT) {
> > > > - kbuf = io_buffer_select(req, &sr->len, sr-
> > > > >bgid,
> > > > issue_flags);
> > > > - if (IS_ERR(kbuf))
> > > > - return PTR_ERR(kbuf);
> > > > - buf = u64_to_user_ptr(kbuf->addr);
> > > > + void __user *buf;
> > >
> > > this now shadows the outer buf, and so does not work at all as
> > > the buf
> > > value is lost.
> > > A bit surprised this did not show up in any tests.
> > >
> > > > +
> > > > + buf = io_buffer_select(req, &sr->len, sr->bgid,
> > > > issue_flags);
> > > > + if (IS_ERR(buf))
> > > > + return PTR_ERR(buf);
> > > > }
> > > >
> > > > ret = import_single_range(READ, buf, sr->len, &iov,
> > > > &msg.msg_iter);
> > >
> >
> > The following seems to fix it for me. I can submit it separately if
> > you
> > like.
>
> I think you want something like this:
>
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 19dd3ba92486..2b87c89d2375 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -5599,7 +5599,6 @@ static int io_recv(struct io_kiocb *req,
> unsigned int issue_flags)
> {
> struct io_sr_msg *sr = &req->sr_msg;
> struct msghdr msg;
> - void __user *buf = sr->buf;
> struct socket *sock;
> struct iovec iov;
> unsigned flags;
> @@ -5620,9 +5619,10 @@ static int io_recv(struct io_kiocb *req,
> unsigned int issue_flags)
> buf = io_buffer_select(req, &sr->len, sr->bgid,
> issue_flags);
> if (IS_ERR(buf))
> return PTR_ERR(buf);
> + sr->buf = buf;
this line I think was added later on anyway in "io_uring: never call
io_buffer_select() for a buffer re-select"
> }
>
> - ret = import_single_range(READ, buf, sr->len, &iov,
> &msg.msg_iter);
> + ret = import_single_range(READ, sr->buf, sr->len, &iov,
> &msg.msg_iter);
> if (unlikely(ret))
> goto out_free;
>
>
I'll send a patch now.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly
2022-05-09 12:43 ` Dylan Yudaken
@ 2022-05-09 12:46 ` Jens Axboe
0 siblings, 0 replies; 23+ messages in thread
From: Jens Axboe @ 2022-05-09 12:46 UTC (permalink / raw)
To: Dylan Yudaken, [email protected]; +Cc: [email protected]
On 5/9/22 6:43 AM, Dylan Yudaken wrote:
> On Mon, 2022-05-09 at 06:28 -0600, Jens Axboe wrote:
>> On 5/9/22 6:12 AM, Dylan Yudaken wrote:
>>> On Mon, 2022-05-09 at 12:06 +0000, Dylan Yudaken wrote:
>>>> On Sun, 2022-05-01 at 14:56 -0600, Jens Axboe wrote:
>>>>> There's no point in having callers provide a kbuf, we're just
>>>>> returning
>>>>> the address anyway.
>>>>>
>>>>> Signed-off-by: Jens Axboe <[email protected]>
>>>>> ---
>>>>> fs/io_uring.c | 42 ++++++++++++++++++------------------------
>>>>> 1 file changed, 18 insertions(+), 24 deletions(-)
>>>>>
>>>>
>>>> ...
>>>>
>>>>> @@ -6013,10 +6006,11 @@ static int io_recv(struct io_kiocb
>>>>> *req,
>>>>> unsigned int issue_flags)
>>>>> return -ENOTSOCK;
>>>>>
>>>>> if (req->flags & REQ_F_BUFFER_SELECT) {
>>>>> - kbuf = io_buffer_select(req, &sr->len, sr-
>>>>>> bgid,
>>>>> issue_flags);
>>>>> - if (IS_ERR(kbuf))
>>>>> - return PTR_ERR(kbuf);
>>>>> - buf = u64_to_user_ptr(kbuf->addr);
>>>>> + void __user *buf;
>>>>
>>>> this now shadows the outer buf, and so does not work at all as
>>>> the buf
>>>> value is lost.
>>>> A bit surprised this did not show up in any tests.
>>>>
>>>>> +
>>>>> + buf = io_buffer_select(req, &sr->len, sr->bgid,
>>>>> issue_flags);
>>>>> + if (IS_ERR(buf))
>>>>> + return PTR_ERR(buf);
>>>>> }
>>>>>
>>>>> ret = import_single_range(READ, buf, sr->len, &iov,
>>>>> &msg.msg_iter);
>>>>
>>>
>>> The following seems to fix it for me. I can submit it separately if
>>> you
>>> like.
>>
>> I think you want something like this:
>>
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 19dd3ba92486..2b87c89d2375 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -5599,7 +5599,6 @@ static int io_recv(struct io_kiocb *req,
>> unsigned int issue_flags)
>> {
>> struct io_sr_msg *sr = &req->sr_msg;
>> struct msghdr msg;
>> - void __user *buf = sr->buf;
>> struct socket *sock;
>> struct iovec iov;
>> unsigned flags;
>> @@ -5620,9 +5619,10 @@ static int io_recv(struct io_kiocb *req,
>> unsigned int issue_flags)
>> buf = io_buffer_select(req, &sr->len, sr->bgid,
>> issue_flags);
>> if (IS_ERR(buf))
>> return PTR_ERR(buf);
>> + sr->buf = buf;
>
> this line I think was added later on anyway in "io_uring: never call
> io_buffer_select() for a buffer re-select"
OK good that makes sense for why the end result was ok, but it should be
added here to avoid breakage in the middle.
>> - ret = import_single_range(READ, buf, sr->len, &iov,
>> &msg.msg_iter);
>> + ret = import_single_range(READ, sr->buf, sr->len, &iov,
>> &msg.msg_iter);
>> if (unlikely(ret))
>> goto out_free;
>>
>>
>
> I'll send a patch now.
I decided to just fold in the patch to avoid having a broken point in
the middle.
--
Jens Axboe
^ permalink raw reply [flat|nested] 23+ messages in thread
Thread overview: 23+ messages
2022-05-01 20:56 [PATCHSET v4 0/16] Add support for ring mapped provided buffers Jens Axboe
2022-05-01 20:56 ` [PATCH 01/16] io_uring: kill io_recv_buffer_select() wrapper Jens Axboe
2022-05-01 20:56 ` [PATCH 02/16] io_uring: use 'sr' vs 'req->sr_msg' consistently Jens Axboe
2022-05-01 20:56 ` [PATCH 03/16] io_uring: make io_buffer_select() return the user address directly Jens Axboe
2022-05-09 12:06 ` Dylan Yudaken
2022-05-09 12:12 ` Dylan Yudaken
2022-05-09 12:28 ` Jens Axboe
2022-05-09 12:43 ` Dylan Yudaken
2022-05-09 12:46 ` Jens Axboe
2022-05-09 12:21 ` Jens Axboe
2022-05-01 20:56 ` [PATCH 04/16] io_uring: kill io_rw_buffer_select() wrapper Jens Axboe
2022-05-01 20:56 ` [PATCH 05/16] io_uring: ignore ->buf_index if REQ_F_BUFFER_SELECT isn't set Jens Axboe
2022-05-01 20:56 ` [PATCH 06/16] io_uring: always use req->buf_index for the provided buffer group Jens Axboe
2022-05-01 20:56 ` [PATCH 07/16] io_uring: get rid of hashed provided buffer groups Jens Axboe
2022-05-01 20:56 ` [PATCH 08/16] io_uring: never call io_buffer_select() for a buffer re-select Jens Axboe
2022-05-01 20:56 ` [PATCH 09/16] io_uring: abstract out provided buffer list selection Jens Axboe
2022-05-01 20:56 ` [PATCH 10/16] io_uring: move provided and fixed buffers into the same io_kiocb area Jens Axboe
2022-05-01 20:56 ` [PATCH 11/16] io_uring: move provided buffer state closer to submit state Jens Axboe
2022-05-01 20:56 ` [PATCH 12/16] io_uring: eliminate the need to track provided buffer ID separately Jens Axboe
2022-05-01 20:56 ` [PATCH 13/16] io_uring: don't clear req->kbuf when buffer selection is done Jens Axboe
2022-05-01 20:56 ` [PATCH 14/16] io_uring: add buffer selection support to IORING_OP_NOP Jens Axboe
2022-05-01 20:56 ` [PATCH 15/16] io_uring: add io_pin_pages() helper Jens Axboe
2022-05-01 20:56 ` [PATCH 16/16] io_uring: add support for ring mapped supplied buffers Jens Axboe