* [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-22 2:03 [PATCHSET v2 for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
@ 2024-10-22 2:03 ` Jens Axboe
2024-10-22 2:43 ` Ming Lei
2024-10-22 2:03 ` [PATCH 2/4] io_uring/rw: " Jens Axboe
` (2 subsequent siblings)
3 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 2:03 UTC
To: io-uring; +Cc: Jens Axboe
It's pretty pointless to use io_kiocb as intermediate storage for this,
so split the validity check and the actual usage. The resource node is
assigned upfront at prep time, to prevent it from going away. The actual
import is never called with the ctx->uring_lock held, so grab it for
the import.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/uring_cmd.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 39c3c816ec78..313e2a389174 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -211,11 +211,15 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
struct io_ring_ctx *ctx = req->ctx;
u16 index;
- req->buf_index = READ_ONCE(sqe->buf_index);
- if (unlikely(req->buf_index >= ctx->nr_user_bufs))
+ index = READ_ONCE(sqe->buf_index);
+ if (unlikely(index >= ctx->nr_user_bufs))
return -EFAULT;
- index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
- req->imu = ctx->user_bufs[index];
+ req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
+ /*
+ * Pin the node upfront, prior to io_uring_cmd_import_fixed()
+ * being called. This prevents destruction of the mapped buffer
+ * we'll need at actual import time.
+ */
io_req_set_rsrc_node(req, ctx, 0);
}
ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
@@ -272,8 +276,16 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
{
struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_mapped_ubuf *imu;
+ int ret;
- return io_import_fixed(rw, iter, req->imu, ubuf, len);
+ mutex_lock(&ctx->uring_lock);
+ imu = ctx->user_bufs[req->buf_index];
+ ret = io_import_fixed(rw, iter, imu, ubuf, len);
+ mutex_unlock(&ctx->uring_lock);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
--
2.45.2
* Re: [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-22 2:03 ` [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu Jens Axboe
@ 2024-10-22 2:43 ` Ming Lei
2024-10-22 2:59 ` Jens Axboe
0 siblings, 1 reply; 14+ messages in thread
From: Ming Lei @ 2024-10-22 2:43 UTC
To: Jens Axboe; +Cc: io-uring
On Mon, Oct 21, 2024 at 08:03:20PM -0600, Jens Axboe wrote:
> It's pretty pointless to use io_kiocb as intermediate storage for this,
> so split the validity check and the actual usage. The resource node is
> assigned upfront at prep time, to prevent it from going away. The actual
> import is never called with the ctx->uring_lock held, so grab it for
> the import.
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> io_uring/uring_cmd.c | 22 +++++++++++++++++-----
> 1 file changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index 39c3c816ec78..313e2a389174 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -211,11 +211,15 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
> struct io_ring_ctx *ctx = req->ctx;
> u16 index;
>
> - req->buf_index = READ_ONCE(sqe->buf_index);
> - if (unlikely(req->buf_index >= ctx->nr_user_bufs))
> + index = READ_ONCE(sqe->buf_index);
> + if (unlikely(index >= ctx->nr_user_bufs))
> return -EFAULT;
> - index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
> - req->imu = ctx->user_bufs[index];
> + req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
> + /*
> + * Pin the node upfront, prior to io_uring_cmd_import_fixed()
> + * being called. This prevents destruction of the mapped buffer
> + * we'll need at actual import time.
> + */
> io_req_set_rsrc_node(req, ctx, 0);
> }
> ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
> @@ -272,8 +276,16 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
> struct iov_iter *iter, void *ioucmd)
> {
> struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
> + struct io_ring_ctx *ctx = req->ctx;
> + struct io_mapped_ubuf *imu;
> + int ret;
>
> - return io_import_fixed(rw, iter, req->imu, ubuf, len);
> + mutex_lock(&ctx->uring_lock);
> + imu = ctx->user_bufs[req->buf_index];
> + ret = io_import_fixed(rw, iter, imu, ubuf, len);
> + mutex_unlock(&ctx->uring_lock);
io_uring_cmd_import_fixed is called in nvme ->issue(), and ->uring_lock
may be held already.
thanks,
Ming
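The situation Ming describes, a helper that may be entered with ctx->uring_lock already held or not held, is one io_uring deals with elsewhere via the io_ring_submit_lock()/io_ring_submit_unlock() pair, which only takes the mutex when IO_URING_F_UNLOCKED is set in issue_flags. The following is a user-space sketch of that pattern only, with a pthread mutex standing in for uring_lock; all names here are illustrative stand-ins, not the actual kernel code:

```c
#include <assert.h>
#include <pthread.h>

/* Mirrors the kernel flag name; the value here is arbitrary. */
#define IO_URING_F_UNLOCKED	(1U << 0)

struct ring_ctx {
	pthread_mutex_t uring_lock;
	int nr_user_bufs;
};

/* Model of io_ring_submit_lock(): take uring_lock only when the caller
 * signals, via issue_flags, that it does not already hold it. */
static void ring_submit_lock(struct ring_ctx *ctx, unsigned issue_flags)
{
	if (issue_flags & IO_URING_F_UNLOCKED)
		pthread_mutex_lock(&ctx->uring_lock);
}

static void ring_submit_unlock(struct ring_ctx *ctx, unsigned issue_flags)
{
	if (issue_flags & IO_URING_F_UNLOCKED)
		pthread_mutex_unlock(&ctx->uring_lock);
}

/* An import helper written this way can be called both from ->issue()
 * with the lock already held (no IO_URING_F_UNLOCKED) and from io-wq
 * with the lock not held, without deadlocking on an unconditional
 * mutex_lock() as the v2 hunk above would. */
static int import_fixed(struct ring_ctx *ctx, unsigned issue_flags, int index)
{
	int ret;

	ring_submit_lock(ctx, issue_flags);
	ret = (index < ctx->nr_user_bufs) ? 0 : -1;	/* stand-in for the real lookup */
	ring_submit_unlock(ctx, issue_flags);
	return ret;
}
```

A caller that already holds the lock passes issue_flags with the bit clear; an io-wq style caller sets IO_URING_F_UNLOCKED and lets the helper take and drop the mutex itself.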
* Re: [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-22 2:43 ` Ming Lei
@ 2024-10-22 2:59 ` Jens Axboe
[not found] ` <CGME20241022084205epcas5p190eb2ba790815a6ac211cb4e3b6a0fe4@epcas5p1.samsung.com>
0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 2:59 UTC
To: Ming Lei; +Cc: io-uring
On 10/21/24 8:43 PM, Ming Lei wrote:
> On Mon, Oct 21, 2024 at 08:03:20PM -0600, Jens Axboe wrote:
>> It's pretty pointless to use io_kiocb as intermediate storage for this,
>> so split the validity check and the actual usage. The resource node is
>> assigned upfront at prep time, to prevent it from going away. The actual
>> import is never called with the ctx->uring_lock held, so grab it for
>> the import.
>>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> io_uring/uring_cmd.c | 22 +++++++++++++++++-----
>> 1 file changed, 17 insertions(+), 5 deletions(-)
>>
>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
>> index 39c3c816ec78..313e2a389174 100644
>> --- a/io_uring/uring_cmd.c
>> +++ b/io_uring/uring_cmd.c
>> @@ -211,11 +211,15 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>> struct io_ring_ctx *ctx = req->ctx;
>> u16 index;
>>
>> - req->buf_index = READ_ONCE(sqe->buf_index);
>> - if (unlikely(req->buf_index >= ctx->nr_user_bufs))
>> + index = READ_ONCE(sqe->buf_index);
>> + if (unlikely(index >= ctx->nr_user_bufs))
>> return -EFAULT;
>> - index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
>> - req->imu = ctx->user_bufs[index];
>> + req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
>> + /*
>> + * Pin the node upfront, prior to io_uring_cmd_import_fixed()
>> + * being called. This prevents destruction of the mapped buffer
>> + * we'll need at actual import time.
>> + */
>> io_req_set_rsrc_node(req, ctx, 0);
>> }
>> ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
>> @@ -272,8 +276,16 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
>> struct iov_iter *iter, void *ioucmd)
>> {
>> struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
>> + struct io_ring_ctx *ctx = req->ctx;
>> + struct io_mapped_ubuf *imu;
>> + int ret;
>>
>> - return io_import_fixed(rw, iter, req->imu, ubuf, len);
>> + mutex_lock(&ctx->uring_lock);
>> + imu = ctx->user_bufs[req->buf_index];
>> + ret = io_import_fixed(rw, iter, imu, ubuf, len);
>> + mutex_unlock(&ctx->uring_lock);
>
> io_uring_cmd_import_fixed is called in nvme ->issue(), and ->uring_lock
> may be held already.
Gah, indeed; in fact it should always be held, unless the request is
forcefully punted to io-wq. I'll sort that out, thanks. And it looks like
we have zero tests for uring_cmd + fixed buffers :-(
--
Jens Axboe
* [PATCH 2/4] io_uring/rw: get rid of using req->imu
2024-10-22 2:03 [PATCHSET v2 for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
2024-10-22 2:03 ` [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu Jens Axboe
@ 2024-10-22 2:03 ` Jens Axboe
2024-10-22 2:03 ` [PATCH 3/4] io_uring/net: move send zc fixed buffer import to issue path Jens Axboe
2024-10-22 2:03 ` [PATCH 4/4] io_uring: kill 'imu' from struct io_kiocb Jens Axboe
3 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 2:03 UTC
To: io-uring; +Cc: Jens Axboe
It's assigned in the same function in which it's used, so get rid of
it. A local variable will do just fine.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/rw.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 80ae3c2ebb70..c633365aa37d 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -330,6 +330,7 @@ static int io_prep_rw_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe
{
struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
struct io_ring_ctx *ctx = req->ctx;
+ struct io_mapped_ubuf *imu;
struct io_async_rw *io;
u16 index;
int ret;
@@ -341,11 +342,11 @@ static int io_prep_rw_fixed(struct io_kiocb *req, const struct io_uring_sqe *sqe
if (unlikely(req->buf_index >= ctx->nr_user_bufs))
return -EFAULT;
index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
- req->imu = ctx->user_bufs[index];
+ imu = ctx->user_bufs[index];
io_req_set_rsrc_node(req, ctx, 0);
io = req->async_data;
- ret = io_import_fixed(ddir, &io->iter, req->imu, rw->addr, rw->len);
+ ret = io_import_fixed(ddir, &io->iter, imu, rw->addr, rw->len);
iov_iter_save_state(&io->iter, &io->iter_state);
return ret;
}
--
2.45.2
* [PATCH 3/4] io_uring/net: move send zc fixed buffer import to issue path
2024-10-22 2:03 [PATCHSET v2 for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
2024-10-22 2:03 ` [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu Jens Axboe
2024-10-22 2:03 ` [PATCH 2/4] io_uring/rw: " Jens Axboe
@ 2024-10-22 2:03 ` Jens Axboe
2024-10-22 2:03 ` [PATCH 4/4] io_uring: kill 'imu' from struct io_kiocb Jens Axboe
3 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 2:03 UTC
To: io-uring; +Cc: Jens Axboe
Let's keep it close to the actual import; there's no reason to do this
on the prep side. With that, we can drop one of the branches checking
whether IORING_RECVSEND_FIXED_BUF is set.
As a side-effect, get rid of req->imu usage.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/net.c | 29 ++++++++++++++++-------------
1 file changed, 16 insertions(+), 13 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 18507658a921..a5b875a40bbf 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -76,6 +76,7 @@ struct io_sr_msg {
/* initialised and used only by !msg send variants */
u16 addr_len;
u16 buf_group;
+ u16 buf_index;
void __user *addr;
void __user *msg_control;
/* used only for send zerocopy */
@@ -1254,16 +1255,6 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
}
}
- if (zc->flags & IORING_RECVSEND_FIXED_BUF) {
- unsigned idx = READ_ONCE(sqe->buf_index);
-
- if (unlikely(idx >= ctx->nr_user_bufs))
- return -EFAULT;
- idx = array_index_nospec(idx, ctx->nr_user_bufs);
- req->imu = READ_ONCE(ctx->user_bufs[idx]);
- io_req_set_rsrc_node(notif, ctx, 0);
- }
-
if (req->opcode == IORING_OP_SEND_ZC) {
if (READ_ONCE(sqe->__pad3[0]))
return -EINVAL;
@@ -1279,6 +1270,7 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
zc->buf = u64_to_user_ptr(READ_ONCE(sqe->addr));
zc->len = READ_ONCE(sqe->len);
zc->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL | MSG_ZEROCOPY;
+ zc->buf_index = READ_ONCE(sqe->buf_index);
if (zc->msg_flags & MSG_DONTWAIT)
req->flags |= REQ_F_NOWAIT;
@@ -1339,13 +1331,24 @@ static int io_sg_from_iter(struct sk_buff *skb,
return ret;
}
-static int io_send_zc_import(struct io_kiocb *req, struct io_async_msghdr *kmsg)
+static int io_send_zc_import(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+ struct io_async_msghdr *kmsg = req->async_data;
int ret;
if (sr->flags & IORING_RECVSEND_FIXED_BUF) {
- ret = io_import_fixed(ITER_SOURCE, &kmsg->msg.msg_iter, req->imu,
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_mapped_ubuf *imu;
+ int idx;
+
+ if (unlikely(sr->buf_index >= ctx->nr_user_bufs))
+ return -EFAULT;
+ idx = array_index_nospec(sr->buf_index, ctx->nr_user_bufs);
+ imu = READ_ONCE(ctx->user_bufs[idx]);
+ io_req_set_rsrc_node(sr->notif, ctx, issue_flags);
+
+ ret = io_import_fixed(ITER_SOURCE, &kmsg->msg.msg_iter, imu,
(u64)(uintptr_t)sr->buf, sr->len);
if (unlikely(ret))
return ret;
@@ -1382,7 +1385,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
return -EAGAIN;
if (!zc->done_io) {
- ret = io_send_zc_import(req, kmsg);
+ ret = io_send_zc_import(req, issue_flags);
if (unlikely(ret))
return ret;
}
--
2.45.2
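The shape of this change is worth spelling out: prep now only copies the raw buf_index out of the SQE into the command's own state, and validation plus resolution happen together on the issue path. A minimal user-space sketch of that split, using hypothetical stand-in types rather than the kernel structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for io_sr_msg and the ring context */
struct sr_msg {
	unsigned short buf_index;	/* raw value copied from the SQE */
};

struct ring {
	const int *user_bufs;
	size_t nr_user_bufs;
};

/* prep side: no table access at all, just record what the SQE asked for */
static void prep(struct sr_msg *sr, unsigned short sqe_buf_index)
{
	sr->buf_index = sqe_buf_index;
}

/* issue side: validate against the table and resolve, all in one place */
static int issue_import(const struct ring *r, const struct sr_msg *sr, int *out)
{
	if (sr->buf_index >= r->nr_user_bufs)
		return -1;		/* -EFAULT in the kernel */
	*out = r->user_bufs[sr->buf_index];
	return 0;
}
```

Because the table is only consulted at issue time, the lookup naturally runs under whatever locking the issue path already provides, which is the point of the move.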
* [PATCH 4/4] io_uring: kill 'imu' from struct io_kiocb
2024-10-22 2:03 [PATCHSET v2 for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
` (2 preceding siblings ...)
2024-10-22 2:03 ` [PATCH 3/4] io_uring/net: move send zc fixed buffer import to issue path Jens Axboe
@ 2024-10-22 2:03 ` Jens Axboe
3 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 2:03 UTC
To: io-uring; +Cc: Jens Axboe
It's no longer being used, remove it.
Signed-off-by: Jens Axboe <[email protected]>
---
include/linux/io_uring_types.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 391087144666..6d3ee71bd832 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -613,9 +613,6 @@ struct io_kiocb {
struct task_struct *task;
union {
- /* store used ubuf, so we can prevent reloading */
- struct io_mapped_ubuf *imu;
-
/* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
struct io_buffer *kbuf;
--
2.45.2
* [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-22 13:32 [PATCHSET v3 for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
@ 2024-10-22 13:32 ` Jens Axboe
0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2024-10-22 13:32 UTC
To: io-uring; +Cc: Jens Axboe
It's pretty pointless to use io_kiocb as intermediate storage for this,
so split the validity check and the actual usage. The resource node is
assigned upfront at prep time, to prevent it from going away. The actual
import is never called with the ctx->uring_lock held, so grab it for
the import.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/uring_cmd.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 39c3c816ec78..58d0b817d6ea 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -211,11 +211,15 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
struct io_ring_ctx *ctx = req->ctx;
u16 index;
- req->buf_index = READ_ONCE(sqe->buf_index);
- if (unlikely(req->buf_index >= ctx->nr_user_bufs))
+ index = READ_ONCE(sqe->buf_index);
+ if (unlikely(index >= ctx->nr_user_bufs))
return -EFAULT;
- index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
- req->imu = ctx->user_bufs[index];
+ req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
+ /*
+ * Pin the node upfront, prior to io_uring_cmd_import_fixed()
+ * being called. This prevents destruction of the mapped buffer
+ * we'll need at actual import time.
+ */
io_req_set_rsrc_node(req, ctx, 0);
}
ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
@@ -272,8 +276,17 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
{
struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+ struct io_ring_ctx *ctx = req->ctx;
+
+ /* Must have had rsrc_node assigned at prep time */
+ if (req->rsrc_node) {
+ struct io_mapped_ubuf *imu;
+
+ imu = READ_ONCE(ctx->user_bufs[req->buf_index]);
+ return io_import_fixed(rw, iter, imu, ubuf, len);
+ }
- return io_import_fixed(rw, iter, req->imu, ubuf, len);
+ return -EFAULT;
}
EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
--
2.45.2
* [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-18 18:38 [PATCHSET for-next 0/4] Get rid of io_kiocb->imu Jens Axboe
@ 2024-10-18 18:38 ` Jens Axboe
2024-10-18 19:34 ` Pavel Begunkov
0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2024-10-18 18:38 UTC
To: io-uring; +Cc: Jens Axboe
It's pretty pointless to use io_kiocb as intermediate storage for this,
so split the validity check and the actual usage.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/uring_cmd.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 39c3c816ec78..cc8bb5550ff5 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -211,11 +211,10 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
struct io_ring_ctx *ctx = req->ctx;
u16 index;
- req->buf_index = READ_ONCE(sqe->buf_index);
+ index = READ_ONCE(sqe->buf_index);
+ req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
if (unlikely(req->buf_index >= ctx->nr_user_bufs))
return -EFAULT;
- index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
- req->imu = ctx->user_bufs[index];
io_req_set_rsrc_node(req, ctx, 0);
}
ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
@@ -272,8 +271,10 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
{
struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+ struct io_mapped_ubuf *imu;
- return io_import_fixed(rw, iter, req->imu, ubuf, len);
+ imu = req->ctx->user_bufs[req->buf_index];
+ return io_import_fixed(rw, iter, imu, ubuf, len);
}
EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
--
2.45.2
* Re: [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-18 18:38 ` [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu Jens Axboe
@ 2024-10-18 19:34 ` Pavel Begunkov
2024-10-19 21:56 ` Jens Axboe
0 siblings, 1 reply; 14+ messages in thread
From: Pavel Begunkov @ 2024-10-18 19:34 UTC
To: Jens Axboe, io-uring
On 10/18/24 19:38, Jens Axboe wrote:
> It's pretty pointless to use io_kiocb as intermediate storage for this,
> so split the validity check and the actual usage.
The table is uring_lock protected; if we don't resolve in advance,
we need to take care of locking when importing.
Another concern is adding a gap between setting a rsrc node and looking
up a buffer. That should be fine, but it's worth mentioning that when
you grab a rsrc node it also prevents destruction of all objects
created after this point.
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> io_uring/uring_cmd.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index 39c3c816ec78..cc8bb5550ff5 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -211,11 +211,10 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
> struct io_ring_ctx *ctx = req->ctx;
> u16 index;
>
> - req->buf_index = READ_ONCE(sqe->buf_index);
> + index = READ_ONCE(sqe->buf_index);
> + req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
> if (unlikely(req->buf_index >= ctx->nr_user_bufs))
> return -EFAULT;
The rsrc node should hold off the table destruction, but it feels like
it would be better to move this to where the importing happens.
> - index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
> - req->imu = ctx->user_bufs[index];
> io_req_set_rsrc_node(req, ctx, 0);
> }
> ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
> @@ -272,8 +271,10 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
> struct iov_iter *iter, void *ioucmd)
> {
> struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
> + struct io_mapped_ubuf *imu;
>
> - return io_import_fixed(rw, iter, req->imu, ubuf, len);
> + imu = req->ctx->user_bufs[req->buf_index];
> + return io_import_fixed(rw, iter, imu, ubuf, len);
> }
> EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
>
--
Pavel Begunkov
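Pavel's point about pinning can be made concrete with a toy reference count: the rsrc node grabbed at prep keeps the buffer table alive, so a lookup deferred to import time still sees valid entries even if the application has unregistered the table in between. This is a deliberately simplified model; the kernel uses percpu refs and deferred teardown, not a single counter:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model: a buffer table stays valid for as long as at least one
 * reference on its rsrc node exists. The table itself holds one ref;
 * each in-flight request that pinned the node holds another. */
struct rsrc_node {
	int refs;
	int *user_bufs;		/* the "mapped buffers" the node keeps alive */
};

static struct rsrc_node *node_get(struct rsrc_node *node)
{
	node->refs++;
	return node;
}

static void node_put(struct rsrc_node *node)
{
	if (--node->refs == 0) {
		/* last reference gone: only now is teardown safe */
		free(node->user_bufs);
		node->user_bufs = NULL;
	}
}
```

The sequence of interest: the request pins the node at prep, the application unregisters the table (dropping the table's own ref), and the buffers nonetheless survive until the request completes and drops its pin.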
* Re: [PATCH 1/4] io_uring/uring_cmd: get rid of using req->imu
2024-10-18 19:34 ` Pavel Begunkov
@ 2024-10-19 21:56 ` Jens Axboe
0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2024-10-19 21:56 UTC
To: Pavel Begunkov, io-uring
On 10/18/24 1:34 PM, Pavel Begunkov wrote:
> On 10/18/24 19:38, Jens Axboe wrote:
>> It's pretty pointless to use io_kiocb as intermediate storage for this,
>> so split the validity check and the actual usage.
>
> The table is uring_lock protected, if we don't resolve in advance
> we should take care of locking when importing.
>
> Another concern is adding a gap b/w setting a rsrc node and looking
> up a buffer. That should be fine, but worth mentioning that when
> you grab a rsrc node it also prevent destruction of all objects that
> are created after this point.
Yeah, the last part should be fine; the first one surely is not! I also
noticed that the check for too large an index now happens after the
array_index_nospec() call, which is also an issue.
I'll spin a v2. We should just put it all in one place.
--
Jens Axboe
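The ordering problem is that array_index_nospec() only clamps an index that has already been bounds-checked; run it first, as the v1 hunk does, and an out-of-range index gets silently clamped and the check after it can never fire, so a bad index uses slot 0 instead of returning -EFAULT. A user-space sketch of the mask-based clamp and the correct check-then-clamp order (a simplified model of the helper in <linux/nospec.h>, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the kernel's array_index_mask_nospec(): all-ones
 * when index < size, all-zeroes otherwise, computed without a branch
 * that the CPU could mispredict and speculate past. */
static size_t index_mask(size_t index, size_t size)
{
	return (size_t)((long)(index - size) >> (sizeof(long) * 8 - 1));
}

/* Clamp an already-validated index so speculative execution cannot use
 * an out-of-bounds value. It does not validate: garbage in, zero out. */
static size_t index_nospec(size_t index, size_t size)
{
	return index & index_mask(index, size);
}

/* The order the v2/v3 patches use: bounds check first, then clamp,
 * then use. */
static int lookup(const int *bufs, size_t nr, size_t index, int *out)
{
	if (index >= nr)
		return -1;		/* -EFAULT in the kernel code */
	index = index_nospec(index, nr);
	*out = bufs[index];
	return 0;
}
```

Swapping the first two statements of lookup() reproduces the v1 bug: the clamp maps every out-of-range index to 0, so the subsequent bounds check always passes.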