* [PATCH v6 1/3] io_uring: use an enumeration for io_uring_register(2) opcodes
2020-08-27 14:58 [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Stefano Garzarella
@ 2020-08-27 14:58 ` Stefano Garzarella
2020-08-27 14:58 ` [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode Stefano Garzarella
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Stefano Garzarella @ 2020-08-27 14:58 UTC (permalink / raw)
To: Jens Axboe
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
The enumeration allows us to keep track of the last
io_uring_register(2) opcode available.
Behaviour and opcode names don't change.
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
v5:
- explicitly assign enum values since this is UAPI [Kees]
---
include/uapi/linux/io_uring.h | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index d65fde732518..5f12ae6a415c 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -255,17 +255,22 @@ struct io_uring_params {
/*
* io_uring_register(2) opcodes and arguments
*/
-#define IORING_REGISTER_BUFFERS 0
-#define IORING_UNREGISTER_BUFFERS 1
-#define IORING_REGISTER_FILES 2
-#define IORING_UNREGISTER_FILES 3
-#define IORING_REGISTER_EVENTFD 4
-#define IORING_UNREGISTER_EVENTFD 5
-#define IORING_REGISTER_FILES_UPDATE 6
-#define IORING_REGISTER_EVENTFD_ASYNC 7
-#define IORING_REGISTER_PROBE 8
-#define IORING_REGISTER_PERSONALITY 9
-#define IORING_UNREGISTER_PERSONALITY 10
+enum {
+ IORING_REGISTER_BUFFERS = 0,
+ IORING_UNREGISTER_BUFFERS = 1,
+ IORING_REGISTER_FILES = 2,
+ IORING_UNREGISTER_FILES = 3,
+ IORING_REGISTER_EVENTFD = 4,
+ IORING_UNREGISTER_EVENTFD = 5,
+ IORING_REGISTER_FILES_UPDATE = 6,
+ IORING_REGISTER_EVENTFD_ASYNC = 7,
+ IORING_REGISTER_PROBE = 8,
+ IORING_REGISTER_PERSONALITY = 9,
+ IORING_UNREGISTER_PERSONALITY = 10,
+
+ /* this goes last */
+ IORING_REGISTER_LAST
+};
struct io_uring_files_update {
__u32 offset;
--
2.26.2
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode
2020-08-27 14:58 [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Stefano Garzarella
2020-08-27 14:58 ` [PATCH v6 1/3] io_uring: use an enumeration for io_uring_register(2) opcodes Stefano Garzarella
@ 2020-08-27 14:58 ` Stefano Garzarella
2021-01-03 14:26 ` Daurnimator
2020-08-27 14:58 ` [PATCH v6 3/3] io_uring: allow disabling rings during the creation Stefano Garzarella
2020-08-28 3:01 ` [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Jens Axboe
3 siblings, 1 reply; 11+ messages in thread
From: Stefano Garzarella @ 2020-08-27 14:58 UTC (permalink / raw)
To: Jens Axboe
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
The new io_uring_register(2) IOURING_REGISTER_RESTRICTIONS opcode
permanently installs a feature allowlist on an io_ring_ctx.
The io_ring_ctx can then be passed to untrusted code with the
knowledge that only operations present in the allowlist can be
executed.
The allowlist approach ensures that new features added to io_uring
do not accidentally become available when an existing application
is launched on a newer kernel version.
Currently it is possible to restrict sqe opcodes, sqe flags, and
register opcodes.
The IOURING_REGISTER_RESTRICTIONS call can only be made once.
Afterwards it is not possible to change the restrictions, which
prevents untrusted code from removing them.
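As a concrete illustration, a user-space program might build the restriction array like this (a hypothetical sketch: the struct layout and enum values are copied from the patch below, and the chosen opcodes are the caller's; a real program would take the definitions from <linux/io_uring.h>):

```c
/* Sketch: building an allowlist for IORING_REGISTER_RESTRICTIONS.
 * Struct layout and enum values are copied from this patch; a real
 * program would get them from <linux/io_uring.h>. */
#include <stdint.h>
#include <string.h>

struct io_uring_restriction {
	uint16_t opcode;
	union {
		uint8_t register_op; /* IORING_RESTRICTION_REGISTER_OP */
		uint8_t sqe_op;      /* IORING_RESTRICTION_SQE_OP */
		uint8_t sqe_flags;   /* IORING_RESTRICTION_SQE_FLAGS_* */
	};
	uint8_t  resv;
	uint32_t resv2[3];
};

enum {
	IORING_RESTRICTION_REGISTER_OP        = 0,
	IORING_RESTRICTION_SQE_OP             = 1,
	IORING_RESTRICTION_SQE_FLAGS_ALLOWED  = 2,
	IORING_RESTRICTION_SQE_FLAGS_REQUIRED = 3,
};

/* Fill 'res' (at least 3 entries) with an allowlist permitting one
 * io_uring_register(2) opcode and two sqe opcodes; returns the entry
 * count to pass as nr_args to io_uring_register(2). The opcode values
 * are the caller's choice (e.g. IORING_OP_READV). */
static int build_allowlist(struct io_uring_restriction *res,
			   uint8_t register_op, uint8_t sqe_op1,
			   uint8_t sqe_op2)
{
	int n = 0;

	memset(res, 0, 3 * sizeof(*res));
	res[n].opcode = IORING_RESTRICTION_REGISTER_OP;
	res[n++].register_op = register_op;
	res[n].opcode = IORING_RESTRICTION_SQE_OP;
	res[n++].sqe_op = sqe_op1;
	res[n].opcode = IORING_RESTRICTION_SQE_OP;
	res[n++].sqe_op = sqe_op2;
	return n;
}
```

The array would then be handed to the kernel once, e.g. `io_uring_register(ring_fd, IORING_REGISTER_RESTRICTIONS, res, n)`; per the patch, a second registration attempt fails with -EBUSY.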
Suggested-by: Stefan Hajnoczi <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
v6:
- moved restriction checks in a function [Jens]
- changed ret value handling in io_register_restrictions() [Jens]
v5:
- explicitly assigned enum values [Kees]
- replaced kmalloc/copy_from_user with memdup_user [kernel test robot]
v3:
- added IORING_RESTRICTION_SQE_FLAGS_ALLOWED and
IORING_RESTRICTION_SQE_FLAGS_REQUIRED
- removed IORING_RESTRICTION_FIXED_FILES_ONLY
RFC v2:
- added 'restricted' flag in the ctx [Jens]
- added IORING_MAX_RESTRICTIONS define
- returned EBUSY instead of EINVAL when restrictions are already
registered
- reset restrictions if an error happened during the registration
---
fs/io_uring.c | 124 +++++++++++++++++++++++++++++++++-
include/uapi/linux/io_uring.h | 31 +++++++++
2 files changed, 154 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6df08287c59e..5f62997c147b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -98,6 +98,8 @@
#define IORING_MAX_FILES_TABLE (1U << IORING_FILE_TABLE_SHIFT)
#define IORING_FILE_TABLE_MASK (IORING_MAX_FILES_TABLE - 1)
#define IORING_MAX_FIXED_FILES (64 * IORING_MAX_FILES_TABLE)
+#define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \
+ IORING_REGISTER_LAST + IORING_OP_LAST)
struct io_uring {
u32 head ____cacheline_aligned_in_smp;
@@ -219,6 +221,13 @@ struct io_buffer {
__u16 bid;
};
+struct io_restriction {
+ DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+ DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+ u8 sqe_flags_allowed;
+ u8 sqe_flags_required;
+};
+
struct io_ring_ctx {
struct {
struct percpu_ref refs;
@@ -231,6 +240,7 @@ struct io_ring_ctx {
unsigned int cq_overflow_flushed: 1;
unsigned int drain_next: 1;
unsigned int eventfd_async: 1;
+ unsigned int restricted: 1;
/*
* Ring buffer of indices into array of io_uring_sqe, which is
@@ -338,6 +348,7 @@ struct io_ring_ctx {
struct llist_head file_put_llist;
struct work_struct exit_work;
+ struct io_restriction restrictions;
};
/*
@@ -6381,6 +6392,32 @@ static inline void io_consume_sqe(struct io_ring_ctx *ctx)
ctx->cached_sq_head++;
}
+/*
+ * Check SQE restrictions (opcode and flags).
+ *
+ * Returns 'true' if SQE is allowed, 'false' otherwise.
+ */
+static inline bool io_check_restriction(struct io_ring_ctx *ctx,
+ struct io_kiocb *req,
+ unsigned int sqe_flags)
+{
+ if (!ctx->restricted)
+ return true;
+
+ if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
+ return false;
+
+ if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
+ ctx->restrictions.sqe_flags_required)
+ return false;
+
+ if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
+ ctx->restrictions.sqe_flags_required))
+ return false;
+
+ return true;
+}
+
#define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \
IOSQE_IO_HARDLINK | IOSQE_ASYNC | \
IOSQE_BUFFER_SELECT)
@@ -6414,6 +6451,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
return -EINVAL;
+ if (unlikely(!io_check_restriction(ctx, req, sqe_flags)))
+ return -EACCES;
+
if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
!io_op_defs[req->opcode].buffer_select)
return -EOPNOTSUPP;
@@ -8714,6 +8754,72 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
return -EINVAL;
}
+static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+ unsigned int nr_args)
+{
+ struct io_uring_restriction *res;
+ size_t size;
+ int i, ret;
+
+ /* We allow only a single restrictions registration */
+ if (ctx->restricted)
+ return -EBUSY;
+
+ if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
+ return -EINVAL;
+
+ size = array_size(nr_args, sizeof(*res));
+ if (size == SIZE_MAX)
+ return -EOVERFLOW;
+
+ res = memdup_user(arg, size);
+ if (IS_ERR(res))
+ return PTR_ERR(res);
+
+ ret = 0;
+
+ for (i = 0; i < nr_args; i++) {
+ switch (res[i].opcode) {
+ case IORING_RESTRICTION_REGISTER_OP:
+ if (res[i].register_op >= IORING_REGISTER_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ __set_bit(res[i].register_op,
+ ctx->restrictions.register_op);
+ break;
+ case IORING_RESTRICTION_SQE_OP:
+ if (res[i].sqe_op >= IORING_OP_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+ break;
+ case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
+ ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
+ break;
+ case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
+ ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
+ break;
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+out:
+ /* Reset all restrictions if an error happened */
+ if (ret != 0)
+ memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
+ else
+ ctx->restricted = 1;
+
+ kfree(res);
+ return ret;
+}
+
static bool io_register_op_must_quiesce(int op)
{
switch (op) {
@@ -8760,6 +8866,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
if (ret) {
percpu_ref_resurrect(&ctx->refs);
ret = -EINTR;
+ goto out_quiesce;
+ }
+ }
+
+ if (ctx->restricted) {
+ if (opcode >= IORING_REGISTER_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (!test_bit(opcode, ctx->restrictions.register_op)) {
+ ret = -EACCES;
goto out;
}
}
@@ -8823,15 +8941,19 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
break;
ret = io_unregister_personality(ctx, nr_args);
break;
+ case IORING_REGISTER_RESTRICTIONS:
+ ret = io_register_restrictions(ctx, arg, nr_args);
+ break;
default:
ret = -EINVAL;
break;
}
+out:
if (io_register_op_must_quiesce(opcode)) {
/* bring the ctx back to life */
percpu_ref_reinit(&ctx->refs);
-out:
+out_quiesce:
reinit_completion(&ctx->ref_comp);
}
return ret;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 5f12ae6a415c..6e7f2e5e917b 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -267,6 +267,7 @@ enum {
IORING_REGISTER_PROBE = 8,
IORING_REGISTER_PERSONALITY = 9,
IORING_UNREGISTER_PERSONALITY = 10,
+ IORING_REGISTER_RESTRICTIONS = 11,
/* this goes last */
IORING_REGISTER_LAST
@@ -295,4 +296,34 @@ struct io_uring_probe {
struct io_uring_probe_op ops[0];
};
+struct io_uring_restriction {
+ __u16 opcode;
+ union {
+ __u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
+ __u8 sqe_op; /* IORING_RESTRICTION_SQE_OP */
+ __u8 sqe_flags; /* IORING_RESTRICTION_SQE_FLAGS_* */
+ };
+ __u8 resv;
+ __u32 resv2[3];
+};
+
+/*
+ * io_uring_restriction->opcode values
+ */
+enum {
+ /* Allow an io_uring_register(2) opcode */
+ IORING_RESTRICTION_REGISTER_OP = 0,
+
+ /* Allow an sqe opcode */
+ IORING_RESTRICTION_SQE_OP = 1,
+
+ /* Allow sqe flags */
+ IORING_RESTRICTION_SQE_FLAGS_ALLOWED = 2,
+
+ /* Require sqe flags (these flags must be set on each submission) */
+ IORING_RESTRICTION_SQE_FLAGS_REQUIRED = 3,
+
+ IORING_RESTRICTION_LAST
+};
+
#endif
--
2.26.2
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode
2020-08-27 14:58 ` [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode Stefano Garzarella
@ 2021-01-03 14:26 ` Daurnimator
2021-01-07 8:39 ` Stefano Garzarella
0 siblings, 1 reply; 11+ messages in thread
From: Daurnimator @ 2021-01-03 14:26 UTC (permalink / raw)
To: Stefano Garzarella
Cc: Jens Axboe, Kernel Hardening, Christian Brauner, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On Fri, 28 Aug 2020 at 00:59, Stefano Garzarella <[email protected]> wrote:
> + __u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
Can you confirm that this intentionally limited the future range of
IORING_REGISTER opcodes to 0-255?
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode
2021-01-03 14:26 ` Daurnimator
@ 2021-01-07 8:39 ` Stefano Garzarella
0 siblings, 0 replies; 11+ messages in thread
From: Stefano Garzarella @ 2021-01-07 8:39 UTC (permalink / raw)
To: Daurnimator
Cc: Jens Axboe, Kernel Hardening, Christian Brauner, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On Mon, Jan 04, 2021 at 01:26:41AM +1100, Daurnimator wrote:
>On Fri, 28 Aug 2020 at 00:59, Stefano Garzarella <[email protected]> wrote:
>> + __u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
>
>Can you confirm that this intentionally limited the future range of
>IORING_REGISTER opcodes to 0-255?
>
It was based on io_uring_probe, so we used u8 for opcodes, but we have
room to extend it in the future.
So, for now, this allows registering restrictions only for
IORING_REGISTER opcodes up to 255.
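In other words, the cap follows directly from the field width; a compile-time check makes this explicit (struct layout copied from the patch, normally taken from <linux/io_uring.h>):

```c
/* The register_op field is a __u8, so a restriction can only name
 * register opcodes 0..255. Layout copied from the patch. */
#include <stdint.h>

struct io_uring_restriction {
	uint16_t opcode;
	union {
		uint8_t register_op;
		uint8_t sqe_op;
		uint8_t sqe_flags;
	};
	uint8_t  resv;
	uint32_t resv2[3];
};

/* One byte: values 0..255 only. */
_Static_assert(sizeof(((struct io_uring_restriction *)0)->register_op) == 1,
	       "register_op is a single byte");
```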
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v6 3/3] io_uring: allow disabling rings during the creation
2020-08-27 14:58 [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Stefano Garzarella
2020-08-27 14:58 ` [PATCH v6 1/3] io_uring: use an enumeration for io_uring_register(2) opcodes Stefano Garzarella
2020-08-27 14:58 ` [PATCH v6 2/3] io_uring: add IOURING_REGISTER_RESTRICTIONS opcode Stefano Garzarella
@ 2020-08-27 14:58 ` Stefano Garzarella
2020-09-08 13:44 ` Stefano Garzarella
2020-08-28 3:01 ` [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Jens Axboe
3 siblings, 1 reply; 11+ messages in thread
From: Stefano Garzarella @ 2020-08-27 14:58 UTC (permalink / raw)
To: Jens Axboe
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
This patch adds a new IORING_SETUP_R_DISABLED flag to start the
rings disabled, allowing the user to register restrictions,
buffers, and files before starting to process SQEs.
When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
the SQPOLL kthread is not started.
Restrictions registration is allowed only while the rings are
disabled, to prevent concurrency issues while processing SQEs.
The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS
opcode with io_uring_register(2).
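The intended lifecycle might look like the following sketch (constants are copied from this patch, normally from <linux/io_uring.h>; the syscall sequence is shown as a comment since running it needs a kernel with this support):

```c
/* Sketch of the IORING_SETUP_R_DISABLED lifecycle. Constants copied
 * from this patch; a real program gets them from <linux/io_uring.h>. */
#define IORING_SETUP_R_DISABLED      (1U << 6)  /* start with ring disabled */
#define IORING_REGISTER_RESTRICTIONS 11
#define IORING_REGISTER_ENABLE_RINGS 12

/* Start disabled on top of whatever IORING_SETUP_* flags the caller
 * already uses. */
static inline unsigned int setup_flags_start_disabled(unsigned int flags)
{
	return flags | IORING_SETUP_R_DISABLED;
}

/*
 * Intended call sequence (illustrative only):
 *
 *   p.flags = setup_flags_start_disabled(p.flags);
 *   fd = io_uring_setup(entries, &p);              // ring starts disabled
 *   io_uring_register(fd, IORING_REGISTER_RESTRICTIONS, res, nr);
 *   ...register buffers/files...
 *   io_uring_register(fd, IORING_REGISTER_ENABLE_RINGS, NULL, 0);
 *   // only now does io_uring_enter(fd, ...) process SQEs;
 *   // fd can then be handed to untrusted code
 */
```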
Suggested-by: Jens Axboe <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
v4:
- fixed io_uring_enter() exit path when ring is disabled
v3:
- enabled restrictions only when the rings start
RFC v2:
- removed return value of io_sq_offload_start()
---
fs/io_uring.c | 52 ++++++++++++++++++++++++++++++-----
include/uapi/linux/io_uring.h | 2 ++
2 files changed, 47 insertions(+), 7 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5f62997c147b..b036f3373fbe 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -226,6 +226,7 @@ struct io_restriction {
DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
u8 sqe_flags_allowed;
u8 sqe_flags_required;
+ bool registered;
};
struct io_ring_ctx {
@@ -7497,8 +7498,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
return ret;
}
-static int io_sq_offload_start(struct io_ring_ctx *ctx,
- struct io_uring_params *p)
+static int io_sq_offload_create(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
{
int ret;
@@ -7532,7 +7533,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
ctx->sqo_thread = NULL;
goto err;
}
- wake_up_process(ctx->sqo_thread);
} else if (p->flags & IORING_SETUP_SQ_AFF) {
/* Can't have SQ_AFF without SQPOLL */
ret = -EINVAL;
@@ -7549,6 +7549,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
return ret;
}
+static void io_sq_offload_start(struct io_ring_ctx *ctx)
+{
+ if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
+ wake_up_process(ctx->sqo_thread);
+}
+
static inline void __io_unaccount_mem(struct user_struct *user,
unsigned long nr_pages)
{
@@ -8295,6 +8301,9 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
if (!percpu_ref_tryget(&ctx->refs))
goto out_fput;
+ if (ctx->flags & IORING_SETUP_R_DISABLED)
+ goto out_fput;
+
/*
* For SQ polling, the thread will do all submissions and completions.
* Just return the requested submit count, and wake the thread if
@@ -8612,10 +8621,13 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
if (ret)
goto err;
- ret = io_sq_offload_start(ctx, p);
+ ret = io_sq_offload_create(ctx, p);
if (ret)
goto err;
+ if (!(p->flags & IORING_SETUP_R_DISABLED))
+ io_sq_offload_start(ctx);
+
memset(&p->sq_off, 0, sizeof(p->sq_off));
p->sq_off.head = offsetof(struct io_rings, sq.head);
p->sq_off.tail = offsetof(struct io_rings, sq.tail);
@@ -8678,7 +8690,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
- IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ))
+ IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
+ IORING_SETUP_R_DISABLED))
return -EINVAL;
return io_uring_create(entries, &p, params);
@@ -8761,8 +8774,12 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
size_t size;
int i, ret;
+ /* Restrictions allowed only if rings started disabled */
+ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+ return -EINVAL;
+
/* We allow only a single restrictions registration */
- if (ctx->restricted)
+ if (ctx->restrictions.registered)
return -EBUSY;
if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
@@ -8814,12 +8831,27 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
if (ret != 0)
memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
else
- ctx->restricted = 1;
+ ctx->restrictions.registered = true;
kfree(res);
return ret;
}
+static int io_register_enable_rings(struct io_ring_ctx *ctx)
+{
+ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+ return -EINVAL;
+
+ if (ctx->restrictions.registered)
+ ctx->restricted = 1;
+
+ ctx->flags &= ~IORING_SETUP_R_DISABLED;
+
+ io_sq_offload_start(ctx);
+
+ return 0;
+}
+
static bool io_register_op_must_quiesce(int op)
{
switch (op) {
@@ -8941,6 +8973,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
break;
ret = io_unregister_personality(ctx, nr_args);
break;
+ case IORING_REGISTER_ENABLE_RINGS:
+ ret = -EINVAL;
+ if (arg || nr_args)
+ break;
+ ret = io_register_enable_rings(ctx);
+ break;
case IORING_REGISTER_RESTRICTIONS:
ret = io_register_restrictions(ctx, arg, nr_args);
break;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 6e7f2e5e917b..a0c85e0e9016 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -95,6 +95,7 @@ enum {
#define IORING_SETUP_CQSIZE (1U << 3) /* app defines CQ size */
#define IORING_SETUP_CLAMP (1U << 4) /* clamp SQ/CQ ring sizes */
#define IORING_SETUP_ATTACH_WQ (1U << 5) /* attach to existing wq */
+#define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */
enum {
IORING_OP_NOP,
@@ -268,6 +269,7 @@ enum {
IORING_REGISTER_PERSONALITY = 9,
IORING_UNREGISTER_PERSONALITY = 10,
IORING_REGISTER_RESTRICTIONS = 11,
+ IORING_REGISTER_ENABLE_RINGS = 12,
/* this goes last */
IORING_REGISTER_LAST
--
2.26.2
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v6 3/3] io_uring: allow disabling rings during the creation
2020-08-27 14:58 ` [PATCH v6 3/3] io_uring: allow disabling rings during the creation Stefano Garzarella
@ 2020-09-08 13:44 ` Stefano Garzarella
2020-09-08 13:57 ` Jens Axboe
0 siblings, 1 reply; 11+ messages in thread
From: Stefano Garzarella @ 2020-09-08 13:44 UTC (permalink / raw)
To: Jens Axboe
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
Hi Jens,
On Thu, Aug 27, 2020 at 04:58:31PM +0200, Stefano Garzarella wrote:
> This patch adds a new IORING_SETUP_R_DISABLED flag to start the
> rings disabled, allowing the user to register restrictions,
> buffers, and files before starting to process SQEs.
>
> When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
> the SQPOLL kthread is not started.
>
> Restrictions registration is allowed only while the rings are
> disabled, to prevent concurrency issues while processing SQEs.
>
> The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS
> opcode with io_uring_register(2).
>
> Suggested-by: Jens Axboe <[email protected]>
> Reviewed-by: Kees Cook <[email protected]>
> Signed-off-by: Stefano Garzarella <[email protected]>
> ---
> v4:
> - fixed io_uring_enter() exit path when ring is disabled
>
> v3:
> - enabled restrictions only when the rings start
>
> RFC v2:
> - removed return value of io_sq_offload_start()
> ---
> fs/io_uring.c | 52 ++++++++++++++++++++++++++++++-----
> include/uapi/linux/io_uring.h | 2 ++
> 2 files changed, 47 insertions(+), 7 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 5f62997c147b..b036f3373fbe 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -226,6 +226,7 @@ struct io_restriction {
> DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
> u8 sqe_flags_allowed;
> u8 sqe_flags_required;
> + bool registered;
> };
>
> struct io_ring_ctx {
> @@ -7497,8 +7498,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
> return ret;
> }
>
> -static int io_sq_offload_start(struct io_ring_ctx *ctx,
> - struct io_uring_params *p)
> +static int io_sq_offload_create(struct io_ring_ctx *ctx,
> + struct io_uring_params *p)
> {
> int ret;
>
> @@ -7532,7 +7533,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
> ctx->sqo_thread = NULL;
> goto err;
> }
> - wake_up_process(ctx->sqo_thread);
> } else if (p->flags & IORING_SETUP_SQ_AFF) {
> /* Can't have SQ_AFF without SQPOLL */
> ret = -EINVAL;
> @@ -7549,6 +7549,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
> return ret;
> }
>
> +static void io_sq_offload_start(struct io_ring_ctx *ctx)
> +{
> + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
> + wake_up_process(ctx->sqo_thread);
> +}
> +
> static inline void __io_unaccount_mem(struct user_struct *user,
> unsigned long nr_pages)
> {
> @@ -8295,6 +8301,9 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
> if (!percpu_ref_tryget(&ctx->refs))
> goto out_fput;
>
> + if (ctx->flags & IORING_SETUP_R_DISABLED)
> + goto out_fput;
> +
While writing the man page paragraph, I discovered that if the rings are
disabled I returned ENXIO error in io_uring_enter(), coming from the previous
check.
I'm not sure it is the best one, maybe I can return EBADFD or another
error.
What do you suggest?
I'll add a test for this case.
Thanks,
Stefano
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v6 3/3] io_uring: allow disabling rings during the creation
2020-09-08 13:44 ` Stefano Garzarella
@ 2020-09-08 13:57 ` Jens Axboe
2020-09-08 14:10 ` Stefano Garzarella
0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2020-09-08 13:57 UTC (permalink / raw)
To: Stefano Garzarella
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On 9/8/20 7:44 AM, Stefano Garzarella wrote:
> Hi Jens,
>
> On Thu, Aug 27, 2020 at 04:58:31PM +0200, Stefano Garzarella wrote:
>> This patch adds a new IORING_SETUP_R_DISABLED flag to start the
>> rings disabled, allowing the user to register restrictions,
>> buffers, and files before starting to process SQEs.
>>
>> When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
>> the SQPOLL kthread is not started.
>>
>> Restrictions registration is allowed only while the rings are
>> disabled, to prevent concurrency issues while processing SQEs.
>>
>> The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS
>> opcode with io_uring_register(2).
>>
>> Suggested-by: Jens Axboe <[email protected]>
>> Reviewed-by: Kees Cook <[email protected]>
>> Signed-off-by: Stefano Garzarella <[email protected]>
>> ---
>> v4:
>> - fixed io_uring_enter() exit path when ring is disabled
>>
>> v3:
>> - enabled restrictions only when the rings start
>>
>> RFC v2:
>> - removed return value of io_sq_offload_start()
>> ---
>> fs/io_uring.c | 52 ++++++++++++++++++++++++++++++-----
>> include/uapi/linux/io_uring.h | 2 ++
>> 2 files changed, 47 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 5f62997c147b..b036f3373fbe 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -226,6 +226,7 @@ struct io_restriction {
>> DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
>> u8 sqe_flags_allowed;
>> u8 sqe_flags_required;
>> + bool registered;
>> };
>>
>> struct io_ring_ctx {
>> @@ -7497,8 +7498,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
>> return ret;
>> }
>>
>> -static int io_sq_offload_start(struct io_ring_ctx *ctx,
>> - struct io_uring_params *p)
>> +static int io_sq_offload_create(struct io_ring_ctx *ctx,
>> + struct io_uring_params *p)
>> {
>> int ret;
>>
>> @@ -7532,7 +7533,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
>> ctx->sqo_thread = NULL;
>> goto err;
>> }
>> - wake_up_process(ctx->sqo_thread);
>> } else if (p->flags & IORING_SETUP_SQ_AFF) {
>> /* Can't have SQ_AFF without SQPOLL */
>> ret = -EINVAL;
>> @@ -7549,6 +7549,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
>> return ret;
>> }
>>
>> +static void io_sq_offload_start(struct io_ring_ctx *ctx)
>> +{
>> + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
>> + wake_up_process(ctx->sqo_thread);
>> +}
>> +
>> static inline void __io_unaccount_mem(struct user_struct *user,
>> unsigned long nr_pages)
>> {
>> @@ -8295,6 +8301,9 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>> if (!percpu_ref_tryget(&ctx->refs))
>> goto out_fput;
>>
>> + if (ctx->flags & IORING_SETUP_R_DISABLED)
>> + goto out_fput;
>> +
>
> While writing the man page paragraph, I discovered that if the rings are
> disabled I returned ENXIO error in io_uring_enter(), coming from the previous
> check.
>
> I'm not sure it is the best one, maybe I can return EBADFD or another
> error.
>
> What do you suggest?
EBADFD seems indeed the most appropriate - the fd is valid, but not in the
right state to do this.
> I'll add a test for this case.
Thanks!
--
Jens Axboe
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v6 3/3] io_uring: allow disabling rings during the creation
2020-09-08 13:57 ` Jens Axboe
@ 2020-09-08 14:10 ` Stefano Garzarella
2020-09-08 14:11 ` Jens Axboe
0 siblings, 1 reply; 11+ messages in thread
From: Stefano Garzarella @ 2020-09-08 14:10 UTC (permalink / raw)
To: Jens Axboe
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On Tue, Sep 08, 2020 at 07:57:08AM -0600, Jens Axboe wrote:
> On 9/8/20 7:44 AM, Stefano Garzarella wrote:
> > Hi Jens,
> >
> > On Thu, Aug 27, 2020 at 04:58:31PM +0200, Stefano Garzarella wrote:
> >> This patch adds a new IORING_SETUP_R_DISABLED flag to start the
> >> rings disabled, allowing the user to register restrictions,
> >> buffers, and files before starting to process SQEs.
> >>
> >> When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
> >> the SQPOLL kthread is not started.
> >>
> >> Restrictions registration is allowed only while the rings are
> >> disabled, to prevent concurrency issues while processing SQEs.
> >>
> >> The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS
> >> opcode with io_uring_register(2).
> >>
> >> Suggested-by: Jens Axboe <[email protected]>
> >> Reviewed-by: Kees Cook <[email protected]>
> >> Signed-off-by: Stefano Garzarella <[email protected]>
> >> ---
> >> v4:
> >> - fixed io_uring_enter() exit path when ring is disabled
> >>
> >> v3:
> >> - enabled restrictions only when the rings start
> >>
> >> RFC v2:
> >> - removed return value of io_sq_offload_start()
> >> ---
> >> fs/io_uring.c | 52 ++++++++++++++++++++++++++++++-----
> >> include/uapi/linux/io_uring.h | 2 ++
> >> 2 files changed, 47 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/fs/io_uring.c b/fs/io_uring.c
> >> index 5f62997c147b..b036f3373fbe 100644
> >> --- a/fs/io_uring.c
> >> +++ b/fs/io_uring.c
> >> @@ -226,6 +226,7 @@ struct io_restriction {
> >> DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
> >> u8 sqe_flags_allowed;
> >> u8 sqe_flags_required;
> >> + bool registered;
> >> };
> >>
> >> struct io_ring_ctx {
> >> @@ -7497,8 +7498,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
> >> return ret;
> >> }
> >>
> >> -static int io_sq_offload_start(struct io_ring_ctx *ctx,
> >> - struct io_uring_params *p)
> >> +static int io_sq_offload_create(struct io_ring_ctx *ctx,
> >> + struct io_uring_params *p)
> >> {
> >> int ret;
> >>
> >> @@ -7532,7 +7533,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
> >> ctx->sqo_thread = NULL;
> >> goto err;
> >> }
> >> - wake_up_process(ctx->sqo_thread);
> >> } else if (p->flags & IORING_SETUP_SQ_AFF) {
> >> /* Can't have SQ_AFF without SQPOLL */
> >> ret = -EINVAL;
> >> @@ -7549,6 +7549,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
> >> return ret;
> >> }
> >>
> >> +static void io_sq_offload_start(struct io_ring_ctx *ctx)
> >> +{
> >> + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
> >> + wake_up_process(ctx->sqo_thread);
> >> +}
> >> +
> >> static inline void __io_unaccount_mem(struct user_struct *user,
> >> unsigned long nr_pages)
> >> {
> >> @@ -8295,6 +8301,9 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
> >> if (!percpu_ref_tryget(&ctx->refs))
> >> goto out_fput;
> >>
> >> + if (ctx->flags & IORING_SETUP_R_DISABLED)
> >> + goto out_fput;
> >> +
> >
> > While writing the man page paragraph, I discovered that if the rings are
> > disabled I returned ENXIO error in io_uring_enter(), coming from the previous
> > check.
> >
> > I'm not sure it is the best one, maybe I can return EBADFD or another
> > error.
> >
> > What do you suggest?
>
> EBADFD seems indeed the most appropriate - the fd is valid, but not in the
> right state to do this.
Yeah, the same interpretation as mine!
Also, in io_uring_register() I'm returning EINVAL if the rings are not
disabled and the user wants to register restrictions.
Maybe also in this case I can return EBADFD.
I'll send a patch with the fixes.
Thanks,
Stefano
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v6 3/3] io_uring: allow disabling rings during the creation
2020-09-08 14:10 ` Stefano Garzarella
@ 2020-09-08 14:11 ` Jens Axboe
0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2020-09-08 14:11 UTC (permalink / raw)
To: Stefano Garzarella
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On 9/8/20 8:10 AM, Stefano Garzarella wrote:
> On Tue, Sep 08, 2020 at 07:57:08AM -0600, Jens Axboe wrote:
>> On 9/8/20 7:44 AM, Stefano Garzarella wrote:
>>> Hi Jens,
>>>
>>> On Thu, Aug 27, 2020 at 04:58:31PM +0200, Stefano Garzarella wrote:
>>>> This patch adds a new IORING_SETUP_R_DISABLED flag to start the
>>>> rings disabled, allowing the user to register restrictions,
>>>> buffers, and files before starting to process SQEs.
>>>>
>>>> When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
>>>> the SQPOLL kthread is not started.
>>>>
>>>> Restrictions registration is allowed only while the rings are
>>>> disabled, to prevent concurrency issues while processing SQEs.
>>>>
>>>> The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS
>>>> opcode with io_uring_register(2).
>>>>
>>>> Suggested-by: Jens Axboe <[email protected]>
>>>> Reviewed-by: Kees Cook <[email protected]>
>>>> Signed-off-by: Stefano Garzarella <[email protected]>
>>>> ---
>>>> v4:
>>>> - fixed io_uring_enter() exit path when ring is disabled
>>>>
>>>> v3:
>>>> - enabled restrictions only when the rings start
>>>>
>>>> RFC v2:
>>>> - removed return value of io_sq_offload_start()
>>>> ---
>>>> fs/io_uring.c | 52 ++++++++++++++++++++++++++++++-----
>>>> include/uapi/linux/io_uring.h | 2 ++
>>>> 2 files changed, 47 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>>> index 5f62997c147b..b036f3373fbe 100644
>>>> --- a/fs/io_uring.c
>>>> +++ b/fs/io_uring.c
>>>> @@ -226,6 +226,7 @@ struct io_restriction {
>>>> DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
>>>> u8 sqe_flags_allowed;
>>>> u8 sqe_flags_required;
>>>> + bool registered;
>>>> };
>>>>
>>>> struct io_ring_ctx {
>>>> @@ -7497,8 +7498,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
>>>> return ret;
>>>> }
>>>>
>>>> -static int io_sq_offload_start(struct io_ring_ctx *ctx,
>>>> - struct io_uring_params *p)
>>>> +static int io_sq_offload_create(struct io_ring_ctx *ctx,
>>>> + struct io_uring_params *p)
>>>> {
>>>> int ret;
>>>>
>>>> @@ -7532,7 +7533,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
>>>> ctx->sqo_thread = NULL;
>>>> goto err;
>>>> }
>>>> - wake_up_process(ctx->sqo_thread);
>>>> } else if (p->flags & IORING_SETUP_SQ_AFF) {
>>>> /* Can't have SQ_AFF without SQPOLL */
>>>> ret = -EINVAL;
>>>> @@ -7549,6 +7549,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
>>>> return ret;
>>>> }
>>>>
>>>> +static void io_sq_offload_start(struct io_ring_ctx *ctx)
>>>> +{
>>>> + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
>>>> + wake_up_process(ctx->sqo_thread);
>>>> +}
>>>> +
>>>> static inline void __io_unaccount_mem(struct user_struct *user,
>>>> unsigned long nr_pages)
>>>> {
>>>> @@ -8295,6 +8301,9 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>>>> if (!percpu_ref_tryget(&ctx->refs))
>>>> goto out_fput;
>>>>
>>>> + if (ctx->flags & IORING_SETUP_R_DISABLED)
>>>> + goto out_fput;
>>>> +
>>>
>>> While writing the man page paragraph, I discovered that if the rings are
>>> disabled, io_uring_enter() returns an ENXIO error, coming from the
>>> preceding check.
>>>
>>> I'm not sure it is the best one, maybe I can return EBADFD or another
>>> error.
>>>
>>> What do you suggest?
>>
>> EBADFD seems indeed the most appropriate - the fd is valid, but not in the
>> right state to do this.
>
> Yeah, the same interpretation as mine!
>
> Also, in io_uring_register() I'm returning EINVAL if the rings are not
> disabled and the user wants to register restrictions.
> Maybe also in this case I can return EBADFD.
Yes, let's do that. EINVAL is always way too overloaded, and it makes sense
to use EBADFD consistently for any operation related to that.
--
Jens Axboe
* Re: [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests
2020-08-27 14:58 [PATCH v6 0/3] io_uring: add restrictions to support untrusted applications and guests Stefano Garzarella
` (2 preceding siblings ...)
2020-08-27 14:58 ` [PATCH v6 3/3] io_uring: allow disabling rings during the creation Stefano Garzarella
@ 2020-08-28 3:01 ` Jens Axboe
3 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2020-08-28 3:01 UTC (permalink / raw)
To: Stefano Garzarella
Cc: Kernel Hardening, Christian Brauner, linux-fsdevel, io-uring,
Alexander Viro, Stefan Hajnoczi, Jann Horn, Jeff Moyer,
Aleksa Sarai, Sargun Dhillon, linux-kernel, Kees Cook
On 8/27/20 8:58 AM, Stefano Garzarella wrote:
> v6:
> - moved restriction checks in a function [Jens]
> - changed ret value handling in io_register_restrictions() [Jens]
>
> v5: https://lore.kernel.org/io-uring/[email protected]/
> v4: https://lore.kernel.org/io-uring/[email protected]/
> v3: https://lore.kernel.org/io-uring/[email protected]/
> RFC v2: https://lore.kernel.org/io-uring/[email protected]
> RFC v1: https://lore.kernel.org/io-uring/[email protected]
>
> Following the proposal that I sent about restrictions [1], I wrote this series
> to add restrictions to io_uring.
>
> I also wrote helpers in liburing and a test case (test/register-restrictions.c)
> available in this repository:
> https://github.com/stefano-garzarella/liburing (branch: io_uring_restrictions)
>
> Just to recap the proposal, the idea is to add some restrictions to the
> operations (sqe opcode and flags, register opcode) to safely allow untrusted
> applications or guests to use io_uring queues.
>
> The first patch changes io_uring_register(2) opcodes into an enumeration to
> keep track of the last opcode available.
>
> The second patch adds IOURING_REGISTER_RESTRICTIONS opcode and the code to
> handle restrictions.
>
> The third patch adds the IORING_SETUP_R_DISABLED flag to start the rings
> disabled, allowing the user to register restrictions, buffers, and files
> before starting to process SQEs.
Applied, thanks.
--
Jens Axboe