public inbox for io-uring@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCHSET 0/5] io_uring restrictions cleanups and improvements
@ 2026-01-12 15:14 Jens Axboe
  2026-01-12 15:14 ` [PATCH 1/5] io_uring/register: have io_parse_restrictions() return number of ops Jens Axboe
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring

Hi,

While working on the task based restriction sets, I ended up doing a
few cleanups and improvements along the way. These really have nothing
to do with the added feature, hence I'm splitting them out into a
separate patchset.

This series is really 4 patches doing cleanup and preparation to make
it easier to add the task based restrictions, while patch 5 is an
improvement to how restrictions are checked. I ran some overhead
numbers, and for microbenchmarks the cost is honestly surprisingly
low. For example, running a pure NOP workload at 13-15M op/sec,
checking restrictions accounts for only about 1.5% of the CPU time.
Nevertheless, I suspect the most common restriction applied is
limiting which register operations can be done. Hence it makes sense
to track IORING_OP* and IORING_REGISTER* restrictions separately, so
that checking the op based restrictions can be avoided if only
register based ones have been set.

 include/linux/io_uring_types.h |  8 ++++++--
 io_uring/io_uring.c            |  6 ++++--
 io_uring/register.c            | 35 ++++++++++++++++++++++------------
 3 files changed, 33 insertions(+), 16 deletions(-)

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5] io_uring/register: have io_parse_restrictions() return number of ops
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
@ 2026-01-12 15:14 ` Jens Axboe
  2026-01-12 15:14 ` [PATCH 2/5] io_uring/register: have io_parse_restrictions() set restrictions enabled Jens Axboe
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Rather than return 0 on success, have io_parse_restrictions() return
the number of parsed entries on success. As before, any return < 0
indicates an error.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/register.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/io_uring/register.c b/io_uring/register.c
index 62d39b3ff317..2611cf87ecf8 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -103,6 +103,10 @@ static int io_register_personality(struct io_ring_ctx *ctx)
 	return id;
 }
 
+/*
+ * Returns number of restrictions parsed and added on success, or < 0 for
+ * an error.
+ */
 static __cold int io_parse_restrictions(void __user *arg, unsigned int nr_args,
 					struct io_restriction *restrictions)
 {
@@ -145,9 +149,7 @@ static __cold int io_parse_restrictions(void __user *arg, unsigned int nr_args,
 			goto err;
 		}
 	}
-
-	ret = 0;
-
+	ret = nr_args;
 err:
 	kfree(res);
 	return ret;
@@ -168,11 +170,12 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
 
 	ret = io_parse_restrictions(arg, nr_args, &ctx->restrictions);
 	/* Reset all restrictions if an error happened */
-	if (ret != 0)
+	if (ret < 0) {
 		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
-	else
-		ctx->restrictions.registered = true;
-	return ret;
+		return ret;
+	}
+	ctx->restrictions.registered = true;
+	return 0;
 }
 
 static int io_register_enable_rings(struct io_ring_ctx *ctx)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/5] io_uring/register: have io_parse_restrictions() set restrictions enabled
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
  2026-01-12 15:14 ` [PATCH 1/5] io_uring/register: have io_parse_restrictions() return number of ops Jens Axboe
@ 2026-01-12 15:14 ` Jens Axboe
  2026-01-12 15:14 ` [PATCH 3/5] io_uring/register: set ctx->restricted when restrictions are parsed Jens Axboe
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Rather than leave this to the caller, have io_parse_restrictions() set
->registered = true if restrictions have been enabled. This is in
preparation for having finer grained restrictions.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/register.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/register.c b/io_uring/register.c
index 2611cf87ecf8..4b711c3966a8 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -150,6 +150,7 @@ static __cold int io_parse_restrictions(void __user *arg, unsigned int nr_args,
 		}
 	}
 	ret = nr_args;
+	restrictions->registered = true;
 err:
 	kfree(res);
 	return ret;
@@ -174,7 +175,6 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
 		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
 		return ret;
 	}
-	ctx->restrictions.registered = true;
 	return 0;
 }
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/5] io_uring/register: set ctx->restricted when restrictions are parsed
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
  2026-01-12 15:14 ` [PATCH 1/5] io_uring/register: have io_parse_restrictions() return number of ops Jens Axboe
  2026-01-12 15:14 ` [PATCH 2/5] io_uring/register: have io_parse_restrictions() set restrictions enabled Jens Axboe
@ 2026-01-12 15:14 ` Jens Axboe
  2026-01-12 15:14 ` [PATCH 4/5] io_uring: move ctx->restricted check into io_check_restriction() Jens Axboe
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Rather than defer this until the rings are enabled, just set
ctx->restricted upfront when the restrictions are parsed and
registered. Since the flag now gets set before the ring is enabled,
gate the register opcode restriction check on the ring no longer being
in R_DISABLED state, so that restrictions cannot block the register
opcodes needed to enable the ring in the first place.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/register.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/io_uring/register.c b/io_uring/register.c
index 4b711c3966a8..b3aac668a665 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -175,6 +175,8 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
 		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
 		return ret;
 	}
+	if (ctx->restrictions.registered)
+		ctx->restricted = 1;
 	return 0;
 }
 
@@ -193,9 +195,6 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
 			io_activate_pollwq(ctx);
 	}
 
-	if (ctx->restrictions.registered)
-		ctx->restricted = 1;
-
 	ctx->flags &= ~IORING_SETUP_R_DISABLED;
 	if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
 		wake_up(&ctx->sq_data->wait);
@@ -626,7 +625,7 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 	if (ctx->submitter_task && ctx->submitter_task != current)
 		return -EEXIST;
 
-	if (ctx->restricted) {
+	if (ctx->restricted && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
 		opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
 		if (!test_bit(opcode, ctx->restrictions.register_op))
 			return -EACCES;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 4/5] io_uring: move ctx->restricted check into io_check_restriction()
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
                   ` (2 preceding siblings ...)
  2026-01-12 15:14 ` [PATCH 3/5] io_uring/register: set ctx->restricted when restrictions are parsed Jens Axboe
@ 2026-01-12 15:14 ` Jens Axboe
  2026-01-12 15:14 ` [PATCH 5/5] io_uring: track restrictions separately for IORING_OP and IORING_REGISTER Jens Axboe
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Just a cleanup; it makes the code easier to read without the dependent
nested checks.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1aebdba425e8..452d87057527 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2056,6 +2056,8 @@ static inline bool io_check_restriction(struct io_ring_ctx *ctx,
 					struct io_kiocb *req,
 					unsigned int sqe_flags)
 {
+	if (!ctx->restricted)
+		return true;
 	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
 		return false;
 
@@ -2158,7 +2160,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		}
 	}
 	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
-		if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
+		if (!io_check_restriction(ctx, req, sqe_flags))
 			return io_init_fail_req(req, -EACCES);
 		/* knock it to the slow queue path, will be drained there */
 		if (ctx->drain_active)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 5/5] io_uring: track restrictions separately for IORING_OP and IORING_REGISTER
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
                   ` (3 preceding siblings ...)
  2026-01-12 15:14 ` [PATCH 4/5] io_uring: move ctx->restricted check into io_check_restriction() Jens Axboe
@ 2026-01-12 15:14 ` Jens Axboe
  2026-01-13 17:27 ` [PATCHSET 0/5] io_uring restrictions cleanups and improvements Gabriel Krisman Bertazi
  2026-01-13 18:23 ` Jens Axboe
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-12 15:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

It's quite likely that only register opcode restrictions exist, in
which case we'd never need to check the normal opcodes. Split
ctx->restricted into two separate fields, one for I/O opcodes, and one
for register opcodes.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/io_uring_types.h |  8 ++++++--
 io_uring/io_uring.c            |  4 ++--
 io_uring/register.c            | 19 ++++++++++++++-----
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 54fd30abf2b8..e4c804f99c30 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -224,7 +224,10 @@ struct io_restriction {
 	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
 	u8 sqe_flags_allowed;
 	u8 sqe_flags_required;
-	bool registered;
+	/* IORING_OP_* restrictions exist */
+	bool op_registered;
+	/* IORING_REGISTER_* restrictions exist */
+	bool reg_registered;
 };
 
 struct io_submit_link {
@@ -259,7 +262,8 @@ struct io_ring_ctx {
 	struct {
 		unsigned int		flags;
 		unsigned int		drain_next: 1;
-		unsigned int		restricted: 1;
+		unsigned int		op_restricted: 1;
+		unsigned int		reg_restricted: 1;
 		unsigned int		off_timeout_used: 1;
 		unsigned int		drain_active: 1;
 		unsigned int		has_evfd: 1;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 452d87057527..8a1dfdc2c3a6 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2056,7 +2056,7 @@ static inline bool io_check_restriction(struct io_ring_ctx *ctx,
 					struct io_kiocb *req,
 					unsigned int sqe_flags)
 {
-	if (!ctx->restricted)
+	if (!ctx->op_restricted)
 		return true;
 	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
 		return false;
@@ -2159,7 +2159,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			io_init_drain(ctx);
 		}
 	}
-	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
+	if (unlikely(ctx->op_restricted || ctx->drain_active || ctx->drain_next)) {
 		if (!io_check_restriction(ctx, req, sqe_flags))
 			return io_init_fail_req(req, -EACCES);
 		/* knock it to the slow queue path, will be drained there */
diff --git a/io_uring/register.c b/io_uring/register.c
index b3aac668a665..abbc8cb1934c 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -133,24 +133,31 @@ static __cold int io_parse_restrictions(void __user *arg, unsigned int nr_args,
 			if (res[i].register_op >= IORING_REGISTER_LAST)
 				goto err;
 			__set_bit(res[i].register_op, restrictions->register_op);
+			restrictions->reg_registered = true;
 			break;
 		case IORING_RESTRICTION_SQE_OP:
 			if (res[i].sqe_op >= IORING_OP_LAST)
 				goto err;
 			__set_bit(res[i].sqe_op, restrictions->sqe_op);
+			restrictions->op_registered = true;
 			break;
 		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
 			restrictions->sqe_flags_allowed = res[i].sqe_flags;
+			restrictions->op_registered = true;
 			break;
 		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
 			restrictions->sqe_flags_required = res[i].sqe_flags;
+			restrictions->op_registered = true;
 			break;
 		default:
 			goto err;
 		}
 	}
 	ret = nr_args;
-	restrictions->registered = true;
+	if (!nr_args) {
+		restrictions->op_registered = true;
+		restrictions->reg_registered = true;
+	}
 err:
 	kfree(res);
 	return ret;
@@ -166,7 +173,7 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
 		return -EBADFD;
 
 	/* We allow only a single restrictions registration */
-	if (ctx->restrictions.registered)
+	if (ctx->restrictions.op_registered || ctx->restrictions.reg_registered)
 		return -EBUSY;
 
 	ret = io_parse_restrictions(arg, nr_args, &ctx->restrictions);
@@ -175,8 +182,10 @@ static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
 		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
 		return ret;
 	}
-	if (ctx->restrictions.registered)
-		ctx->restricted = 1;
+	if (ctx->restrictions.op_registered)
+		ctx->op_restricted = 1;
+	if (ctx->restrictions.reg_registered)
+		ctx->reg_restricted = 1;
 	return 0;
 }
 
@@ -625,7 +634,7 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 	if (ctx->submitter_task && ctx->submitter_task != current)
 		return -EEXIST;
 
-	if (ctx->restricted && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
+	if (ctx->reg_restricted && !(ctx->flags & IORING_SETUP_R_DISABLED)) {
 		opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
 		if (!test_bit(opcode, ctx->restrictions.register_op))
 			return -EACCES;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCHSET 0/5] io_uring restrictions cleanups and improvements
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
                   ` (4 preceding siblings ...)
  2026-01-12 15:14 ` [PATCH 5/5] io_uring: track restrictions separately for IORING_OP and IORING_REGISTER Jens Axboe
@ 2026-01-13 17:27 ` Gabriel Krisman Bertazi
  2026-01-13 18:22   ` Jens Axboe
  2026-01-13 18:23 ` Jens Axboe
  6 siblings, 1 reply; 9+ messages in thread
From: Gabriel Krisman Bertazi @ 2026-01-13 17:27 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring

Jens Axboe <axboe@kernel.dk> writes:

> Hi,
>
> While working on the task based restriction sets, I ended up doing a
> few cleanups and improvements along the way. These really have nothing
> to do with the added feature, hence I'm splitting them out into a
> separate patchset.
>
> This series is really 4 patches doing cleanup and preparation to make
> it easier to add the task based restrictions, while patch 5 is an
> improvement to how restrictions are checked. I ran some overhead
> numbers, and for microbenchmarks the cost is honestly surprisingly
> low. For example, running a pure NOP workload at 13-15M op/sec,
> checking restrictions accounts for only about 1.5% of the CPU time.
> Nevertheless, I suspect the most common restriction applied is
> limiting which register operations can be done. Hence it makes sense
> to track IORING_OP* and IORING_REGISTER* restrictions separately, so
> that checking the op based restrictions can be avoided if only
> register based ones have been set.

Looks good to me.

Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>


-- 
Gabriel Krisman Bertazi

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCHSET 0/5] io_uring restrictions cleanups and improvements
  2026-01-13 17:27 ` [PATCHSET 0/5] io_uring restrictions cleanups and improvements Gabriel Krisman Bertazi
@ 2026-01-13 18:22   ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-13 18:22 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: io-uring

On 1/13/26 10:27 AM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <axboe@kernel.dk> writes:
> 
>> Hi,
>>
>> While working on the task based restriction sets, I ended up doing a
>> few cleanups and improvements along the way. These really have nothing
>> to do with the added feature, hence I'm splitting them out into a
>> separate patchset.
>>
>> This series is really 4 patches doing cleanup and preparation to make
>> it easier to add the task based restrictions, while patch 5 is an
>> improvement to how restrictions are checked. I ran some overhead
>> numbers, and for microbenchmarks the cost is honestly surprisingly
>> low. For example, running a pure NOP workload at 13-15M op/sec,
>> checking restrictions accounts for only about 1.5% of the CPU time.
>> Nevertheless, I suspect the most common restriction applied is
>> limiting which register operations can be done. Hence it makes sense
>> to track IORING_OP* and IORING_REGISTER* restrictions separately, so
>> that checking the op based restrictions can be avoided if only
>> register based ones have been set.
> 
> Looks good to me.
> 
> Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>

Thanks for taking a look!

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCHSET 0/5] io_uring restrictions cleanups and improvements
  2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
                   ` (5 preceding siblings ...)
  2026-01-13 17:27 ` [PATCHSET 0/5] io_uring restrictions cleanups and improvements Gabriel Krisman Bertazi
@ 2026-01-13 18:23 ` Jens Axboe
  6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2026-01-13 18:23 UTC (permalink / raw)
  To: io-uring, Jens Axboe


On Mon, 12 Jan 2026 08:14:40 -0700, Jens Axboe wrote:
> While working on the task based restriction sets, I ended up doing a
> few cleanups and improvements along the way. These really have nothing
> to do with the added feature, hence I'm splitting them out into a
> separate patchset.
>
> This series is really 4 patches doing cleanup and preparation to make
> it easier to add the task based restrictions, while patch 5 is an
> improvement to how restrictions are checked. I ran some overhead
> numbers, and for microbenchmarks the cost is honestly surprisingly
> low. For example, running a pure NOP workload at 13-15M op/sec,
> checking restrictions accounts for only about 1.5% of the CPU time.
> Nevertheless, I suspect the most common restriction applied is
> limiting which register operations can be done. Hence it makes sense
> to track IORING_OP* and IORING_REGISTER* restrictions separately, so
> that checking the op based restrictions can be avoided if only
> register based ones have been set.
> 
> [...]

Applied, thanks!

[1/5] io_uring/register: have io_parse_restrictions() return number of ops
      commit: 51fff55a66d89d76fcaeaa277d53bdf5b19efa0e
[2/5] io_uring/register: have io_parse_restrictions() set restrictions enabled
      commit: e6ed0f051d557be5a782a42f74dddbc8ed5309ec
[3/5] io_uring/register: set ctx->restricted when restrictions are parsed
      commit: 09bd84421defa0a9dcebdcdaf8b7deb1870855d0
[4/5] io_uring: move ctx->restricted check into io_check_restriction()
      commit: 991fb85a1d43f0d0237a405d5535024f78a873e5
[5/5] io_uring: track restrictions separately for IORING_OP and IORING_REGISTER
      commit: d6406c45f14842019cfaaba19fe2a76ef9fa831c

Best regards,
-- 
Jens Axboe




^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2026-01-13 18:23 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-01-12 15:14 [PATCHSET 0/5] io_uring restrictions cleanups and improvements Jens Axboe
2026-01-12 15:14 ` [PATCH 1/5] io_uring/register: have io_parse_restrictions() return number of ops Jens Axboe
2026-01-12 15:14 ` [PATCH 2/5] io_uring/register: have io_parse_restrictions() set restrictions enabled Jens Axboe
2026-01-12 15:14 ` [PATCH 3/5] io_uring/register: set ctx->restricted when restrictions are parsed Jens Axboe
2026-01-12 15:14 ` [PATCH 4/5] io_uring: move ctx->restricted check into io_check_restriction() Jens Axboe
2026-01-12 15:14 ` [PATCH 5/5] io_uring: track restrictions separately for IORING_OP and IORING_REGISTER Jens Axboe
2026-01-13 17:27 ` [PATCHSET 0/5] io_uring restrictions cleanups and improvements Gabriel Krisman Bertazi
2026-01-13 18:22   ` Jens Axboe
2026-01-13 18:23 ` Jens Axboe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox