public inbox for [email protected]
* [PATCH 0/7] random fixes and cleanups
@ 2023-08-11 12:53 Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 1/7] io_uring/net: don't overflow multishot accept Pavel Begunkov
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Patches 1 and 2 don't allow multishot recv and accept to overflow
CQEs indefinitely; the concept we always try to stick to is
that the request should complete, and userspace then has to empty
the CQ and reissue.
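
To illustrate, a rough sketch of the expected userspace loop (liburing,
given an initialised struct io_uring ring; handle_completion() and
rearm_multishot() are hypothetical helpers):

	struct io_uring_cqe *cqe;

	while (io_uring_wait_cqe(&ring, &cqe) == 0) {
		unsigned flags = cqe->flags;

		handle_completion(cqe);		/* consume the result */
		io_uring_cqe_seen(&ring, cqe);	/* frees up a CQ slot */
		if (!(flags & IORING_CQE_F_MORE)) {
			/* the multishot request has completed, e.g.
			 * because the CQ filled up; rearm and resubmit */
			rearm_multishot(&ring);
			io_uring_submit(&ring);
		}
	}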

Note, this breaks test/recv-multishot. I consider the test to be
in the wrong: it should not rely on the overflow behaviour, and
I'm going to fix it.

Patches 3-7 are simple intermediate cleanups on top.

Pavel Begunkov (7):
  io_uring/net: don't overflow multishot accept
  io_uring/net: don't overflow multishot recv
  io_uring: open code io_fill_cqe_req()
  io_uring: remove return from io_req_cqe_overflow()
  io_uring: never overflow io_aux_cqe
  io_uring/rsrc: keep one global dummy_ubuf
  io_uring: simplify io_run_task_work_sig return

 io_uring/io_uring.c | 40 ++++++++++++++++++----------------------
 io_uring/io_uring.h | 16 +++-------------
 io_uring/net.c      |  8 ++++----
 io_uring/poll.c     |  4 ++--
 io_uring/rsrc.c     | 14 ++++++++++----
 io_uring/rw.c       |  2 +-
 io_uring/timeout.c  |  4 ++--
 7 files changed, 40 insertions(+), 48 deletions(-)

-- 
2.41.0



* [PATCH 1/7] io_uring/net: don't overflow multishot accept
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 2/7] io_uring/net: don't overflow multishot recv Pavel Begunkov
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Don't allow overflowing multishot accept CQEs; we want to limit
the growth of the overflow list.

Cc: [email protected]
Fixes: 4e86a2c980137 ("io_uring: implement multishot mode for accept")
Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index eb1f51ddcb23..1599493544a5 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1367,7 +1367,7 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 	if (ret < 0)
 		return ret;
 	if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER, ret,
-		       IORING_CQE_F_MORE, true))
+		       IORING_CQE_F_MORE, false))
 		goto retry;
 
 	return -ECANCELED;
-- 
2.41.0



* [PATCH 2/7] io_uring/net: don't overflow multishot recv
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 1/7] io_uring/net: don't overflow multishot accept Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-09-14  8:34   ` Jiri Slaby
  2023-08-11 12:53 ` [PATCH 3/7] io_uring: open code io_fill_cqe_req() Pavel Begunkov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Don't allow overflowing multishot recv CQEs: it might get out of
hand, hurt performance, and in the worst case OOM the task.

Cc: [email protected]
Fixes: b3fdea6ecb55c ("io_uring: multishot recv")
Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 1599493544a5..8c419c01a5db 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -642,7 +642,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 
 	if (!mshot_finished) {
 		if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
-			       *ret, cflags | IORING_CQE_F_MORE, true)) {
+			       *ret, cflags | IORING_CQE_F_MORE, false)) {
 			io_recv_prep_retry(req);
 			/* Known not-empty or unknown state, retry */
 			if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
-- 
2.41.0



* [PATCH 3/7] io_uring: open code io_fill_cqe_req()
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 1/7] io_uring/net: don't overflow multishot accept Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 2/7] io_uring/net: don't overflow multishot recv Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 4/7] io_uring: remove return from io_req_cqe_overflow() Pavel Begunkov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

io_fill_cqe_req() is only called from one place; open code it there
and rename __io_fill_cqe_req() to io_fill_cqe_req().

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c |  8 +++++---
 io_uring/io_uring.h | 11 +----------
 io_uring/rw.c       |  2 +-
 3 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 047576bc98d0..e969b4ca1c47 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -978,8 +978,10 @@ static void __io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 	struct io_rsrc_node *rsrc_node = NULL;
 
 	io_cq_lock(ctx);
-	if (!(req->flags & REQ_F_CQE_SKIP))
-		io_fill_cqe_req(ctx, req);
+	if (!(req->flags & REQ_F_CQE_SKIP)) {
+		if (!io_fill_cqe_req(ctx, req))
+			io_req_cqe_overflow(req);
+	}
 
 	/*
 	 * If we're the last reference to this request, add to our locked
@@ -1556,7 +1558,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 					    comp_list);
 
 		if (!(req->flags & REQ_F_CQE_SKIP) &&
-		    unlikely(!__io_fill_cqe_req(ctx, req))) {
+		    unlikely(!io_fill_cqe_req(ctx, req))) {
 			if (ctx->task_complete) {
 				spin_lock(&ctx->completion_lock);
 				io_req_cqe_overflow(req);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index ff153af28236..3aa208fbe905 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -133,8 +133,7 @@ static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 	return io_get_cqe_overflow(ctx, false);
 }
 
-static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
-				     struct io_kiocb *req)
+static inline bool io_fill_cqe_req(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
 	struct io_uring_cqe *cqe;
 
@@ -168,14 +167,6 @@ static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
 	return true;
 }
 
-static inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
-				   struct io_kiocb *req)
-{
-	if (likely(__io_fill_cqe_req(ctx, req)))
-		return true;
-	return io_req_cqe_overflow(req);
-}
-
 static inline void req_set_fail(struct io_kiocb *req)
 {
 	req->flags |= REQ_F_FAIL;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 1bce2208b65c..9b51afdae505 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1064,7 +1064,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			continue;
 
 		req->cqe.flags = io_put_kbuf(req, 0);
-		if (unlikely(!__io_fill_cqe_req(ctx, req))) {
+		if (unlikely(!io_fill_cqe_req(ctx, req))) {
 			spin_lock(&ctx->completion_lock);
 			io_req_cqe_overflow(req);
 			spin_unlock(&ctx->completion_lock);
-- 
2.41.0



* [PATCH 4/7] io_uring: remove return from io_req_cqe_overflow()
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
                   ` (2 preceding siblings ...)
  2023-08-11 12:53 ` [PATCH 3/7] io_uring: open code io_fill_cqe_req() Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 5/7] io_uring: never overflow io_aux_cqe Pavel Begunkov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Nobody checks io_req_cqe_overflow()'s return value, so make it return
void.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c | 8 ++++----
 io_uring/io_uring.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e969b4ca1c47..7595658a5073 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -813,15 +813,15 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
 	return true;
 }
 
-bool io_req_cqe_overflow(struct io_kiocb *req)
+void io_req_cqe_overflow(struct io_kiocb *req)
 {
 	if (!(req->flags & REQ_F_CQE32_INIT)) {
 		req->extra1 = 0;
 		req->extra2 = 0;
 	}
-	return io_cqring_event_overflow(req->ctx, req->cqe.user_data,
-					req->cqe.res, req->cqe.flags,
-					req->extra1, req->extra2);
+	io_cqring_event_overflow(req->ctx, req->cqe.user_data,
+				req->cqe.res, req->cqe.flags,
+				req->extra1, req->extra2);
 }
 
 /*
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 3aa208fbe905..3dc0b6fb0ef7 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -39,7 +39,7 @@ enum {
 };
 
 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow);
-bool io_req_cqe_overflow(struct io_kiocb *req);
+void io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags);
-- 
2.41.0



* [PATCH 5/7] io_uring: never overflow io_aux_cqe
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
                   ` (3 preceding siblings ...)
  2023-08-11 12:53 ` [PATCH 4/7] io_uring: remove return from io_req_cqe_overflow() Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 6/7] io_uring/rsrc: keep one global dummy_ubuf Pavel Begunkov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Now that all callers of io_aux_cqe() set allow_overflow to false,
remove the parameter and disallow overflowing auxiliary multishot
CQEs. While at it, rename the helper to io_fill_cqe_req_aux().

When the CQ is full, the callers, and multishot requests in general,
are expected to complete the request. That prevents indefinite
background growth of the overflow list and lets userspace handle the
backlog at its own pace.

Resubmitting a request should also be faster than accounting a bunch
of overflows, so it should be better for performance when it happens,
but a well-behaved userspace should try to avoid overflows in any
case.
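
For illustration, the resulting caller pattern, sketched from the
io_accept() hunk below:

	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
				ret, IORING_CQE_F_MORE))
		goto retry;	/* CQE posted, multishot stays armed */
	/* CQ is full: complete the request; userspace gets a final CQE
	 * without IORING_CQE_F_MORE and is expected to reissue */
	return -ECANCELED;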

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c | 11 +++++++----
 io_uring/io_uring.h |  3 +--
 io_uring/net.c      |  8 ++++----
 io_uring/poll.c     |  4 ++--
 io_uring/timeout.c  |  4 ++--
 5 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7595658a5073..e57d00939ab9 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -939,15 +939,18 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 	return __io_post_aux_cqe(ctx, user_data, res, cflags, true);
 }
 
-bool io_aux_cqe(const struct io_kiocb *req, bool defer, s32 res, u32 cflags,
-		bool allow_overflow)
+/*
+ * A helper for multishot requests posting additional CQEs.
+ * Should only be used from a task_work including IO_URING_F_MULTISHOT.
+ */
+bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	u64 user_data = req->cqe.user_data;
 	struct io_uring_cqe *cqe;
 
 	if (!defer)
-		return __io_post_aux_cqe(ctx, user_data, res, cflags, allow_overflow);
+		return __io_post_aux_cqe(ctx, user_data, res, cflags, false);
 
 	lockdep_assert_held(&ctx->uring_lock);
 
@@ -962,7 +965,7 @@ bool io_aux_cqe(const struct io_kiocb *req, bool defer, s32 res, u32 cflags,
 	 * however it's main job is to prevent unbounded posted completions,
 	 * and in that it works just as well.
 	 */
-	if (!allow_overflow && test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
+	if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
 		return false;
 
 	cqe = &ctx->submit_state.cqes[ctx->submit_state.cqes_count++];
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 3dc0b6fb0ef7..3e6ff3cd9a24 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -44,8 +44,7 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-bool io_aux_cqe(const struct io_kiocb *req, bool defer, s32 res, u32 cflags,
-		bool allow_overflow);
+bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
diff --git a/io_uring/net.c b/io_uring/net.c
index 8c419c01a5db..3d07bf79c1e0 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -641,8 +641,8 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 	}
 
 	if (!mshot_finished) {
-		if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
-			       *ret, cflags | IORING_CQE_F_MORE, false)) {
+		if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
+					*ret, cflags | IORING_CQE_F_MORE)) {
 			io_recv_prep_retry(req);
 			/* Known not-empty or unknown state, retry */
 			if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
@@ -1366,8 +1366,8 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret < 0)
 		return ret;
-	if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER, ret,
-		       IORING_CQE_F_MORE, false))
+	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
+				ret, IORING_CQE_F_MORE))
 		goto retry;
 
 	return -ECANCELED;
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 65ec363f6377..4c360ba8793a 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -300,8 +300,8 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
 			__poll_t mask = mangle_poll(req->cqe.res &
 						    req->apoll_events);
 
-			if (!io_aux_cqe(req, ts->locked, mask,
-					IORING_CQE_F_MORE, false)) {
+			if (!io_fill_cqe_req_aux(req, ts->locked, mask,
+						 IORING_CQE_F_MORE)) {
 				io_req_set_res(req, mask, 0);
 				return IOU_POLL_REMOVE_POLL_USE_RES;
 			}
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 6242130e73c6..7fd7dbb211d6 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -73,8 +73,8 @@ static void io_timeout_complete(struct io_kiocb *req, struct io_tw_state *ts)
 
 	if (!io_timeout_finish(timeout, data)) {
 		bool filled;
-		filled = io_aux_cqe(req, ts->locked, -ETIME, IORING_CQE_F_MORE,
-				    false);
+		filled = io_fill_cqe_req_aux(req, ts->locked, -ETIME,
+					     IORING_CQE_F_MORE);
 		if (filled) {
 			/* re-arm timer */
 			spin_lock_irq(&ctx->timeout_lock);
-- 
2.41.0



* [PATCH 6/7] io_uring/rsrc: keep one global dummy_ubuf
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
                   ` (4 preceding siblings ...)
  2023-08-11 12:53 ` [PATCH 5/7] io_uring: never overflow io_aux_cqe Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 12:53 ` [PATCH 7/7] io_uring: simplify io_run_task_work_sig return Pavel Begunkov
  2023-08-11 16:43 ` [PATCH 0/7] random fixes and cleanups Jens Axboe
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

We set empty registered buffers to dummy_ubuf as an optimisation.
Currently, we allocate the dummy entry for each ring, whereas we can
simply have one global instance.

We cast away const on assignment; that's fine because we're never
going to change the contents of the dummy, and the constness gives us
an extra layer of protection if something ever goes wrong.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c |  9 ---------
 io_uring/rsrc.c     | 14 ++++++++++----
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e57d00939ab9..a7a4d637aee0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -290,13 +290,6 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 		goto err;
 	if (io_alloc_hash_table(&ctx->cancel_table_locked, hash_bits))
 		goto err;
-
-	ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
-	if (!ctx->dummy_ubuf)
-		goto err;
-	/* set invalid range, so io_import_fixed() fails meeting it */
-	ctx->dummy_ubuf->ubuf = -1UL;
-
 	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
 			    0, GFP_KERNEL))
 		goto err;
@@ -335,7 +328,6 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
 	return ctx;
 err:
-	kfree(ctx->dummy_ubuf);
 	kfree(ctx->cancel_table.hbs);
 	kfree(ctx->cancel_table_locked.hbs);
 	kfree(ctx->io_bl);
@@ -2897,7 +2889,6 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 		io_wq_put_hash(ctx->hash_map);
 	kfree(ctx->cancel_table.hbs);
 	kfree(ctx->cancel_table_locked.hbs);
-	kfree(ctx->dummy_ubuf);
 	kfree(ctx->io_bl);
 	xa_destroy(&ctx->io_bl_xa);
 	kfree(ctx);
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 5e8fdd9b8ca6..d9c853d10587 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -33,6 +33,12 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
 #define IORING_MAX_FIXED_FILES	(1U << 20)
 #define IORING_MAX_REG_BUFFERS	(1U << 14)
 
+static const struct io_mapped_ubuf dummy_ubuf = {
+	/* set invalid range, so io_import_fixed() fails meeting it */
+	.ubuf = -1UL,
+	.ubuf_end = 0,
+};
+
 int __io_account_mem(struct user_struct *user, unsigned long nr_pages)
 {
 	unsigned long page_limit, cur_pages, new_pages;
@@ -132,7 +138,7 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slo
 	struct io_mapped_ubuf *imu = *slot;
 	unsigned int i;
 
-	if (imu != ctx->dummy_ubuf) {
+	if (imu != &dummy_ubuf) {
 		for (i = 0; i < imu->nr_bvecs; i++)
 			unpin_user_page(imu->bvec[i].bv_page);
 		if (imu->acct_pages)
@@ -459,14 +465,14 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
 			break;
 
 		i = array_index_nospec(up->offset + done, ctx->nr_user_bufs);
-		if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
+		if (ctx->user_bufs[i] != &dummy_ubuf) {
 			err = io_queue_rsrc_removal(ctx->buf_data, i,
 						    ctx->user_bufs[i]);
 			if (unlikely(err)) {
 				io_buffer_unmap(ctx, &imu);
 				break;
 			}
-			ctx->user_bufs[i] = ctx->dummy_ubuf;
+			ctx->user_bufs[i] = (struct io_mapped_ubuf *)&dummy_ubuf;
 		}
 
 		ctx->user_bufs[i] = imu;
@@ -1077,7 +1083,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
 	int ret, nr_pages, i;
 	struct folio *folio = NULL;
 
-	*pimu = ctx->dummy_ubuf;
+	*pimu = (struct io_mapped_ubuf *)&dummy_ubuf;
 	if (!iov->iov_base)
 		return 0;
 
-- 
2.41.0



* [PATCH 7/7] io_uring: simplify io_run_task_work_sig return
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
                   ` (5 preceding siblings ...)
  2023-08-11 12:53 ` [PATCH 6/7] io_uring/rsrc: keep one global dummy_ubuf Pavel Begunkov
@ 2023-08-11 12:53 ` Pavel Begunkov
  2023-08-11 16:43 ` [PATCH 0/7] random fixes and cleanups Jens Axboe
  7 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2023-08-11 12:53 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Nobody cares about io_run_task_work_sig() returning 1; we only check
for negative errors. Simplify it by keeping to 0/-error returns.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/io_uring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a7a4d637aee0..e189158ebbdd 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2477,10 +2477,10 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
 	if (!llist_empty(&ctx->work_llist)) {
 		__set_current_state(TASK_RUNNING);
 		if (io_run_local_work(ctx) > 0)
-			return 1;
+			return 0;
 	}
 	if (io_run_task_work() > 0)
-		return 1;
+		return 0;
 	if (task_sigpending(current))
 		return -EINTR;
 	return 0;
-- 
2.41.0



* Re: [PATCH 0/7] random fixes and cleanups
  2023-08-11 12:53 [PATCH 0/7] random fixes and cleanups Pavel Begunkov
                   ` (6 preceding siblings ...)
  2023-08-11 12:53 ` [PATCH 7/7] io_uring: simplify io_run_task_work_sig return Pavel Begunkov
@ 2023-08-11 16:43 ` Jens Axboe
  7 siblings, 0 replies; 12+ messages in thread
From: Jens Axboe @ 2023-08-11 16:43 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov


On Fri, 11 Aug 2023 13:53:40 +0100, Pavel Begunkov wrote:
> Patches 1 and 2 don't allow multishot recv and accept to overflow
> CQEs indefinitely; the concept we always try to stick to is
> that the request should complete, and userspace then has to empty
> the CQ and reissue.
> 
> Note, this breaks test/recv-multishot. I consider the test to be
> in the wrong: it should not rely on the overflow behaviour, and
> I'm going to fix it.
> 
> [...]

Applied, thanks!

[1/7] io_uring/net: don't overflow multishot accept
      commit: 1bfed23349716a7811645336a7ce42c4b8f250bc
[2/7] io_uring/net: don't overflow multishot recv
      commit: b2e74db55dd93d6db22a813c9a775b5dbf87c560
[3/7] io_uring: open code io_fill_cqe_req()
      commit: 00b0db562485fbb259cd4054346208ad0885d662
[4/7] io_uring: remove return from io_req_cqe_overflow()
      commit: 056695bffa4beed5668dd4aa11efb696eacb3ed9
[5/7] io_uring: never overflow io_aux_cqe
      commit: b6b2bb58a75407660f638a68e6e34a07036146d0
[6/7] io_uring/rsrc: keep one global dummy_ubuf
      commit: 19a63c4021702e389a559726b16fcbf07a8a05f9
[7/7] io_uring: simplify io_run_task_work_sig return
      commit: d246c759c47eafe4688613e89b337e48c39c5968

Best regards,
-- 
Jens Axboe





* Re: [PATCH 2/7] io_uring/net: don't overflow multishot recv
  2023-08-11 12:53 ` [PATCH 2/7] io_uring/net: don't overflow multishot recv Pavel Begunkov
@ 2023-09-14  8:34   ` Jiri Slaby
  2023-09-14 13:02     ` Pavel Begunkov
  0 siblings, 1 reply; 12+ messages in thread
From: Jiri Slaby @ 2023-09-14  8:34 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 11. 08. 23, 14:53, Pavel Begunkov wrote:
> Don't allow overflowing multishot recv CQEs: it might get out of
> hand, hurt performance, and in the worst case OOM the task.
> 
> Cc: [email protected]
> Fixes: b3fdea6ecb55c ("io_uring: multishot recv")
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>   io_uring/net.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/io_uring/net.c b/io_uring/net.c
> index 1599493544a5..8c419c01a5db 100644
> --- a/io_uring/net.c
> +++ b/io_uring/net.c
> @@ -642,7 +642,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
>   
>   	if (!mshot_finished) {
>   		if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
> -			       *ret, cflags | IORING_CQE_F_MORE, true)) {
> +			       *ret, cflags | IORING_CQE_F_MORE, false)) {

This one breaks liburing's recv-multishot.t test:
Running test recv-multishot.t                                       MORE flag not set
test stream=0 wait_each=0 recvmsg=0 early_error=4  defer=0 failed
Test recv-multishot.t failed with ret 1

Is the commit or the test broken ;)?

>   			io_recv_prep_retry(req);
>   			/* Known not-empty or unknown state, retry */
>   			if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||

thanks,
-- 
js
suse labs



* Re: [PATCH 2/7] io_uring/net: don't overflow multishot recv
  2023-09-14  8:34   ` Jiri Slaby
@ 2023-09-14 13:02     ` Pavel Begunkov
  2023-09-14 14:15       ` Jens Axboe
  0 siblings, 1 reply; 12+ messages in thread
From: Pavel Begunkov @ 2023-09-14 13:02 UTC (permalink / raw)
  To: Jiri Slaby, io-uring; +Cc: Jens Axboe

On 9/14/23 09:34, Jiri Slaby wrote:
> On 11. 08. 23, 14:53, Pavel Begunkov wrote:
>> Don't allow overflowing multishot recv CQEs: it might get out of
>> hand, hurt performance, and in the worst case OOM the task.
>>
>> Cc: [email protected]
>> Fixes: b3fdea6ecb55c ("io_uring: multishot recv")
>> Signed-off-by: Pavel Begunkov <[email protected]>
>> ---
>>   io_uring/net.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/io_uring/net.c b/io_uring/net.c
>> index 1599493544a5..8c419c01a5db 100644
>> --- a/io_uring/net.c
>> +++ b/io_uring/net.c
>> @@ -642,7 +642,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
>>       if (!mshot_finished) {
>>           if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
>> -                   *ret, cflags | IORING_CQE_F_MORE, true)) {
>> +                   *ret, cflags | IORING_CQE_F_MORE, false)) {
> 
> This one breaks liburing's recv-multishot.t test:
> Running test recv-multishot.t                                       MORE flag not set
> test stream=0 wait_each=0 recvmsg=0 early_error=4  defer=0 failed
> Test recv-multishot.t failed with ret 1
> 
> Is the commit or the test broken ;)?

The test is not right. I fixed it up while sending the kernel patch,
but a new liburing version hasn't been cut yet, and I assume you're
not using the upstream version?
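
Roughly, the fixed test has to treat a missing IORING_CQE_F_MORE as
legitimate termination and reissue, rather than rely on overflowed
CQEs. A sketch, not the actual test code (reissue_recv() stands in
for the test's resubmission logic):

	if (!(cqe->flags & IORING_CQE_F_MORE)) {
		/* multishot terminated, e.g. because the CQ was
		 * full: reissue instead of failing the test */
		reissue_recv(ring);
	}
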

> 
>>               io_recv_prep_retry(req);
>>               /* Known not-empty or unknown state, retry */
>>               if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
> 
> thanks,

-- 
Pavel Begunkov


* Re: [PATCH 2/7] io_uring/net: don't overflow multishot recv
  2023-09-14 13:02     ` Pavel Begunkov
@ 2023-09-14 14:15       ` Jens Axboe
  0 siblings, 0 replies; 12+ messages in thread
From: Jens Axboe @ 2023-09-14 14:15 UTC (permalink / raw)
  To: Pavel Begunkov, Jiri Slaby, io-uring

On 9/14/23 7:02 AM, Pavel Begunkov wrote:
> On 9/14/23 09:34, Jiri Slaby wrote:
>> On 11. 08. 23, 14:53, Pavel Begunkov wrote:
>>> Don't allow overflowing multishot recv CQEs: it might get out of
>>> hand, hurt performance, and in the worst case OOM the task.
>>>
>>> Cc: [email protected]
>>> Fixes: b3fdea6ecb55c ("io_uring: multishot recv")
>>> Signed-off-by: Pavel Begunkov <[email protected]>
>>> ---
>>>   io_uring/net.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/io_uring/net.c b/io_uring/net.c
>>> index 1599493544a5..8c419c01a5db 100644
>>> --- a/io_uring/net.c
>>> +++ b/io_uring/net.c
>>> @@ -642,7 +642,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
>>>       if (!mshot_finished) {
>>>           if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
>>> -                   *ret, cflags | IORING_CQE_F_MORE, true)) {
>>> +                   *ret, cflags | IORING_CQE_F_MORE, false)) {
>>
>> This one breaks liburing's recv-multishot.t test:
>> Running test recv-multishot.t                                       MORE flag not set
>> test stream=0 wait_each=0 recvmsg=0 early_error=4  defer=0 failed
>> Test recv-multishot.t failed with ret 1
>>
>> Is the commit or the test broken ;)?
> 
> The test is not right. I fixed it up while sending the kernel patch,
> but a new liburing version hasn't been cut yet and I assume you're
> not using upstream version?

I'll probably cut a new release soonish, we're after the merge window
now so whatever went into 6.6-rc should be solid/sane API wise.

-- 
Jens Axboe


