* [PATCH for-next 0/4] io_uring: cleanup allow_overflow on post_cqe
@ 2022-11-07 12:52 Dylan Yudaken
  0 siblings, 5 replies; 6+ messages in thread
From: Dylan Yudaken @ 2022-11-07 12:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Previously, CQE ordering could be broken in multishot if there was an
overflow, and so the multishot was stopped in overflow. However since
Pavel's change in commit aa1df3a360a0 ("io_uring: fix CQE reordering"),
there is no risk of out of order completions being received by
userspace. So we can now clean up this code.

Dylan Yudaken (4):
  io_uring: revert "io_uring fix multishot accept ordering"
  io_uring: revert "io_uring: fix multishot poll on overflow"
  io_uring: allow multishot recv CQEs to overflow
  io_uring: remove allow_overflow parameter

 io_uring/io_uring.c | 13 ++++---------
 io_uring/io_uring.h |  6 ++----
 io_uring/msg_ring.c |  4 ++--
 io_uring/net.c      | 19 ++++++-------------
 io_uring/poll.c     |  6 ++----
 io_uring/rsrc.c     |  4 ++--
 6 files changed, 18 insertions(+), 34 deletions(-)

base-commit: 765d0e263fccc8b22efef8258c3260e9d0ecf632
-- 
2.30.2
* [PATCH for-next 1/4] io_uring: revert "io_uring fix multishot accept ordering"
From: Dylan Yudaken @ 2022-11-07 12:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

This is no longer needed after commit aa1df3a360a0 ("io_uring: fix CQE
reordering"), since all reordering is now taken care of.

This reverts commit cbd25748545c ("io_uring: fix multishot accept
ordering").

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/net.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 9a07e79cc0e6..0d77ddcce0af 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1325,14 +1325,11 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 		return IOU_OK;
 	}
 
-	if (ret >= 0 &&
-	    io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE, false))
+	if (ret < 0)
+		return ret;
+	if (io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE, true))
 		goto retry;
-
-	io_req_set_res(req, ret, 0);
-	if (req->flags & REQ_F_POLLED)
-		return IOU_STOP_MULTISHOT;
-	return IOU_OK;
+	return -ECANCELED;
 }
 
 int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
-- 
2.30.2
* [PATCH for-next 2/4] io_uring: revert "io_uring: fix multishot poll on overflow"
From: Dylan Yudaken @ 2022-11-07 12:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

This is no longer needed after commit aa1df3a360a0 ("io_uring: fix CQE
reordering"), since all reordering is now taken care of.

This reverts commit a2da676376fe ("io_uring: fix multishot poll on
overflow").

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/poll.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 589b60fc740a..e1b8652b670f 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -244,10 +244,8 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
 					    req->apoll_events);
 
 			if (!io_post_aux_cqe(ctx, req->cqe.user_data,
-					     mask, IORING_CQE_F_MORE, false)) {
-				io_req_set_res(req, mask, 0);
-				return IOU_POLL_REMOVE_POLL_USE_RES;
-			}
+					     mask, IORING_CQE_F_MORE, true))
+				return -ECANCELED;
 		} else {
 			ret = io_poll_issue(req, locked);
 			if (ret == IOU_STOP_MULTISHOT)
-- 
2.30.2
* [PATCH for-next 3/4] io_uring: allow multishot recv CQEs to overflow
From: Dylan Yudaken @ 2022-11-07 12:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

With commit aa1df3a360a0 ("io_uring: fix CQE reordering"), there are
stronger guarantees for overflow ordering: userspace will not receive
out-of-order receive CQEs. Stopping multishot on overflow is therefore
no longer needed for recv/recvmsg.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/net.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 0d77ddcce0af..4b79b61f5597 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -603,15 +603,11 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 
 	if (!mshot_finished) {
 		if (io_post_aux_cqe(req->ctx, req->cqe.user_data, *ret,
-				    cflags | IORING_CQE_F_MORE, false)) {
+				    cflags | IORING_CQE_F_MORE, true)) {
 			io_recv_prep_retry(req);
 			return false;
 		}
-		/*
-		 * Otherwise stop multishot but use the current result.
-		 * Probably will end up going into overflow, but this means
-		 * we cannot trust the ordering anymore
-		 */
+		/* Otherwise stop multishot but use the current result. */
 	}
 
 	io_req_set_res(req, *ret, cflags);
-- 
2.30.2
* [PATCH for-next 4/4] io_uring: remove allow_overflow parameter
From: Dylan Yudaken @ 2022-11-07 12:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

It is now always true, so just remove it.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 13 ++++---------
 io_uring/io_uring.h |  6 ++----
 io_uring/msg_ring.c |  4 ++--
 io_uring/net.c      |  4 ++--
 io_uring/poll.c     |  2 +-
 io_uring/rsrc.c     |  4 ++--
 6 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index db0dec120f09..47631cab6517 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -773,8 +773,7 @@ struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow)
 	return &rings->cqes[off];
 }
 
-bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags,
-		     bool allow_overflow)
+bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
 {
 	struct io_uring_cqe *cqe;
 
@@ -800,20 +799,16 @@ bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 		return true;
 	}
 
-	if (allow_overflow)
-		return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
-
-	return false;
+	return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
 }
 
 bool io_post_aux_cqe(struct io_ring_ctx *ctx,
-		     u64 user_data, s32 res, u32 cflags,
-		     bool allow_overflow)
+		     u64 user_data, s32 res, u32 cflags)
 {
 	bool filled;
 
 	io_cq_lock(ctx);
-	filled = io_fill_cqe_aux(ctx, user_data, res, cflags, allow_overflow);
+	filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
 	io_cq_unlock_post(ctx);
 	return filled;
 }
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index e99a79f2df9b..d14534a2f8e7 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -33,10 +33,8 @@ void io_req_complete_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
 void io_req_complete_post(struct io_kiocb *req);
 void __io_req_complete_post(struct io_kiocb *req);
-bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags,
-		     bool allow_overflow);
-bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags,
-		     bool allow_overflow);
+bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
+bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 90d2fc6fd80e..afb543aab9f6 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -31,7 +31,7 @@ static int io_msg_ring_data(struct io_kiocb *req)
 	if (msg->src_fd || msg->dst_fd || msg->flags)
 		return -EINVAL;
 
-	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0, true))
+	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
 		return 0;
 
 	return -EOVERFLOW;
@@ -116,7 +116,7 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
 	 * completes with -EOVERFLOW, then the sender must ensure that a
 	 * later IORING_OP_MSG_RING delivers the message.
 	 */
-	if (!io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0, true))
+	if (!io_post_aux_cqe(target_ctx, msg->user_data, msg->len, 0))
 		ret = -EOVERFLOW;
 out_unlock:
 	io_double_unlock_ctx(ctx, target_ctx, issue_flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index 4b79b61f5597..a1a0b8f223e0 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -603,7 +603,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 
 	if (!mshot_finished) {
 		if (io_post_aux_cqe(req->ctx, req->cqe.user_data, *ret,
-				    cflags | IORING_CQE_F_MORE, true)) {
+				    cflags | IORING_CQE_F_MORE)) {
 			io_recv_prep_retry(req);
 			return false;
 		}
@@ -1323,7 +1323,7 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret < 0)
 		return ret;
-	if (io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE, true))
+	if (io_post_aux_cqe(ctx, req->cqe.user_data, ret, IORING_CQE_F_MORE))
 		goto retry;
 	return -ECANCELED;
 }
diff --git a/io_uring/poll.c b/io_uring/poll.c
index e1b8652b670f..d00c8dc76d34 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -244,7 +244,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
 					    req->apoll_events);
 
 			if (!io_post_aux_cqe(ctx, req->cqe.user_data,
-					     mask, IORING_CQE_F_MORE, true))
+					     mask, IORING_CQE_F_MORE))
 				return -ECANCELED;
 		} else {
 			ret = io_poll_issue(req, locked);
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 55d4ab96fb92..a10c1ea51933 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -170,10 +170,10 @@ static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
 		if (prsrc->tag) {
 			if (ctx->flags & IORING_SETUP_IOPOLL) {
 				mutex_lock(&ctx->uring_lock);
-				io_post_aux_cqe(ctx, prsrc->tag, 0, 0, true);
+				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
 				mutex_unlock(&ctx->uring_lock);
 			} else {
-				io_post_aux_cqe(ctx, prsrc->tag, 0, 0, true);
+				io_post_aux_cqe(ctx, prsrc->tag, 0, 0);
 			}
 		}
-- 
2.30.2
* Re: [PATCH for-next 0/4] io_uring: cleanup allow_overflow on post_cqe
From: Jens Axboe @ 2022-11-07 20:18 UTC (permalink / raw)
  To: Pavel Begunkov, Dylan Yudaken; +Cc: kernel-team, io-uring

On Mon, 7 Nov 2022 04:52:32 -0800, Dylan Yudaken wrote:
> Previously, CQE ordering could be broken in multishot if there was an
> overflow, and so the multishot was stopped in overflow. However since
> Pavel's change in commit aa1df3a360a0 ("io_uring: fix CQE reordering"),
> there is no risk of out of order completions being received by userspace.
> 
> So we can now clean up this code.
> 
> [...]

Applied, thanks!

[1/4] io_uring: revert "io_uring fix multishot accept ordering"
      commit: 01661287389d6ab44150c4c05ff3910a12681790
[2/4] io_uring: revert "io_uring: fix multishot poll on overflow"
      commit: 7bf3f5a6acfb5c2daaf7657b28c73eee7ed5db8b
[3/4] io_uring: allow multishot recv CQEs to overflow
      commit: beecb96e259f0d8e59f8bbebc6b007084f87d66d
[4/4] io_uring: remove allow_overflow parameter
      commit: 6488182c989ac73a18dd83539d57a5afd52815ef

Best regards,
-- 
Jens Axboe