public inbox for io-uring@vger.kernel.org
* [PATCH 1/1] io_uring/net: fix io_req_post_cqe abuse by send bundle
@ 2025-03-26 22:21 Pavel Begunkov
  2025-03-26 22:29 ` Pavel Begunkov
  0 siblings, 1 reply; 2+ messages in thread
From: Pavel Begunkov @ 2025-03-26 22:21 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

[  114.987980][ T5313] WARNING: CPU: 6 PID: 5313 at io_uring/io_uring.c:872 io_req_post_cqe+0x12e/0x4f0
[  114.991597][ T5313] RIP: 0010:io_req_post_cqe+0x12e/0x4f0
[  115.001880][ T5313] Call Trace:
[  115.002222][ T5313]  <TASK>
[  115.007813][ T5313]  io_send+0x4fe/0x10f0
[  115.009317][ T5313]  io_issue_sqe+0x1a6/0x1740
[  115.012094][ T5313]  io_wq_submit_work+0x38b/0xed0
[  115.013223][ T5313]  io_worker_handle_work+0x62a/0x1600
[  115.013876][ T5313]  io_wq_worker+0x34f/0xdf0

As the comment states, io_req_post_cqe() should only be used by
multishot requests, i.e. REQ_F_APOLL_MULTISHOT, which bundled sends are
not. Add a flag signifying whether a request wants to post multiple
CQEs. Eventually REQ_F_APOLL_MULTISHOT should imply the new flag, but
that's left out for simplicity.

Cc: stable@vger.kernel.org
Fixes: a05d1f625c7aa ("io_uring/net: support bundles for send")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 include/linux/io_uring_types.h | 3 +++
 io_uring/io_uring.c            | 5 +++--
 io_uring/net.c                 | 1 +
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 699e2c0895ae..b44d201520d8 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -490,6 +490,7 @@ enum {
 	REQ_F_SKIP_LINK_CQES_BIT,
 	REQ_F_SINGLE_POLL_BIT,
 	REQ_F_DOUBLE_POLL_BIT,
+	REQ_F_MULTISHOT_BIT,
 	REQ_F_APOLL_MULTISHOT_BIT,
 	REQ_F_CLEAR_POLLIN_BIT,
 	/* keep async read/write and isreg together and in order */
@@ -567,6 +568,8 @@ enum {
 	REQ_F_SINGLE_POLL	= IO_REQ_FLAG(REQ_F_SINGLE_POLL_BIT),
 	/* double poll may active */
 	REQ_F_DOUBLE_POLL	= IO_REQ_FLAG(REQ_F_DOUBLE_POLL_BIT),
+	/* request posts multiple completions, should be set at prep time */
+	REQ_F_MULTISHOT		= IO_REQ_FLAG(REQ_F_MULTISHOT_BIT),
 	/* fast poll multishot mode */
 	REQ_F_APOLL_MULTISHOT	= IO_REQ_FLAG(REQ_F_APOLL_MULTISHOT_BIT),
 	/* recvmsg special flag, clear EPOLLIN */
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4ea684a17d01..c859630474fb 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -870,6 +870,7 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
 	bool posted;
 
 	lockdep_assert(!io_wq_current_is_worker());
+	lockdep_assert(req->flags & (REQ_F_MULTISHOT|REQ_F_APOLL_MULTISHOT));
 	lockdep_assert_held(&ctx->uring_lock);
 
 	__io_cq_lock(ctx);
@@ -1840,7 +1841,7 @@ void io_wq_submit_work(struct io_wq_work *work)
 	 * Don't allow any multishot execution from io-wq. It's more restrictive
 	 * than necessary and also cleaner.
 	 */
-	if (req->flags & REQ_F_APOLL_MULTISHOT) {
+	if (req->flags & (REQ_F_MULTISHOT|REQ_F_APOLL_MULTISHOT)) {
 		err = -EBADFD;
 		if (!io_file_can_poll(req))
 			goto fail;
@@ -1851,7 +1852,7 @@ void io_wq_submit_work(struct io_wq_work *work)
 				goto fail;
 			return;
 		} else {
-			req->flags &= ~REQ_F_APOLL_MULTISHOT;
+			req->flags &= ~(REQ_F_APOLL_MULTISHOT|REQ_F_MULTISHOT);
 		}
 	}
 
diff --git a/io_uring/net.c b/io_uring/net.c
index c0275e7f034a..616e953ef0ae 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -448,6 +448,7 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		sr->msg_flags |= MSG_WAITALL;
 		sr->buf_group = req->buf_index;
 		req->buf_list = NULL;
+		req->flags |= REQ_F_MULTISHOT;
 	}
 
 	if (io_is_compat(req->ctx))
-- 
2.48.1



* Re: [PATCH 1/1] io_uring/net: fix io_req_post_cqe abuse by send bundle
  2025-03-26 22:21 [PATCH 1/1] io_uring/net: fix io_req_post_cqe abuse by send bundle Pavel Begunkov
@ 2025-03-26 22:29 ` Pavel Begunkov
  0 siblings, 0 replies; 2+ messages in thread
From: Pavel Begunkov @ 2025-03-26 22:29 UTC (permalink / raw)
  To: io-uring

On 3/26/25 22:21, Pavel Begunkov wrote:
> [  114.987980][ T5313] WARNING: CPU: 6 PID: 5313 at io_uring/io_uring.c:872 io_req_post_cqe+0x12e/0x4f0
> [  114.991597][ T5313] RIP: 0010:io_req_post_cqe+0x12e/0x4f0
> [  115.001880][ T5313] Call Trace:
> [  115.002222][ T5313]  <TASK>
> [  115.007813][ T5313]  io_send+0x4fe/0x10f0
> [  115.009317][ T5313]  io_issue_sqe+0x1a6/0x1740
> [  115.012094][ T5313]  io_wq_submit_work+0x38b/0xed0
> [  115.013223][ T5313]  io_worker_handle_work+0x62a/0x1600
> [  115.013876][ T5313]  io_wq_worker+0x34f/0xdf0
> 
> As the comment states, io_req_post_cqe() should only be used by
> multishot requests, i.e. REQ_F_APOLL_MULTISHOT, which bundled sends are
> not. Add a flag signifying whether a request wants to post multiple
> CQEs. Eventually REQ_F_APOLL_MULTISHOT should imply the new flag, but
> that's left out for simplicity.

Needs v2

-- 
Pavel Begunkov


