From: Jens Axboe <[email protected]>
To: [email protected]
Cc: [email protected], Jens Axboe <[email protected]>
Subject: [PATCH 5/9] io_uring/msg_ring: avoid double indirection task_work for fd passing
Date: Wed,  5 Jun 2024 07:51:13 -0600	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

Like what was done for MSG_RING data passing, where the double task_work
roundtrip is avoided for IORING_SETUP_DEFER_TASKRUN, implement the same
model for fd passing. File descriptor passing is separately locked anyway,
so the only remaining issue is CQE posting, just like it was for data
passing. For that, the same approach can be used: post the CQE to the
target ring as an overflow entry under its completion lock.
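
For context, a minimal userspace sketch of MSG_RING fd passing follows
(illustration only, not part of this patch). It assumes a liburing
version that provides io_uring_prep_msg_ring_fd(); the helper name
pass_fd(), the user_data value 0x1234, and the index/slot arguments are
made up for the example.

#include <liburing.h>

/*
 * Send registered-file index 'src_idx' from ring 'src' to ring 'dst',
 * installing it into direct descriptor slot 'dst_slot' in the target.
 * A CQE with user_data 0x1234 is posted in the target ring; passing
 * IORING_MSG_RING_CQE_SKIP as the flags argument would suppress it.
 * The sender's CQE res holds the installed slot or a negative error.
 */
static int pass_fd(struct io_uring *src, struct io_uring *dst,
		   int src_idx, int dst_slot)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(src);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;
	io_uring_prep_msg_ring_fd(sqe, dst->ring_fd, src_idx, dst_slot,
				  0x1234, 0);
	io_uring_submit(src);

	ret = io_uring_wait_cqe(src, &cqe);
	if (ret)
		return ret;
	ret = cqe->res;
	io_uring_cqe_seen(src, cqe);
	return ret;
}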

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/msg_ring.c | 57 ++++++++++++++++++++++++---------------------
 1 file changed, 31 insertions(+), 26 deletions(-)

diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 2b649087fe5c..1ee89bdbbb5b 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -71,22 +71,6 @@ static inline bool io_msg_need_remote(struct io_ring_ctx *target_ctx)
 	return target_ctx->task_complete;
 }
 
-static int io_msg_exec_remote(struct io_kiocb *req, task_work_func_t func)
-{
-	struct io_ring_ctx *ctx = req->file->private_data;
-	struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
-	struct task_struct *task = READ_ONCE(ctx->submitter_task);
-
-	if (unlikely(!task))
-		return -EOWNERDEAD;
-
-	init_task_work(&msg->tw, func);
-	if (task_work_add(task, &msg->tw, TWA_SIGNAL))
-		return -EOWNERDEAD;
-
-	return IOU_ISSUE_SKIP_COMPLETE;
-}
-
 static struct io_overflow_cqe *io_alloc_overflow(struct io_ring_ctx *target_ctx)
 {
 	bool is_cqe32 = target_ctx->flags & IORING_SETUP_CQE32;
@@ -227,17 +211,38 @@ static int io_msg_install_complete(struct io_kiocb *req, unsigned int issue_flag
 	return ret;
 }
 
-static void io_msg_tw_fd_complete(struct callback_head *head)
+static int io_msg_install_remote(struct io_kiocb *req, unsigned int issue_flags,
+				 struct io_ring_ctx *target_ctx)
 {
-	struct io_msg *msg = container_of(head, struct io_msg, tw);
-	struct io_kiocb *req = cmd_to_io_kiocb(msg);
-	int ret = -EOWNERDEAD;
+	struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
+	struct io_overflow_cqe *ocqe = NULL;
+	int ret;
 
-	if (!(current->flags & PF_EXITING))
-		ret = io_msg_install_complete(req, IO_URING_F_UNLOCKED);
-	if (ret < 0)
-		req_set_fail(req);
-	io_req_queue_tw_complete(req, ret);
+	if (!(msg->flags & IORING_MSG_RING_CQE_SKIP)) {
+		ocqe = io_alloc_overflow(target_ctx);
+		if (!ocqe)
+			return -ENOMEM;
+	}
+
+	if (unlikely(io_double_lock_ctx(target_ctx, issue_flags))) {
+		kfree(ocqe);
+		return -EAGAIN;
+	}
+
+	ret = __io_fixed_fd_install(target_ctx, msg->src_file, msg->dst_fd);
+	mutex_unlock(&target_ctx->uring_lock);
+
+	if (ret >= 0) {
+		msg->src_file = NULL;
+		req->flags &= ~REQ_F_NEED_CLEANUP;
+		if (ocqe) {
+			spin_lock(&target_ctx->completion_lock);
+			io_msg_add_overflow(msg, target_ctx, ocqe, ret, 0);
+			return 0;
+		}
+	}
+	kfree(ocqe);
+	return ret;
 }
 
 static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
@@ -262,7 +267,7 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
 	}
 
 	if (io_msg_need_remote(target_ctx))
-		return io_msg_exec_remote(req, io_msg_tw_fd_complete);
+		return io_msg_install_remote(req, issue_flags, target_ctx);
 	return io_msg_install_complete(req, issue_flags);
 }
 
-- 
2.43.0



Thread overview: 12+ messages
2024-06-05 13:51 [PATCHSET v3 0/9] Improve MSG_RING DEFER_TASKRUN performance Jens Axboe
2024-06-05 13:51 ` [PATCH 1/9] io_uring/msg_ring: tighten requirement for remote posting Jens Axboe
2024-06-05 13:51 ` [PATCH 2/9] io_uring: keep track of overflow entry count Jens Axboe
2024-06-05 13:51 ` [PATCH 3/9] io_uring: abstract out helpers for DEFER_TASKRUN wakeup batching Jens Axboe
2024-06-05 13:51 ` [PATCH 4/9] io_uring/msg_ring: avoid double indirection task_work for data messages Jens Axboe
2024-06-05 13:51 ` Jens Axboe [this message]
2024-06-05 13:51 ` [PATCH 6/9] io_uring/msg_ring: add an alloc cache for CQE entries Jens Axboe
2024-06-05 13:51 ` [PATCH 7/9] io_uring/msg_ring: remove callback_head from struct io_msg Jens Axboe
2024-06-05 13:51 ` [PATCH 8/9] io_uring/msg_ring: add basic wakeup batch support Jens Axboe
2024-06-05 15:32   ` Pavel Begunkov
2024-06-05 15:50     ` Jens Axboe
2024-06-05 13:51 ` [PATCH 9/9] io_uring/msg_ring: remove non-remote message passing Jens Axboe
