public inbox for io-uring@vger.kernel.org
* [PATCH 0/5] various net improvements
@ 2025-03-31 16:17 Pavel Begunkov
  2025-03-31 16:17 ` [PATCH 1/5] io_uring/net: avoid import_ubuf for regvec send Pavel Begunkov
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:17 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Patch 1 prevents checking registered buffers against access_ok().
Patches 4-5 simplify the use of req->buf_index, which will now
store only the selected buffer bid rather than bouncing back and
forth between bgid and bid.
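
As an illustration only (not code from the series), the lifecycle
being cleaned up looks roughly like this:

  /* before: req->buf_index changes meaning over the request lifetime */
  prep:    req->buf_index = bgid;       /* group id taken from the SQE */
  select:  req->buf_index = buf->bid;   /* overwritten by the buffer id */
  recycle: req->buf_index = bgid;       /* restored by hand */

  /* after patches 4-5: req->buf_index only ever holds the selected bid;
   * the group id lives in per-request storage such as sr->buf_group.
   */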

Pavel Begunkov (5):
  io_uring/net: avoid import_ubuf for regvec send
  io_uring/net: don't use io_do_buffer_select at prep
  io_uring: set IMPORT_BUFFER in generic send setup
  io_uring/kbuf: pass bgid to io_buffer_select()
  io_uring: don't store bgid in req->buf_index

 include/linux/io_uring_types.h |  3 +--
 io_uring/kbuf.c                | 15 ++++++--------
 io_uring/kbuf.h                |  3 ++-
 io_uring/net.c                 | 38 ++++++++++++++--------------------
 io_uring/rw.c                  |  5 ++++-
 io_uring/rw.h                  |  2 ++
 6 files changed, 31 insertions(+), 35 deletions(-)

-- 
2.48.1



* [PATCH 1/5] io_uring/net: avoid import_ubuf for regvec send
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
@ 2025-03-31 16:17 ` Pavel Begunkov
  2025-03-31 16:17 ` [PATCH 2/5] io_uring/net: don't use io_do_buffer_select at prep Pavel Begunkov
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:17 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

With registered buffers we set up iterators in helpers like
io_import_fixed(), so there is no need for an import_ubuf() before
that. It was fine while we used real user pointers for the offset
calculation, but that's no longer the case since the introduction
of ublk kernel buffers.
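
For reference, import_ubuf() in lib/iov_iter.c is roughly the
following; its access_ok() check is what an address backed by a
kernel buffer cannot be expected to pass:

  int import_ubuf(int rw, void __user *buf, size_t len, struct iov_iter *i)
  {
  	if (len > MAX_RW_COUNT)
  		len = MAX_RW_COUNT;
  	if (unlikely(!access_ok(buf, len)))
  		return -EFAULT;		/* kernel buffer addresses fail here */
  	iov_iter_ubuf(i, rw, buf, len);
  	return 0;
  }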

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/net.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/io_uring/net.c b/io_uring/net.c
index f8dfa6166e3c..3b50151577be 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -359,6 +359,8 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kmsg->msg.msg_name = &kmsg->addr;
 		kmsg->msg.msg_namelen = addr_len;
 	}
+	if (sr->flags & IORING_RECVSEND_FIXED_BUF)
+		return 0;
 	if (!io_do_buffer_select(req)) {
 		ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
 				  &kmsg->msg.msg_iter);
-- 
2.48.1



* [PATCH 2/5] io_uring/net: don't use io_do_buffer_select at prep
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
  2025-03-31 16:17 ` [PATCH 1/5] io_uring/net: avoid import_ubuf for regvec send Pavel Begunkov
@ 2025-03-31 16:17 ` Pavel Begunkov
  2025-03-31 16:18 ` [PATCH 3/5] io_uring: set IMPORT_BUFFER in generic send setup Pavel Begunkov
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:17 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Prep code is interested in whether it's a buffer selection request,
not in whether a buffer has already been selected, which is what
io_do_buffer_select() reports. Check for REQ_F_BUFFER_SELECT
directly.
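
For reference, io_do_buffer_select() is roughly the following
helper; it stops returning true once a buffer has been picked,
while prep only cares about the first condition:

  static inline bool io_do_buffer_select(struct io_kiocb *req)
  {
  	if (!(req->flags & REQ_F_BUFFER_SELECT))
  		return false;
  	return !(req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_BUFFER_RING));
  }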

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/net.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 3b50151577be..f0809102cdf4 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -361,13 +361,9 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 	if (sr->flags & IORING_RECVSEND_FIXED_BUF)
 		return 0;
-	if (!io_do_buffer_select(req)) {
-		ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
-				  &kmsg->msg.msg_iter);
-		if (unlikely(ret < 0))
-			return ret;
-	}
-	return 0;
+	if (req->flags & REQ_F_BUFFER_SELECT)
+		return 0;
+	return import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);
 }
 
 static int io_sendmsg_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
@@ -724,7 +720,6 @@ static int io_recvmsg_prep_setup(struct io_kiocb *req)
 {
 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
 	struct io_async_msghdr *kmsg;
-	int ret;
 
 	kmsg = io_msg_alloc_async(req);
 	if (unlikely(!kmsg))
@@ -740,13 +735,10 @@ static int io_recvmsg_prep_setup(struct io_kiocb *req)
 		kmsg->msg.msg_iocb = NULL;
 		kmsg->msg.msg_ubuf = NULL;
 
-		if (!io_do_buffer_select(req)) {
-			ret = import_ubuf(ITER_DEST, sr->buf, sr->len,
-					  &kmsg->msg.msg_iter);
-			if (unlikely(ret))
-				return ret;
-		}
-		return 0;
+		if (req->flags & REQ_F_BUFFER_SELECT)
+			return 0;
+		return import_ubuf(ITER_DEST, sr->buf, sr->len,
+				   &kmsg->msg.msg_iter);
 	}
 
 	return io_recvmsg_copy_hdr(req, kmsg);
-- 
2.48.1



* [PATCH 3/5] io_uring: set IMPORT_BUFFER in generic send setup
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
  2025-03-31 16:17 ` [PATCH 1/5] io_uring/net: avoid import_ubuf for regvec send Pavel Begunkov
  2025-03-31 16:17 ` [PATCH 2/5] io_uring/net: don't use io_do_buffer_select at prep Pavel Begunkov
@ 2025-03-31 16:18 ` Pavel Begunkov
  2025-03-31 16:18 ` [PATCH 4/5] io_uring/kbuf: pass bgid to io_buffer_select() Pavel Begunkov
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:18 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Move setting REQ_F_IMPORT_BUFFER into the common send setup.
Currently, the only user is send zc, but we'll want normal sends to
support it in the future as well.
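
With this, the tail of io_send_setup() covers all three buffer
modes in one place (sketch of the resulting logic):

  if (sr->flags & IORING_RECVSEND_FIXED_BUF) {
  	req->flags |= REQ_F_IMPORT_BUFFER;	/* registered buffer */
  	return 0;
  }
  if (req->flags & REQ_F_BUFFER_SELECT)		/* provided buffer */
  	return 0;
  return import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);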

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index f0809102cdf4..bddf41cdd2b3 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -359,8 +359,10 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kmsg->msg.msg_name = &kmsg->addr;
 		kmsg->msg.msg_namelen = addr_len;
 	}
-	if (sr->flags & IORING_RECVSEND_FIXED_BUF)
+	if (sr->flags & IORING_RECVSEND_FIXED_BUF) {
+		req->flags |= REQ_F_IMPORT_BUFFER;
 		return 0;
+	}
 	if (req->flags & REQ_F_BUFFER_SELECT)
 		return 0;
 	return import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);
@@ -1314,8 +1316,6 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -ENOMEM;
 
 	if (req->opcode == IORING_OP_SEND_ZC) {
-		if (zc->flags & IORING_RECVSEND_FIXED_BUF)
-			req->flags |= REQ_F_IMPORT_BUFFER;
 		ret = io_send_setup(req, sqe);
 	} else {
 		if (unlikely(sqe->addr2 || sqe->file_index))
-- 
2.48.1



* [PATCH 4/5] io_uring/kbuf: pass bgid to io_buffer_select()
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
                   ` (2 preceding siblings ...)
  2025-03-31 16:18 ` [PATCH 3/5] io_uring: set IMPORT_BUFFER in generic send setup Pavel Begunkov
@ 2025-03-31 16:18 ` Pavel Begunkov
  2025-03-31 16:18 ` [PATCH 5/5] io_uring: don't store bgid in req->buf_index Pavel Begunkov
  2025-03-31 19:06 ` [PATCH 0/5] various net improvements Jens Axboe
  5 siblings, 0 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:18 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

The current situation with buffer group id juggling is not ideal.
req->buf_index first stores the bgid, then it's overwritten by a
buffer id, and then it can get restored back on recycling, etc.
It's not easy to control, and it's not handled consistently across
request types, with receive requests saving and restoring the bgid
by hand.

This is a prep patch that adds a buffer group id argument to
io_buffer_select(). The caller is now responsible for stashing a
copy somewhere and passing it into the function.
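
The caller-side pattern then becomes (sketch assembled from the
hunks below):

  /* prep: stash the group id read from the SQE */
  sr->buf_group = req->buf_index;

  /* issue: pass it back in explicitly */
  buf = io_buffer_select(req, &len, sr->buf_group, issue_flags);
  if (!buf)
  	return -ENOBUFS;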

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/kbuf.c | 4 ++--
 io_uring/kbuf.h | 2 +-
 io_uring/net.c  | 9 ++++-----
 io_uring/rw.c   | 5 ++++-
 io_uring/rw.h   | 2 ++
 5 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 3478be6d02ab..eb9a48b936bd 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -186,7 +186,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
 }
 
 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
-			      unsigned int issue_flags)
+			      unsigned buf_group, unsigned int issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_buffer_list *bl;
@@ -194,7 +194,7 @@ void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
 
 	io_ring_submit_lock(req->ctx, issue_flags);
 
-	bl = io_buffer_get_list(ctx, req->buf_index);
+	bl = io_buffer_get_list(ctx, buf_group);
 	if (likely(bl)) {
 		if (bl->flags & IOBL_BUF_RING)
 			ret = io_ring_buffer_select(req, len, bl, issue_flags);
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 2ec0b983ce24..09129115f3ef 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -58,7 +58,7 @@ struct buf_sel_arg {
 };
 
 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
-			      unsigned int issue_flags);
+			      unsigned buf_group, unsigned int issue_flags);
 int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg,
 		      unsigned int issue_flags);
 int io_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg);
diff --git a/io_uring/net.c b/io_uring/net.c
index bddf41cdd2b3..6b7d3b64a441 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -407,13 +407,12 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
 	if (sr->msg_flags & MSG_DONTWAIT)
 		req->flags |= REQ_F_NOWAIT;
+	if (req->flags & REQ_F_BUFFER_SELECT)
+		sr->buf_group = req->buf_index;
 	if (sr->flags & IORING_RECVSEND_BUNDLE) {
 		if (req->opcode == IORING_OP_SENDMSG)
 			return -EINVAL;
-		if (!(req->flags & REQ_F_BUFFER_SELECT))
-			return -EINVAL;
 		sr->msg_flags |= MSG_WAITALL;
-		sr->buf_group = req->buf_index;
 		req->buf_list = NULL;
 		req->flags |= REQ_F_MULTISHOT;
 	}
@@ -980,7 +979,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
 		void __user *buf;
 		size_t len = sr->len;
 
-		buf = io_buffer_select(req, &len, issue_flags);
+		buf = io_buffer_select(req, &len, sr->buf_group, issue_flags);
 		if (!buf)
 			return -ENOBUFS;
 
@@ -1090,7 +1089,7 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
 		void __user *buf;
 
 		*len = sr->len;
-		buf = io_buffer_select(req, len, issue_flags);
+		buf = io_buffer_select(req, len, sr->buf_group, issue_flags);
 		if (!buf)
 			return -ENOBUFS;
 		sr->buf = buf;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 246b22225919..bdf7df19fab2 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -119,7 +119,7 @@ static int __io_import_rw_buffer(int ddir, struct io_kiocb *req,
 		return io_import_vec(ddir, req, io, buf, sqe_len);
 
 	if (io_do_buffer_select(req)) {
-		buf = io_buffer_select(req, &sqe_len, issue_flags);
+		buf = io_buffer_select(req, &sqe_len, io->buf_group, issue_flags);
 		if (!buf)
 			return -ENOBUFS;
 		rw->addr = (unsigned long) buf;
@@ -253,16 +253,19 @@ static int __io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 			int ddir)
 {
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
+	struct io_async_rw *io;
 	unsigned ioprio;
 	u64 attr_type_mask;
 	int ret;
 
 	if (io_rw_alloc_async(req))
 		return -ENOMEM;
+	io = req->async_data;
 
 	rw->kiocb.ki_pos = READ_ONCE(sqe->off);
 	/* used for fixed read/write too - just read unconditionally */
 	req->buf_index = READ_ONCE(sqe->buf_index);
+	io->buf_group = req->buf_index;
 
 	ioprio = READ_ONCE(sqe->ioprio);
 	if (ioprio) {
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 81d6d9a8cf69..129a53fe5482 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -16,6 +16,8 @@ struct io_async_rw {
 		struct iov_iter			iter;
 		struct iov_iter_state		iter_state;
 		struct iovec			fast_iov;
+		unsigned			buf_group;
+
 		/*
 		 * wpq is for buffered io, while meta fields are used with
 		 * direct io
-- 
2.48.1



* [PATCH 5/5] io_uring: don't store bgid in req->buf_index
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
                   ` (3 preceding siblings ...)
  2025-03-31 16:18 ` [PATCH 4/5] io_uring/kbuf: pass bgid to io_buffer_select() Pavel Begunkov
@ 2025-03-31 16:18 ` Pavel Begunkov
  2025-03-31 19:06 ` [PATCH 0/5] various net improvements Jens Axboe
  5 siblings, 0 replies; 8+ messages in thread
From: Pavel Begunkov @ 2025-03-31 16:18 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Pass the buffer group id into the rest of the helpers via struct
buf_sel_arg and remove all reassignments of req->buf_index back to
the bgid. Now it only stores buffer indexes, and the group id is
provided by the callers.
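
For the bundle/multishot paths the group id now travels in struct
buf_sel_arg, e.g. (sketch based on the hunks below):

  struct buf_sel_arg arg = {
  	.iovs		= &kmsg->fast_iov,
  	.nr_iovs	= 1,
  	.buf_group	= sr->buf_group,	/* provided by the caller */
  };

  ret = io_buffers_select(req, &arg, issue_flags);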

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 include/linux/io_uring_types.h |  3 +--
 io_uring/kbuf.c                | 11 ++++-------
 io_uring/kbuf.h                |  1 +
 io_uring/net.c                 |  3 ++-
 4 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index b44d201520d8..3b467879bca8 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -653,8 +653,7 @@ struct io_kiocb {
 	u8				iopoll_completed;
 	/*
 	 * Can be either a fixed buffer index, or used with provided buffers.
-	 * For the latter, before issue it points to the buffer group ID,
-	 * and after selection it points to the buffer ID itself.
+	 * For the latter, it points to the selected buffer ID.
 	 */
 	u16				buf_index;
 
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index eb9a48b936bd..8f8ec7cc7814 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -85,7 +85,6 @@ void io_kbuf_drop_legacy(struct io_kiocb *req)
 {
 	if (WARN_ON_ONCE(!(req->flags & REQ_F_BUFFER_SELECTED)))
 		return;
-	req->buf_index = req->kbuf->bgid;
 	req->flags &= ~REQ_F_BUFFER_SELECTED;
 	kfree(req->kbuf);
 	req->kbuf = NULL;
@@ -103,7 +102,6 @@ bool io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags)
 	bl = io_buffer_get_list(ctx, buf->bgid);
 	list_add(&buf->list, &bl->buf_list);
 	req->flags &= ~REQ_F_BUFFER_SELECTED;
-	req->buf_index = buf->bgid;
 
 	io_ring_submit_unlock(ctx, issue_flags);
 	return true;
@@ -306,7 +304,7 @@ int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg,
 	int ret = -ENOENT;
 
 	io_ring_submit_lock(ctx, issue_flags);
-	bl = io_buffer_get_list(ctx, req->buf_index);
+	bl = io_buffer_get_list(ctx, arg->buf_group);
 	if (unlikely(!bl))
 		goto out_unlock;
 
@@ -339,7 +337,7 @@ int io_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg)
 
 	lockdep_assert_held(&ctx->uring_lock);
 
-	bl = io_buffer_get_list(ctx, req->buf_index);
+	bl = io_buffer_get_list(ctx, arg->buf_group);
 	if (unlikely(!bl))
 		return -ENOENT;
 
@@ -359,10 +357,9 @@ static inline bool __io_put_kbuf_ring(struct io_kiocb *req, int len, int nr)
 	struct io_buffer_list *bl = req->buf_list;
 	bool ret = true;
 
-	if (bl) {
+	if (bl)
 		ret = io_kbuf_commit(req, bl, len, nr);
-		req->buf_index = bl->bgid;
-	}
+
 	req->flags &= ~REQ_F_BUFFER_RING;
 	return ret;
 }
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 09129115f3ef..c576a15fbfd4 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -55,6 +55,7 @@ struct buf_sel_arg {
 	size_t max_len;
 	unsigned short nr_iovs;
 	unsigned short mode;
+	unsigned buf_group;
 };
 
 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
diff --git a/io_uring/net.c b/io_uring/net.c
index 6b7d3b64a441..7852f0d8e2b6 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -190,7 +190,6 @@ static inline void io_mshot_prep_retry(struct io_kiocb *req,
 	sr->done_io = 0;
 	sr->retry = false;
 	sr->len = 0; /* get from the provided buffer */
-	req->buf_index = sr->buf_group;
 }
 
 static int io_net_import_vec(struct io_kiocb *req, struct io_async_msghdr *iomsg,
@@ -569,6 +568,7 @@ static int io_send_select_buffer(struct io_kiocb *req, unsigned int issue_flags,
 		.iovs = &kmsg->fast_iov,
 		.max_len = min_not_zero(sr->len, INT_MAX),
 		.nr_iovs = 1,
+		.buf_group = sr->buf_group,
 	};
 
 	if (kmsg->vec.iovec) {
@@ -1057,6 +1057,7 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
 			.iovs = &kmsg->fast_iov,
 			.nr_iovs = 1,
 			.mode = KBUF_MODE_EXPAND,
+			.buf_group = sr->buf_group,
 		};
 
 		if (kmsg->vec.iovec) {
-- 
2.48.1



* Re: [PATCH 0/5] various net improvements
  2025-03-31 16:17 [PATCH 0/5] various net improvements Pavel Begunkov
                   ` (4 preceding siblings ...)
  2025-03-31 16:18 ` [PATCH 5/5] io_uring: don't store bgid in req->buf_index Pavel Begunkov
@ 2025-03-31 19:06 ` Jens Axboe
  2025-03-31 19:07   ` Jens Axboe
  5 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2025-03-31 19:06 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov


On Mon, 31 Mar 2025 17:17:57 +0100, Pavel Begunkov wrote:
> Patch 1 prevents checking registered buffers against access_ok().
> Patches 4-5 simplify the use of req->buf_index, which will now
> store only the selected buffer bid rather than bouncing back and
> forth between bgid and bid.
> 
> Pavel Begunkov (5):
>   io_uring/net: avoid import_ubuf for regvec send
>   io_uring/net: don't use io_do_buffer_select at prep
>   io_uring: set IMPORT_BUFFER in generic send setup
>   io_uring/kbuf: pass bgid to io_buffer_select()
>   io_uring: don't store bgid in req->buf_index
> 
> [...]

Applied, thanks!

[1/5] io_uring/net: avoid import_ubuf for regvec send
      commit: 81ed18015d65f111ddbc88599c48338a5e1927d0
[2/5] io_uring/net: don't use io_do_buffer_select at prep
      commit: 98920400c6417e7adfb4843d5799aa1262f81471
[3/5] io_uring: set IMPORT_BUFFER in generic send setup
      commit: 1e90d2ed901868924b04a1bf2621878ad8cbe172
[4/5] io_uring/kbuf: pass bgid to io_buffer_select()
      commit: bd0bb84751f2d4b119a689e5b46c733d9c72aa75
[5/5] io_uring: don't store bgid in req->buf_index
      commit: 0576f51ba44c65b072b6c216d250864beea2eb9b

Best regards,
-- 
Jens Axboe





* Re: [PATCH 0/5] various net improvements
  2025-03-31 19:06 ` [PATCH 0/5] various net improvements Jens Axboe
@ 2025-03-31 19:07   ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2025-03-31 19:07 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov

On 3/31/25 1:06 PM, Jens Axboe wrote:
> 
> On Mon, 31 Mar 2025 17:17:57 +0100, Pavel Begunkov wrote:
>> Patch 1 prevents checking registered buffers against access_ok().
>> Patches 4-5 simplify the use of req->buf_index, which will now
>> store only the selected buffer bid rather than bouncing back and
>> forth between bgid and bid.
>>
>> Pavel Begunkov (5):
>>   io_uring/net: avoid import_ubuf for regvec send
>>   io_uring/net: don't use io_do_buffer_select at prep
>>   io_uring: set IMPORT_BUFFER in generic send setup
>>   io_uring/kbuf: pass bgid to io_buffer_select()
>>   io_uring: don't store bgid in req->buf_index
>>
>> [...]
> 
> Applied, thanks!
> 
> [1/5] io_uring/net: avoid import_ubuf for regvec send
>       commit: 81ed18015d65f111ddbc88599c48338a5e1927d0
> [2/5] io_uring/net: don't use io_do_buffer_select at prep
>       commit: 98920400c6417e7adfb4843d5799aa1262f81471
> [3/5] io_uring: set IMPORT_BUFFER in generic send setup
>       commit: 1e90d2ed901868924b04a1bf2621878ad8cbe172
> [4/5] io_uring/kbuf: pass bgid to io_buffer_select()
>       commit: bd0bb84751f2d4b119a689e5b46c733d9c72aa75
> [5/5] io_uring: don't store bgid in req->buf_index
>       commit: 0576f51ba44c65b072b6c216d250864beea2eb9b

Since the tool doesn't distinguish - queued 1/5 for 6.15, and the
rest for 6.16.

-- 
Jens Axboe

