public inbox for [email protected]
* [PATCH for-next 0/8] read/write cleanup
@ 2021-10-14 15:10 Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 1/8] io_uring: consistent typing for issue_flags Pavel Begunkov
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Gave a very slight boost (nullb IO) in my testing, 2.89 vs 2.92 MIOPS,
but the main motivation is that I like the code better.

Pavel Begunkov (8):
  io_uring: consistent typing for issue_flags
  io_uring: prioritise read success path over fails
  io_uring: optimise rw completion handlers
  io_uring: encapsulate rw state
  io_uring: optimise read/write iov state storing
  io_uring: optimise io_import_iovec nonblock passing
  io_uring: clean up io_import_iovec
  io_uring: rearrange io_read()/write()

 fs/io_uring.c | 233 ++++++++++++++++++++++++++------------------------
 1 file changed, 122 insertions(+), 111 deletions(-)

-- 
2.33.0



* [PATCH 1/8] io_uring: consistent typing for issue_flags
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 2/8] io_uring: prioritise read success path over fails Pavel Begunkov
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Some of the functions keep issue_flags as int; change those to unsigned int.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 55435464cee0..561610e30085 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3748,7 +3748,7 @@ static int io_mkdirat_prep(struct io_kiocb *req,
 	return 0;
 }
 
-static int io_mkdirat(struct io_kiocb *req, int issue_flags)
+static int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_mkdir *mkd = &req->mkdir;
 	int ret;
@@ -3797,7 +3797,7 @@ static int io_symlinkat_prep(struct io_kiocb *req,
 	return 0;
 }
 
-static int io_symlinkat(struct io_kiocb *req, int issue_flags)
+static int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_symlink *sl = &req->symlink;
 	int ret;
@@ -3847,7 +3847,7 @@ static int io_linkat_prep(struct io_kiocb *req,
 	return 0;
 }
 
-static int io_linkat(struct io_kiocb *req, int issue_flags)
+static int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_hardlink *lnk = &req->hardlink;
 	int ret;
-- 
2.33.0



* [PATCH 2/8] io_uring: prioritise read success path over fails
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 1/8] io_uring: consistent typing for issue_flags Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 3/8] io_uring: optimise rw completion handlers Pavel Begunkov
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Rearrange io_read() return handling so that we first expect it to complete
successfully, and only then check for errors, which is a colder path.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 561610e30085..6e1d70411589 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3449,7 +3449,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		ret = 0;
 	} else if (ret == -EIOCBQUEUED) {
 		goto out_free;
-	} else if (ret <= 0 || ret == req->result || !force_nonblock ||
+	} else if (ret == req->result || ret <= 0 || !force_nonblock ||
 		   (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
 		/* read all, failed, already did sync or don't want to retry */
 		goto done;
-- 
2.33.0



* [PATCH 3/8] io_uring: optimise rw completion handlers
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 1/8] io_uring: consistent typing for issue_flags Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 2/8] io_uring: prioritise read success path over fails Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 4/8] io_uring: encapsulate rw state Pavel Begunkov
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Don't override req->result in io_complete_rw_iopoll() when it already has
the same value; we have an if just above it, so move the assignment
there. Also, add one simple unlikely() in __io_complete_rw_common().

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6e1d70411589..2f193893cf1b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2594,7 +2594,7 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
 {
 	if (req->rw.kiocb.ki_flags & IOCB_WRITE)
 		kiocb_end_write(req);
-	if (res != req->result) {
+	if (unlikely(res != req->result)) {
 		if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
 		    io_rw_should_reissue(req)) {
 			req->flags |= REQ_F_REISSUE;
@@ -2649,9 +2649,9 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
 			req->flags |= REQ_F_REISSUE;
 			return;
 		}
+		req->result = res;
 	}
 
-	req->result = res;
 	/* order with io_iopoll_complete() checking ->iopoll_completed */
 	smp_store_release(&req->iopoll_completed, 1);
 }
-- 
2.33.0



* [PATCH 4/8] io_uring: encapsulate rw state
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (2 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 3/8] io_uring: optimise rw completion handlers Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-18  6:06   ` Hao Xu
  2021-10-14 15:10 ` [PATCH 5/8] io_uring: optimise read/write iov state storing Pavel Begunkov
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Add a new struct io_rw_state storing all iov-related bits: fast iov,
iterator and iterator state. Not much changes here; simply convert
struct io_async_rw to use it.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2f193893cf1b..3447243805d9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -693,11 +693,15 @@ struct io_async_msghdr {
 	struct sockaddr_storage		addr;
 };
 
-struct io_async_rw {
+struct io_rw_state {
 	struct iovec			fast_iov[UIO_FASTIOV];
-	const struct iovec		*free_iovec;
 	struct iov_iter			iter;
 	struct iov_iter_state		iter_state;
+};
+
+struct io_async_rw {
+	struct io_rw_state		s;
+	const struct iovec		*free_iovec;
 	size_t				bytes_done;
 	struct wait_page_queue		wpq;
 };
@@ -2550,7 +2554,7 @@ static bool io_resubmit_prep(struct io_kiocb *req)
 
 	if (!req_has_async_data(req))
 		return !io_req_prep_async(req);
-	iov_iter_restore(&rw->iter, &rw->iter_state);
+	iov_iter_restore(&rw->s.iter, &rw->s.iter_state);
 	return true;
 }
 
@@ -3221,7 +3225,7 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
 {
 	struct io_async_rw *rw = req->async_data;
 
-	memcpy(&rw->iter, iter, sizeof(*iter));
+	memcpy(&rw->s.iter, iter, sizeof(*iter));
 	rw->free_iovec = iovec;
 	rw->bytes_done = 0;
 	/* can only be fixed buffers, no need to do anything */
@@ -3230,13 +3234,13 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
 	if (!iovec) {
 		unsigned iov_off = 0;
 
-		rw->iter.iov = rw->fast_iov;
+		rw->s.iter.iov = rw->s.fast_iov;
 		if (iter->iov != fast_iov) {
 			iov_off = iter->iov - fast_iov;
-			rw->iter.iov += iov_off;
+			rw->s.iter.iov += iov_off;
 		}
-		if (rw->fast_iov != fast_iov)
-			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
+		if (rw->s.fast_iov != fast_iov)
+			memcpy(rw->s.fast_iov + iov_off, fast_iov + iov_off,
 			       sizeof(struct iovec) * iter->nr_segs);
 	} else {
 		req->flags |= REQ_F_NEED_CLEANUP;
@@ -3271,7 +3275,7 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
 		io_req_map_rw(req, iovec, fast_iov, iter);
 		iorw = req->async_data;
 		/* we've copied and mapped the iter, ensure state is saved */
-		iov_iter_save_state(&iorw->iter, &iorw->iter_state);
+		iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
 	}
 	return 0;
 }
@@ -3279,10 +3283,10 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
 static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 {
 	struct io_async_rw *iorw = req->async_data;
-	struct iovec *iov = iorw->fast_iov;
+	struct iovec *iov = iorw->s.fast_iov;
 	int ret;
 
-	ret = io_import_iovec(rw, req, &iov, &iorw->iter, false);
+	ret = io_import_iovec(rw, req, &iov, &iorw->s.iter, false);
 	if (unlikely(ret < 0))
 		return ret;
 
@@ -3290,7 +3294,7 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 	iorw->free_iovec = iov;
 	if (iov)
 		req->flags |= REQ_F_NEED_CLEANUP;
-	iov_iter_save_state(&iorw->iter, &iorw->iter_state);
+	iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
 	return 0;
 }
 
@@ -3400,8 +3404,8 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (req_has_async_data(req)) {
 		rw = req->async_data;
-		iter = &rw->iter;
-		state = &rw->iter_state;
+		iter = &rw->s.iter;
+		state = &rw->s.iter_state;
 		/*
 		 * We come here from an earlier attempt, restore our state to
 		 * match in case it doesn't. It's cheap enough that we don't
@@ -3472,9 +3476,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	 * Now use our persistent iterator and state, if we aren't already.
 	 * We've restored and mapped the iter to match.
 	 */
-	if (iter != &rw->iter) {
-		iter = &rw->iter;
-		state = &rw->iter_state;
+	if (iter != &rw->s.iter) {
+		iter = &rw->s.iter;
+		state = &rw->s.iter_state;
 	}
 
 	do {
@@ -3536,8 +3540,8 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (req_has_async_data(req)) {
 		rw = req->async_data;
-		iter = &rw->iter;
-		state = &rw->iter_state;
+		iter = &rw->s.iter;
+		state = &rw->s.iter_state;
 		iov_iter_restore(iter, state);
 		iovec = NULL;
 	} else {
-- 
2.33.0



* [PATCH 5/8] io_uring: optimise read/write iov state storing
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (3 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 4/8] io_uring: encapsulate rw state Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 6/8] io_uring: optimise io_import_iovec nonblock passing Pavel Begunkov
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Currently io_read() and io_write() keep separate pointers to an iter and
to struct iov_iter_state, which is not great for register spilling and
requires more on-stack copies. They are both either on-stack or in
req->async_data at the same time, so use struct io_rw_state and keep a
pointer only to it, making all the state reachable through a single
pointer.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 79 ++++++++++++++++++++++++---------------------------
 1 file changed, 37 insertions(+), 42 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3447243805d9..248ef7b09268 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -694,9 +694,9 @@ struct io_async_msghdr {
 };
 
 struct io_rw_state {
-	struct iovec			fast_iov[UIO_FASTIOV];
 	struct iov_iter			iter;
 	struct iov_iter_state		iter_state;
+	struct iovec			fast_iov[UIO_FASTIOV];
 };
 
 struct io_async_rw {
@@ -3259,8 +3259,7 @@ static inline bool io_alloc_async_data(struct io_kiocb *req)
 }
 
 static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
-			     const struct iovec *fast_iov,
-			     struct iov_iter *iter, bool force)
+			     struct io_rw_state *s, bool force)
 {
 	if (!force && !io_op_defs[req->opcode].needs_async_setup)
 		return 0;
@@ -3272,7 +3271,7 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
 			return -ENOMEM;
 		}
 
-		io_req_map_rw(req, iovec, fast_iov, iter);
+		io_req_map_rw(req, iovec, s->fast_iov, &s->iter);
 		iorw = req->async_data;
 		/* we've copied and mapped the iter, ensure state is saved */
 		iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
@@ -3394,33 +3393,33 @@ static bool need_read_all(struct io_kiocb *req)
 
 static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 {
-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct io_rw_state __s, *s;
+	struct iovec *iovec;
 	struct kiocb *kiocb = &req->rw.kiocb;
-	struct iov_iter __iter, *iter = &__iter;
 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
-	struct iov_iter_state __state, *state;
 	struct io_async_rw *rw;
 	ssize_t ret, ret2;
 
 	if (req_has_async_data(req)) {
 		rw = req->async_data;
-		iter = &rw->s.iter;
-		state = &rw->s.iter_state;
+		s = &rw->s;
 		/*
 		 * We come here from an earlier attempt, restore our state to
 		 * match in case it doesn't. It's cheap enough that we don't
 		 * need to make this conditional.
 		 */
-		iov_iter_restore(iter, state);
+		iov_iter_restore(&s->iter, &s->iter_state);
 		iovec = NULL;
 	} else {
-		ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
+		s = &__s;
+		iovec = s->fast_iov;
+		ret = io_import_iovec(READ, req, &iovec, &s->iter, !force_nonblock);
 		if (ret < 0)
 			return ret;
-		state = &__state;
-		iov_iter_save_state(iter, state);
+
+		iov_iter_save_state(&s->iter, &s->iter_state);
 	}
-	req->result = iov_iter_count(iter);
+	req->result = iov_iter_count(&s->iter);
 
 	/* Ensure we clear previously set non-block flag */
 	if (!force_nonblock)
@@ -3430,7 +3429,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 
 	/* If the file doesn't support async, just async punt */
 	if (force_nonblock && !io_file_supports_nowait(req, READ)) {
-		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
+		ret = io_setup_async_rw(req, iovec, s, true);
 		return ret ?: -EAGAIN;
 	}
 
@@ -3440,7 +3439,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		return ret;
 	}
 
-	ret = io_iter_do_read(req, iter);
+	ret = io_iter_do_read(req, &s->iter);
 
 	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
 		req->flags &= ~REQ_F_REISSUE;
@@ -3464,22 +3463,19 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	 * untouched in case of error. Restore it and we'll advance it
 	 * manually if we need to.
 	 */
-	iov_iter_restore(iter, state);
+	iov_iter_restore(&s->iter, &s->iter_state);
 
-	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
+	ret2 = io_setup_async_rw(req, iovec, s, true);
 	if (ret2)
 		return ret2;
 
 	iovec = NULL;
 	rw = req->async_data;
+	s = &rw->s;
 	/*
 	 * Now use our persistent iterator and state, if we aren't already.
 	 * We've restored and mapped the iter to match.
 	 */
-	if (iter != &rw->s.iter) {
-		iter = &rw->s.iter;
-		state = &rw->s.iter_state;
-	}
 
 	do {
 		/*
@@ -3487,11 +3483,11 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		 * above or inside this loop. Advance the iter by the bytes
 		 * that were consumed.
 		 */
-		iov_iter_advance(iter, ret);
-		if (!iov_iter_count(iter))
+		iov_iter_advance(&s->iter, ret);
+		if (!iov_iter_count(&s->iter))
 			break;
 		rw->bytes_done += ret;
-		iov_iter_save_state(iter, state);
+		iov_iter_save_state(&s->iter, &s->iter_state);
 
 		/* if we can retry, do so with the callbacks armed */
 		if (!io_rw_should_retry(req)) {
@@ -3505,12 +3501,12 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		 * desired page gets unlocked. We can also get a partial read
 		 * here, and if we do, then just retry at the new offset.
 		 */
-		ret = io_iter_do_read(req, iter);
+		ret = io_iter_do_read(req, &s->iter);
 		if (ret == -EIOCBQUEUED)
 			return 0;
 		/* we got some bytes, but not all. retry. */
 		kiocb->ki_flags &= ~IOCB_WAITQ;
-		iov_iter_restore(iter, state);
+		iov_iter_restore(&s->iter, &s->iter_state);
 	} while (ret > 0);
 done:
 	kiocb_done(kiocb, ret, issue_flags);
@@ -3530,28 +3526,27 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 {
-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct io_rw_state __s, *s;
+	struct io_async_rw *rw;
+	struct iovec *iovec;
 	struct kiocb *kiocb = &req->rw.kiocb;
-	struct iov_iter __iter, *iter = &__iter;
 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
-	struct iov_iter_state __state, *state;
-	struct io_async_rw *rw;
 	ssize_t ret, ret2;
 
 	if (req_has_async_data(req)) {
 		rw = req->async_data;
-		iter = &rw->s.iter;
-		state = &rw->s.iter_state;
-		iov_iter_restore(iter, state);
+		s = &rw->s;
+		iov_iter_restore(&s->iter, &s->iter_state);
 		iovec = NULL;
 	} else {
-		ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
+		s = &__s;
+		iovec = s->fast_iov;
+		ret = io_import_iovec(WRITE, req, &iovec, &s->iter, !force_nonblock);
 		if (ret < 0)
 			return ret;
-		state = &__state;
-		iov_iter_save_state(iter, state);
+		iov_iter_save_state(&s->iter, &s->iter_state);
 	}
-	req->result = iov_iter_count(iter);
+	req->result = iov_iter_count(&s->iter);
 
 	/* Ensure we clear previously set non-block flag */
 	if (!force_nonblock)
@@ -3587,9 +3582,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 	kiocb->ki_flags |= IOCB_WRITE;
 
 	if (req->file->f_op->write_iter)
-		ret2 = call_write_iter(req->file, kiocb, iter);
+		ret2 = call_write_iter(req->file, kiocb, &s->iter);
 	else if (req->file->f_op->write)
-		ret2 = loop_rw_iter(WRITE, req, iter);
+		ret2 = loop_rw_iter(WRITE, req, &s->iter);
 	else
 		ret2 = -EINVAL;
 
@@ -3615,8 +3610,8 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 		kiocb_done(kiocb, ret2, issue_flags);
 	} else {
 copy_iov:
-		iov_iter_restore(iter, state);
-		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
+		iov_iter_restore(&s->iter, &s->iter_state);
+		ret = io_setup_async_rw(req, iovec, s, false);
 		return ret ?: -EAGAIN;
 	}
 out_free:
-- 
2.33.0



* [PATCH 6/8] io_uring: optimise io_import_iovec nonblock passing
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (4 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 5/8] io_uring: optimise read/write iov state storing Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 7/8] io_uring: clean up io_import_iovec Pavel Begunkov
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

First, change IO_URING_F_NONBLOCK to take the sign bit of the int, so
checking for it can be turned into a test + sign-based jump, which makes
the binary smaller and may be faster.

Then, instead of passing a need_lock boolean into io_import_iovec(),
just give it issue_flags, which is already at hand. Saves some space on
the stack, a couple of test + cmov operations and other conversions.

Note: we still leave the
force_nonblock = issue_flags & IO_URING_F_NONBLOCK
variable, but it's optimised out by the compiler into testing
issue_flags directly.
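
An illustrative sketch (not part of the patch, and the helper name is
made up) of why the sign-bit placement helps: the nonblock check can be
written as a plain signedness test.

	/* same flag values as introduced in the hunk below */
	enum io_uring_cmd_flags {
		IO_URING_F_COMPLETE_DEFER	= 1,
		IO_URING_F_NONBLOCK		= INT_MIN,
	};

	/*
	 * Hypothetical helper, equivalent to issue_flags & IO_URING_F_NONBLOCK;
	 * with the flag in the sign bit the compiler may emit a sign test
	 * rather than a bit test.
	 */
	static inline bool io_issue_is_nonblock(unsigned int issue_flags)
	{
		return (int)issue_flags < 0;
	}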

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 47 +++++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 248ef7b09268..9a22a983fb53 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -198,8 +198,9 @@ struct io_rings {
 };
 
 enum io_uring_cmd_flags {
-	IO_URING_F_NONBLOCK		= 1,
-	IO_URING_F_COMPLETE_DEFER	= 2,
+	IO_URING_F_COMPLETE_DEFER	= 1,
+	/* int's last bit, sign checks are usually faster than a bit test */
+	IO_URING_F_NONBLOCK		= INT_MIN,
 };
 
 struct io_mapped_ubuf {
@@ -2999,10 +3000,11 @@ static void io_ring_submit_lock(struct io_ring_ctx *ctx, bool needs_lock)
 }
 
 static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
-					  int bgid, bool needs_lock)
+					  int bgid, unsigned int issue_flags)
 {
 	struct io_buffer *kbuf = req->kbuf;
 	struct io_buffer *head;
+	bool needs_lock = !(issue_flags & IO_URING_F_NONBLOCK);
 
 	if (req->flags & REQ_F_BUFFER_SELECTED)
 		return kbuf;
@@ -3034,13 +3036,13 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
 }
 
 static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
-					bool needs_lock)
+					unsigned int issue_flags)
 {
 	struct io_buffer *kbuf;
 	u16 bgid;
 
 	bgid = req->buf_index;
-	kbuf = io_buffer_select(req, len, bgid, needs_lock);
+	kbuf = io_buffer_select(req, len, bgid, issue_flags);
 	if (IS_ERR(kbuf))
 		return kbuf;
 	return u64_to_user_ptr(kbuf->addr);
@@ -3048,7 +3050,7 @@ static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
 
 #ifdef CONFIG_COMPAT
 static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
-				bool needs_lock)
+				unsigned int issue_flags)
 {
 	struct compat_iovec __user *uiov;
 	compat_ssize_t clen;
@@ -3064,7 +3066,7 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
 		return -EINVAL;
 
 	len = clen;
-	buf = io_rw_buffer_select(req, &len, needs_lock);
+	buf = io_rw_buffer_select(req, &len, issue_flags);
 	if (IS_ERR(buf))
 		return PTR_ERR(buf);
 	iov[0].iov_base = buf;
@@ -3074,7 +3076,7 @@ static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
 #endif
 
 static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
-				      bool needs_lock)
+				      unsigned int issue_flags)
 {
 	struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
 	void __user *buf;
@@ -3086,7 +3088,7 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
 	len = iov[0].iov_len;
 	if (len < 0)
 		return -EINVAL;
-	buf = io_rw_buffer_select(req, &len, needs_lock);
+	buf = io_rw_buffer_select(req, &len, issue_flags);
 	if (IS_ERR(buf))
 		return PTR_ERR(buf);
 	iov[0].iov_base = buf;
@@ -3095,7 +3097,7 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
 }
 
 static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
-				    bool needs_lock)
+				    unsigned int issue_flags)
 {
 	if (req->flags & REQ_F_BUFFER_SELECTED) {
 		struct io_buffer *kbuf = req->kbuf;
@@ -3109,14 +3111,14 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
 
 #ifdef CONFIG_COMPAT
 	if (req->ctx->compat)
-		return io_compat_import(req, iov, needs_lock);
+		return io_compat_import(req, iov, issue_flags);
 #endif
 
-	return __io_iov_buffer_select(req, iov, needs_lock);
+	return __io_iov_buffer_select(req, iov, issue_flags);
 }
 
 static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
-			   struct iov_iter *iter, bool needs_lock)
+			   struct iov_iter *iter, unsigned int issue_flags)
 {
 	void __user *buf = u64_to_user_ptr(req->rw.addr);
 	size_t sqe_len = req->rw.len;
@@ -3134,7 +3136,7 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
 
 	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
 		if (req->flags & REQ_F_BUFFER_SELECT) {
-			buf = io_rw_buffer_select(req, &sqe_len, needs_lock);
+			buf = io_rw_buffer_select(req, &sqe_len, issue_flags);
 			if (IS_ERR(buf))
 				return PTR_ERR(buf);
 			req->rw.len = sqe_len;
@@ -3146,7 +3148,7 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
 	}
 
 	if (req->flags & REQ_F_BUFFER_SELECT) {
-		ret = io_iov_buffer_select(req, *iovec, needs_lock);
+		ret = io_iov_buffer_select(req, *iovec, issue_flags);
 		if (!ret)
 			iov_iter_init(iter, rw, *iovec, 1, (*iovec)->iov_len);
 		*iovec = NULL;
@@ -3285,7 +3287,8 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 	struct iovec *iov = iorw->s.fast_iov;
 	int ret;
 
-	ret = io_import_iovec(rw, req, &iov, &iorw->s.iter, false);
+	/* submission path, ->uring_lock should already be taken */
+	ret = io_import_iovec(rw, req, &iov, &iorw->s.iter, IO_URING_F_NONBLOCK);
 	if (unlikely(ret < 0))
 		return ret;
 
@@ -3413,7 +3416,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	} else {
 		s = &__s;
 		iovec = s->fast_iov;
-		ret = io_import_iovec(READ, req, &iovec, &s->iter, !force_nonblock);
+		ret = io_import_iovec(READ, req, &iovec, &s->iter, issue_flags);
 		if (ret < 0)
 			return ret;
 
@@ -3541,7 +3544,7 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 	} else {
 		s = &__s;
 		iovec = s->fast_iov;
-		ret = io_import_iovec(WRITE, req, &iovec, &s->iter, !force_nonblock);
+		ret = io_import_iovec(WRITE, req, &iovec, &s->iter, issue_flags);
 		if (ret < 0)
 			return ret;
 		iov_iter_save_state(&s->iter, &s->iter_state);
@@ -4864,11 +4867,11 @@ static int io_recvmsg_copy_hdr(struct io_kiocb *req,
 }
 
 static struct io_buffer *io_recv_buffer_select(struct io_kiocb *req,
-					       bool needs_lock)
+					       unsigned int issue_flags)
 {
 	struct io_sr_msg *sr = &req->sr_msg;
 
-	return io_buffer_select(req, &sr->len, sr->bgid, needs_lock);
+	return io_buffer_select(req, &sr->len, sr->bgid, issue_flags);
 }
 
 static inline unsigned int io_put_recv_kbuf(struct io_kiocb *req)
@@ -4931,7 +4934,7 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
 	}
 
 	if (req->flags & REQ_F_BUFFER_SELECT) {
-		kbuf = io_recv_buffer_select(req, !force_nonblock);
+		kbuf = io_recv_buffer_select(req, issue_flags);
 		if (IS_ERR(kbuf))
 			return PTR_ERR(kbuf);
 		kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
@@ -4983,7 +4986,7 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 		return -ENOTSOCK;
 
 	if (req->flags & REQ_F_BUFFER_SELECT) {
-		kbuf = io_recv_buffer_select(req, !force_nonblock);
+		kbuf = io_recv_buffer_select(req, issue_flags);
 		if (IS_ERR(kbuf))
 			return PTR_ERR(kbuf);
 		buf = u64_to_user_ptr(kbuf->addr);
-- 
2.33.0



* [PATCH 7/8] io_uring: clean up io_import_iovec
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (5 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 6/8] io_uring: optimise io_import_iovec nonblock passing Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-14 15:10 ` [PATCH 8/8] io_uring: rearrange io_read()/write() Pavel Begunkov
  2021-10-14 18:17 ` [PATCH for-next 0/8] read/write cleanup Jens Axboe
  8 siblings, 0 replies; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Make io_import_iovec() take struct io_rw_state instead of an iter
pointer. First, it takes care of initialising the iovec pointer, which
can otherwise be forgotten. Even better, we can skip initialising it when
it's not needed, e.g. in case of IORING_OP_READ_FIXED or IORING_OP_READ.
Also, hide saving the iter_state inside of it by splitting out an inline
wrapper, avoiding extra ifs.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9a22a983fb53..f9af54b10238 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3117,9 +3117,10 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
 	return __io_iov_buffer_select(req, iov, issue_flags);
 }
 
-static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
-			   struct iov_iter *iter, unsigned int issue_flags)
+static int __io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
+			     struct io_rw_state *s, unsigned int issue_flags)
 {
+	struct iov_iter *iter = &s->iter;
 	void __user *buf = u64_to_user_ptr(req->rw.addr);
 	size_t sqe_len = req->rw.len;
 	u8 opcode = req->opcode;
@@ -3142,11 +3143,13 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
 			req->rw.len = sqe_len;
 		}
 
-		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
+		ret = import_single_range(rw, buf, sqe_len, s->fast_iov, iter);
 		*iovec = NULL;
 		return ret;
 	}
 
+	*iovec = s->fast_iov;
+
 	if (req->flags & REQ_F_BUFFER_SELECT) {
 		ret = io_iov_buffer_select(req, *iovec, issue_flags);
 		if (!ret)
@@ -3159,6 +3162,19 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
 			      req->ctx->compat);
 }
 
+static inline int io_import_iovec(int rw, struct io_kiocb *req,
+				  struct iovec **iovec, struct io_rw_state *s,
+				  unsigned int issue_flags)
+{
+	int ret;
+
+	ret = __io_import_iovec(rw, req, iovec, s, issue_flags);
+	if (unlikely(ret < 0))
+		return ret;
+	iov_iter_save_state(&s->iter, &s->iter_state);
+	return ret;
+}
+
 static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
 {
 	return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
@@ -3284,11 +3300,11 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
 static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 {
 	struct io_async_rw *iorw = req->async_data;
-	struct iovec *iov = iorw->s.fast_iov;
+	struct iovec *iov;
 	int ret;
 
 	/* submission path, ->uring_lock should already be taken */
-	ret = io_import_iovec(rw, req, &iov, &iorw->s.iter, IO_URING_F_NONBLOCK);
+	ret = io_import_iovec(rw, req, &iov, &iorw->s, IO_URING_F_NONBLOCK);
 	if (unlikely(ret < 0))
 		return ret;
 
@@ -3296,7 +3312,6 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 	iorw->free_iovec = iov;
 	if (iov)
 		req->flags |= REQ_F_NEED_CLEANUP;
-	iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
 	return 0;
 }
 
@@ -3415,12 +3430,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		iovec = NULL;
 	} else {
 		s = &__s;
-		iovec = s->fast_iov;
-		ret = io_import_iovec(READ, req, &iovec, &s->iter, issue_flags);
-		if (ret < 0)
+		ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
+		if (unlikely(ret < 0))
 			return ret;
-
-		iov_iter_save_state(&s->iter, &s->iter_state);
 	}
 	req->result = iov_iter_count(&s->iter);
 
@@ -3543,11 +3555,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 		iovec = NULL;
 	} else {
 		s = &__s;
-		iovec = s->fast_iov;
-		ret = io_import_iovec(WRITE, req, &iovec, &s->iter, issue_flags);
-		if (ret < 0)
+		ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
+		if (unlikely(ret < 0))
 			return ret;
-		iov_iter_save_state(&s->iter, &s->iter_state);
 	}
 	req->result = iov_iter_count(&s->iter);
 
-- 
2.33.0



* [PATCH 8/8] io_uring: rearrange io_read()/write()
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (6 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 7/8] io_uring: clean up io_import_iovec Pavel Begunkov
@ 2021-10-14 15:10 ` Pavel Begunkov
  2021-10-16 22:52   ` Noah Goldstein
  2021-10-14 18:17 ` [PATCH for-next 0/8] read/write cleanup Jens Axboe
  8 siblings, 1 reply; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-14 15:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Combine force_nonblock branches (which are already optimised by the
compiler), flip branches so the most hot/common path comes first, e.g.
as with the non on-stack iov setup, and add extra likely/unlikely
annotations for error paths.

Signed-off-by: Pavel Begunkov <[email protected]>
---
 fs/io_uring.c | 75 +++++++++++++++++++++++++--------------------------
 1 file changed, 37 insertions(+), 38 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f9af54b10238..8bbbe7ccad54 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3395,7 +3395,7 @@ static bool io_rw_should_retry(struct io_kiocb *req)
 
 static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
 {
-	if (req->file->f_op->read_iter)
+	if (likely(req->file->f_op->read_iter))
 		return call_read_iter(req->file, &req->rw.kiocb, iter);
 	else if (req->file->f_op->read)
 		return loop_rw_iter(READ, req, iter);
@@ -3411,14 +3411,18 @@ static bool need_read_all(struct io_kiocb *req)
 
 static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 {
-	struct io_rw_state __s, *s;
+	struct io_rw_state __s, *s = &__s;
 	struct iovec *iovec;
 	struct kiocb *kiocb = &req->rw.kiocb;
 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
 	struct io_async_rw *rw;
 	ssize_t ret, ret2;
 
-	if (req_has_async_data(req)) {
+	if (!req_has_async_data(req)) {
+		ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
+		if (unlikely(ret < 0))
+			return ret;
+	} else {
 		rw = req->async_data;
 		s = &rw->s;
 		/*
@@ -3428,24 +3432,19 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		 */
 		iov_iter_restore(&s->iter, &s->iter_state);
 		iovec = NULL;
-	} else {
-		s = &__s;
-		ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
-		if (unlikely(ret < 0))
-			return ret;
 	}
 	req->result = iov_iter_count(&s->iter);
 
-	/* Ensure we clear previously set non-block flag */
-	if (!force_nonblock)
-		kiocb->ki_flags &= ~IOCB_NOWAIT;
-	else
+	if (force_nonblock) {
+		/* If the file doesn't support async, just async punt */
+		if (unlikely(!io_file_supports_nowait(req, READ))) {
+			ret = io_setup_async_rw(req, iovec, s, true);
+			return ret ?: -EAGAIN;
+		}
 		kiocb->ki_flags |= IOCB_NOWAIT;
-
-	/* If the file doesn't support async, just async punt */
-	if (force_nonblock && !io_file_supports_nowait(req, READ)) {
-		ret = io_setup_async_rw(req, iovec, s, true);
-		return ret ?: -EAGAIN;
+	} else {
+		/* Ensure we clear previously set non-block flag */
+		kiocb->ki_flags &= ~IOCB_NOWAIT;
 	}
 
 	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
@@ -3541,40 +3540,40 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 {
-	struct io_rw_state __s, *s;
-	struct io_async_rw *rw;
+	struct io_rw_state __s, *s = &__s;
 	struct iovec *iovec;
 	struct kiocb *kiocb = &req->rw.kiocb;
 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
 	ssize_t ret, ret2;
 
-	if (req_has_async_data(req)) {
-		rw = req->async_data;
-		s = &rw->s;
-		iov_iter_restore(&s->iter, &s->iter_state);
-		iovec = NULL;
-	} else {
-		s = &__s;
+	if (!req_has_async_data(req)) {
 		ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
 		if (unlikely(ret < 0))
 			return ret;
+	} else {
+		struct io_async_rw *rw = req->async_data;
+
+		s = &rw->s;
+		iov_iter_restore(&s->iter, &s->iter_state);
+		iovec = NULL;
 	}
 	req->result = iov_iter_count(&s->iter);
 
-	/* Ensure we clear previously set non-block flag */
-	if (!force_nonblock)
-		kiocb->ki_flags &= ~IOCB_NOWAIT;
-	else
-		kiocb->ki_flags |= IOCB_NOWAIT;
+	if (force_nonblock) {
+		/* If the file doesn't support async, just async punt */
+		if (unlikely(!io_file_supports_nowait(req, WRITE)))
+			goto copy_iov;
 
-	/* If the file doesn't support async, just async punt */
-	if (force_nonblock && !io_file_supports_nowait(req, WRITE))
-		goto copy_iov;
+		/* file path doesn't support NOWAIT for non-direct_IO */
+		if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
+		    (req->flags & REQ_F_ISREG))
+			goto copy_iov;
 
-	/* file path doesn't support NOWAIT for non-direct_IO */
-	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
-	    (req->flags & REQ_F_ISREG))
-		goto copy_iov;
+		kiocb->ki_flags |= IOCB_NOWAIT;
+	} else {
+		/* Ensure we clear previously set non-block flag */
+		kiocb->ki_flags &= ~IOCB_NOWAIT;
+	}
 
 	ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
 	if (unlikely(ret))
-- 
2.33.0



* Re: [PATCH for-next 0/8] read/write cleanup
  2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
                   ` (7 preceding siblings ...)
  2021-10-14 15:10 ` [PATCH 8/8] io_uring: rearrange io_read()/write() Pavel Begunkov
@ 2021-10-14 18:17 ` Jens Axboe
  8 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2021-10-14 18:17 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On Thu, 14 Oct 2021 16:10:11 +0100, Pavel Begunkov wrote:
> Gave a very slight boost (nullb IO) in my testing, 2.89 vs 2.92 MIOPS,
> but the main motivation is that I like the code better.
> 
> Pavel Begunkov (8):
>   io_uring: consistent typing for issue_flags
>   io_uring: prioritise read success path over fails
>   io_uring: optimise rw completion handlers
>   io_uring: encapsulate rw state
>   io_uring: optimise read/write iov state storing
>   io_uring: optimise io_import_iovec nonblock passing
>   io_uring: clean up io_import_iovec
>   io_uring: rearrange io_read()/write()
> 
> [...]

Applied, thanks!

[1/8] io_uring: consistent typing for issue_flags
      (no commit info)
[2/8] io_uring: prioritise read success path over fails
      (no commit info)
[3/8] io_uring: optimise rw completion handlers
      (no commit info)
[4/8] io_uring: encapsulate rw state
      (no commit info)
[5/8] io_uring: optimise read/write iov state storing
      (no commit info)
[6/8] io_uring: optimise io_import_iovec nonblock passing
      (no commit info)
[7/8] io_uring: clean up io_import_iovec
      (no commit info)
[8/8] io_uring: rearrange io_read()/write()
      (no commit info)

Best regards,
-- 
Jens Axboe




* Re: [PATCH 8/8] io_uring: rearrange io_read()/write()
  2021-10-14 15:10 ` [PATCH 8/8] io_uring: rearrange io_read()/write() Pavel Begunkov
@ 2021-10-16 22:52   ` Noah Goldstein
  2021-10-16 23:25     ` Pavel Begunkov
  0 siblings, 1 reply; 14+ messages in thread
From: Noah Goldstein @ 2021-10-16 22:52 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: open list:IO_URING, Jens Axboe

On Thu, Oct 14, 2021 at 10:13 AM Pavel Begunkov <[email protected]> wrote:
>
> Combine force_nonblock branches (which are already optimised by the
> compiler), flip branches so the most hot/common path comes first, e.g.
> as with the non on-stack iov setup, and add extra likely/unlikely
> annotations for error paths.
>
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>  fs/io_uring.c | 75 +++++++++++++++++++++++++--------------------------
>  1 file changed, 37 insertions(+), 38 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index f9af54b10238..8bbbe7ccad54 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -3395,7 +3395,7 @@ static bool io_rw_should_retry(struct io_kiocb *req)
>
>  static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
>  {
> -       if (req->file->f_op->read_iter)
> +       if (likely(req->file->f_op->read_iter))
>                 return call_read_iter(req->file, &req->rw.kiocb, iter);
>         else if (req->file->f_op->read)
>                 return loop_rw_iter(READ, req, iter);
> @@ -3411,14 +3411,18 @@ static bool need_read_all(struct io_kiocb *req)
>
>  static int io_read(struct io_kiocb *req, unsigned int issue_flags)
>  {
> -       struct io_rw_state __s, *s;
> +       struct io_rw_state __s, *s = &__s;
>         struct iovec *iovec;
>         struct kiocb *kiocb = &req->rw.kiocb;
>         bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
>         struct io_async_rw *rw;
>         ssize_t ret, ret2;
>
> -       if (req_has_async_data(req)) {
> +       if (!req_has_async_data(req)) {
> +               ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
> +               if (unlikely(ret < 0))
> +                       return ret;
> +       } else {
>                 rw = req->async_data;
>                 s = &rw->s;
>                 /*
> @@ -3428,24 +3432,19 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
>                  */
>                 iov_iter_restore(&s->iter, &s->iter_state);
>                 iovec = NULL;
> -       } else {
> -               s = &__s;
> -               ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
> -               if (unlikely(ret < 0))
> -                       return ret;
>         }
>         req->result = iov_iter_count(&s->iter);
>
> -       /* Ensure we clear previously set non-block flag */
> -       if (!force_nonblock)
> -               kiocb->ki_flags &= ~IOCB_NOWAIT;
> -       else
> +       if (force_nonblock) {
> +               /* If the file doesn't support async, just async punt */
> +               if (unlikely(!io_file_supports_nowait(req, READ))) {
> +                       ret = io_setup_async_rw(req, iovec, s, true);
> +                       return ret ?: -EAGAIN;
> +               }
>                 kiocb->ki_flags |= IOCB_NOWAIT;
> -
> -       /* If the file doesn't support async, just async punt */
> -       if (force_nonblock && !io_file_supports_nowait(req, READ)) {
> -               ret = io_setup_async_rw(req, iovec, s, true);
> -               return ret ?: -EAGAIN;
> +       } else {
> +               /* Ensure we clear previously set non-block flag */
> +               kiocb->ki_flags &= ~IOCB_NOWAIT;
>         }
>
>         ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
> @@ -3541,40 +3540,40 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>
>  static int io_write(struct io_kiocb *req, unsigned int issue_flags)
>  {
> -       struct io_rw_state __s, *s;
> -       struct io_async_rw *rw;
> +       struct io_rw_state __s, *s = &__s;
>         struct iovec *iovec;
>         struct kiocb *kiocb = &req->rw.kiocb;
>         bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
>         ssize_t ret, ret2;
>
> -       if (req_has_async_data(req)) {
> -               rw = req->async_data;
> -               s = &rw->s;
> -               iov_iter_restore(&s->iter, &s->iter_state);
> -               iovec = NULL;
> -       } else {
> -               s = &__s;
> +       if (!req_has_async_data(req)) {
>                 ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
>                 if (unlikely(ret < 0))
>                         return ret;
> +       } else {
> +               struct io_async_rw *rw = req->async_data;
> +
> +               s = &rw->s;
> +               iov_iter_restore(&s->iter, &s->iter_state);
> +               iovec = NULL;
>         }
>         req->result = iov_iter_count(&s->iter);
>
> -       /* Ensure we clear previously set non-block flag */
> -       if (!force_nonblock)
> -               kiocb->ki_flags &= ~IOCB_NOWAIT;
> -       else
> -               kiocb->ki_flags |= IOCB_NOWAIT;
> +       if (force_nonblock) {
> +               /* If the file doesn't support async, just async punt */
> +               if (unlikely(!io_file_supports_nowait(req, WRITE)))
> +                       goto copy_iov;
>
> -       /* If the file doesn't support async, just async punt */
> -       if (force_nonblock && !io_file_supports_nowait(req, WRITE))
> -               goto copy_iov;
> +               /* file path doesn't support NOWAIT for non-direct_IO */
> +               if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&

You can drop this 'force_nonblock' no?

> +                   (req->flags & REQ_F_ISREG))
> +                       goto copy_iov;
>
> -       /* file path doesn't support NOWAIT for non-direct_IO */
> -       if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
> -           (req->flags & REQ_F_ISREG))
> -               goto copy_iov;
> +               kiocb->ki_flags |= IOCB_NOWAIT;
> +       } else {
> +               /* Ensure we clear previously set non-block flag */
> +               kiocb->ki_flags &= ~IOCB_NOWAIT;
> +       }
>
>         ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
>         if (unlikely(ret))

...

What about swapping the order of the conditions below:
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)

The ret2 check will almost certainly be faster than 2x deref.
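
i.e., the swapped check would read roughly:

	if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))

so the ret2 compare, already in a register, short-circuits before the
two pointer dereferences.
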
> --
> 2.33.0
>


* Re: [PATCH 8/8] io_uring: rearrange io_read()/write()
  2021-10-16 22:52   ` Noah Goldstein
@ 2021-10-16 23:25     ` Pavel Begunkov
  2021-10-17  1:35       ` Noah Goldstein
  0 siblings, 1 reply; 14+ messages in thread
From: Pavel Begunkov @ 2021-10-16 23:25 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: open list:IO_URING, Jens Axboe

On 10/16/21 23:52, Noah Goldstein wrote:
> On Thu, Oct 14, 2021 at 10:13 AM Pavel Begunkov <[email protected]> wrote:
>> -       /* If the file doesn't support async, just async punt */
>> -       if (force_nonblock && !io_file_supports_nowait(req, WRITE))
>> -               goto copy_iov;
>> +               /* file path doesn't support NOWAIT for non-direct_IO */
>> +               if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
> 
> You can drop this 'force_nonblock' no?

Indeed

> 
>> +                   (req->flags & REQ_F_ISREG))
>> +                       goto copy_iov;
>>
>> -       /* file path doesn't support NOWAIT for non-direct_IO */
>> -       if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
>> -           (req->flags & REQ_F_ISREG))
>> -               goto copy_iov;
>> +               kiocb->ki_flags |= IOCB_NOWAIT;
>> +       } else {
>> +               /* Ensure we clear previously set non-block flag */
>> +               kiocb->ki_flags &= ~IOCB_NOWAIT;
>> +       }
>>
>>          ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
>>          if (unlikely(ret))
> 
> ...
> 
> What about swapping the order of the conditions below:
> if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
> 
> The ret2 check will almost certainly be faster than 2x deref.

Makes sense. Want to send a patch?

-- 
Pavel Begunkov


* Re: [PATCH 8/8] io_uring: rearrange io_read()/write()
  2021-10-16 23:25     ` Pavel Begunkov
@ 2021-10-17  1:35       ` Noah Goldstein
  0 siblings, 0 replies; 14+ messages in thread
From: Noah Goldstein @ 2021-10-17  1:35 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: open list:IO_URING, Jens Axboe

On Sat, Oct 16, 2021 at 6:26 PM Pavel Begunkov <[email protected]> wrote:
>
> On 10/16/21 23:52, Noah Goldstein wrote:
> > On Thu, Oct 14, 2021 at 10:13 AM Pavel Begunkov <[email protected]> wrote:
> >> -       /* If the file doesn't support async, just async punt */
> >> -       if (force_nonblock && !io_file_supports_nowait(req, WRITE))
> >> -               goto copy_iov;
> >> +               /* file path doesn't support NOWAIT for non-direct_IO */
> >> +               if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
> >
> > You can drop this 'force_nonblock' no?
>
> Indeed
>
> >
> >> +                   (req->flags & REQ_F_ISREG))
> >> +                       goto copy_iov;
> >>
> >> -       /* file path doesn't support NOWAIT for non-direct_IO */
> >> -       if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
> >> -           (req->flags & REQ_F_ISREG))
> >> -               goto copy_iov;
> >> +               kiocb->ki_flags |= IOCB_NOWAIT;
> >> +       } else {
> >> +               /* Ensure we clear previously set non-block flag */
> >> +               kiocb->ki_flags &= ~IOCB_NOWAIT;
> >> +       }
> >>
> >>          ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
> >>          if (unlikely(ret))
> >
> > ...
> >
> > What about swapping the order of the conditions below:
> > if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
> >
> > The ret2 check will almost certainly be faster than 2x deref.
>
> Makes sense. Want to send a patch?
Done.
>
> --
> Pavel Begunkov

As an aside regarding the reorganization of io_write()/io_read(), maybe
it's worth adding separate versions of the function for with/without
force_nonblock? It isn't something that will change during the function,
and it seems to really just add 4-5 branches (of runtime and code
complexity) to each, where factoring it out would just be +1 branch.
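
A rough sketch of that split (hypothetical names, not a real patch):

	static int io_write_nonblock(struct io_kiocb *req, unsigned int issue_flags);
	static int io_write_blocking(struct io_kiocb *req, unsigned int issue_flags);

	static int io_write(struct io_kiocb *req, unsigned int issue_flags)
	{
		/* one branch here, instead of re-testing force_nonblock inside */
		if (issue_flags & IO_URING_F_NONBLOCK)
			return io_write_nonblock(req, issue_flags);
		return io_write_blocking(req, issue_flags);
	}

Alternatively, a single __always_inline helper taking a constant bool
would let the compiler generate both specialisations from one body.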


* Re: [PATCH 4/8] io_uring: encapsulate rw state
  2021-10-14 15:10 ` [PATCH 4/8] io_uring: encapsulate rw state Pavel Begunkov
@ 2021-10-18  6:06   ` Hao Xu
  0 siblings, 0 replies; 14+ messages in thread
From: Hao Xu @ 2021-10-18  6:06 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring; +Cc: Jens Axboe

On 2021/10/14 11:10 PM, Pavel Begunkov wrote:
> Add a new struct io_rw_state storing all iov-related bits: fast iov,
> iterator and iterator state. Not much changes here; simply convert
> struct io_async_rw to use it.
> 
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>   fs/io_uring.c | 42 +++++++++++++++++++++++-------------------
>   1 file changed, 23 insertions(+), 19 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 2f193893cf1b..3447243805d9 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -693,11 +693,15 @@ struct io_async_msghdr {
>   	struct sockaddr_storage		addr;
>   };
>   
> -struct io_async_rw {
> +struct io_rw_state {
>   	struct iovec			fast_iov[UIO_FASTIOV];
> -	const struct iovec		*free_iovec;
>   	struct iov_iter			iter;
>   	struct iov_iter_state		iter_state;
> +};
> +
> +struct io_async_rw {
> +	struct io_rw_state		s;
This name seems a little bit too simple; when a reader reads the code, 
they may be confused until they see this definition.
> +	const struct iovec		*free_iovec;
>   	size_t				bytes_done;
>   	struct wait_page_queue		wpq;
>   };
> @@ -2550,7 +2554,7 @@ static bool io_resubmit_prep(struct io_kiocb *req)
>   
>   	if (!req_has_async_data(req))
>   		return !io_req_prep_async(req);
> -	iov_iter_restore(&rw->iter, &rw->iter_state);
> +	iov_iter_restore(&rw->s.iter, &rw->s.iter_state);
>   	return true;
>   }
>   
> @@ -3221,7 +3225,7 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
>   {
>   	struct io_async_rw *rw = req->async_data;
>   
> -	memcpy(&rw->iter, iter, sizeof(*iter));
> +	memcpy(&rw->s.iter, iter, sizeof(*iter));
>   	rw->free_iovec = iovec;
>   	rw->bytes_done = 0;
>   	/* can only be fixed buffers, no need to do anything */
> @@ -3230,13 +3234,13 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
>   	if (!iovec) {
>   		unsigned iov_off = 0;
>   
> -		rw->iter.iov = rw->fast_iov;
> +		rw->s.iter.iov = rw->s.fast_iov;
>   		if (iter->iov != fast_iov) {
>   			iov_off = iter->iov - fast_iov;
> -			rw->iter.iov += iov_off;
> +			rw->s.iter.iov += iov_off;
>   		}
> -		if (rw->fast_iov != fast_iov)
> -			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
> +		if (rw->s.fast_iov != fast_iov)
> +			memcpy(rw->s.fast_iov + iov_off, fast_iov + iov_off,
>   			       sizeof(struct iovec) * iter->nr_segs);
>   	} else {
>   		req->flags |= REQ_F_NEED_CLEANUP;
> @@ -3271,7 +3275,7 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
>   		io_req_map_rw(req, iovec, fast_iov, iter);
>   		iorw = req->async_data;
>   		/* we've copied and mapped the iter, ensure state is saved */
> -		iov_iter_save_state(&iorw->iter, &iorw->iter_state);
> +		iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
>   	}
>   	return 0;
>   }
> @@ -3279,10 +3283,10 @@ static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
>   static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
>   {
>   	struct io_async_rw *iorw = req->async_data;
> -	struct iovec *iov = iorw->fast_iov;
> +	struct iovec *iov = iorw->s.fast_iov;
>   	int ret;
>   
> -	ret = io_import_iovec(rw, req, &iov, &iorw->iter, false);
> +	ret = io_import_iovec(rw, req, &iov, &iorw->s.iter, false);
>   	if (unlikely(ret < 0))
>   		return ret;
>   
> @@ -3290,7 +3294,7 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
>   	iorw->free_iovec = iov;
>   	if (iov)
>   		req->flags |= REQ_F_NEED_CLEANUP;
> -	iov_iter_save_state(&iorw->iter, &iorw->iter_state);
> +	iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
>   	return 0;
>   }
>   
> @@ -3400,8 +3404,8 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
>   
>   	if (req_has_async_data(req)) {
>   		rw = req->async_data;
> -		iter = &rw->iter;
> -		state = &rw->iter_state;
> +		iter = &rw->s.iter;
> +		state = &rw->s.iter_state;
>   		/*
>   		 * We come here from an earlier attempt, restore our state to
>   		 * match in case it doesn't. It's cheap enough that we don't
> @@ -3472,9 +3476,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
>   	 * Now use our persistent iterator and state, if we aren't already.
>   	 * We've restored and mapped the iter to match.
>   	 */
> -	if (iter != &rw->iter) {
> -		iter = &rw->iter;
> -		state = &rw->iter_state;
> +	if (iter != &rw->s.iter) {
> +		iter = &rw->s.iter;
> +		state = &rw->s.iter_state;
>   	}
>   
>   	do {
> @@ -3536,8 +3540,8 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
>   
>   	if (req_has_async_data(req)) {
>   		rw = req->async_data;
> -		iter = &rw->iter;
> -		state = &rw->iter_state;
> +		iter = &rw->s.iter;
> +		state = &rw->s.iter_state;
>   		iov_iter_restore(iter, state);
>   		iovec = NULL;
>   	} else {
> 



end of thread (newest message: 2021-10-18  6:06 UTC)

Thread overview: 14+ messages
2021-10-14 15:10 [PATCH for-next 0/8] read/write cleanup Pavel Begunkov
2021-10-14 15:10 ` [PATCH 1/8] io_uring: consistent typing for issue_flags Pavel Begunkov
2021-10-14 15:10 ` [PATCH 2/8] io_uring: prioritise read success path over fails Pavel Begunkov
2021-10-14 15:10 ` [PATCH 3/8] io_uring: optimise rw completion handlers Pavel Begunkov
2021-10-14 15:10 ` [PATCH 4/8] io_uring: encapsulate rw state Pavel Begunkov
2021-10-18  6:06   ` Hao Xu
2021-10-14 15:10 ` [PATCH 5/8] io_uring: optimise read/write iov state storing Pavel Begunkov
2021-10-14 15:10 ` [PATCH 6/8] io_uring: optimise io_import_iovec nonblock passing Pavel Begunkov
2021-10-14 15:10 ` [PATCH 7/8] io_uring: clean up io_import_iovec Pavel Begunkov
2021-10-14 15:10 ` [PATCH 8/8] io_uring: rearrange io_read()/write() Pavel Begunkov
2021-10-16 22:52   ` Noah Goldstein
2021-10-16 23:25     ` Pavel Begunkov
2021-10-17  1:35       ` Noah Goldstein
2021-10-14 18:17 ` [PATCH for-next 0/8] read/write cleanup Jens Axboe
