* [PATCH 5.13 00/11] yet another series of random 5.13 patches
@ 2021-03-22 1:58 Pavel Begunkov
2021-03-22 1:58 ` [PATCH 01/11] io_uring: don't clear REQ_F_LINK_TIMEOUT Pavel Begunkov
` (12 more replies)
0 siblings, 13 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
Random improvements for the most part; patches 8-11 are about optimising
rw and rw reissue.
Pavel Begunkov (11):
io_uring: don't clear REQ_F_LINK_TIMEOUT
io_uring: don't do extra EXITING cancellations
io_uring: optimise out task_work checks on enter
io_uring: remove tctx->sqpoll
io-wq: refactor *_get_acct()
io_uring: don't init req->work fully in advance
io_uring: kill unused REQ_F_NO_FILE_TABLE
io_uring: optimise kiocb_end_write for !ISREG
io_uring: don't alter iopoll reissue fail ret code
io_uring: hide iter revert in resubmit_prep
io_uring: optimise rw complete error handling
fs/io-wq.c | 17 +++----
fs/io_uring.c | 128 +++++++++++++++++++++++++-------------------------
2 files changed, 72 insertions(+), 73 deletions(-)
--
2.24.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 01/11] io_uring: don't clear REQ_F_LINK_TIMEOUT
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 02/11] io_uring: don't do extra EXITING cancellations Pavel Begunkov
` (11 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
REQ_F_LINK_TIMEOUT is a hint to look for linked timeouts to cancel, and
we leave it set even after the timeout has fired. Hence there is no need
to clear it in io_kill_linked_timeout(); that's safe, as the function is
called only once.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 67a463f1ccdb..5cba23347092 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1786,7 +1786,6 @@ static bool io_kill_linked_timeout(struct io_kiocb *req)
__must_hold(&req->ctx->completion_lock)
{
struct io_kiocb *link = req->link;
- bool cancelled = false;
/*
* Can happen if a linked timeout fired and link had been like
@@ -1802,11 +1801,10 @@ static bool io_kill_linked_timeout(struct io_kiocb *req)
if (ret != -1) {
io_cqring_fill_event(link, -ECANCELED);
io_put_req_deferred(link, 1);
- cancelled = true;
+ return true;
}
}
- req->flags &= ~REQ_F_LINK_TIMEOUT;
- return cancelled;
+ return false;
}
static void io_fail_links(struct io_kiocb *req)
--
2.24.0
* [PATCH 02/11] io_uring: don't do extra EXITING cancellations
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
2021-03-22 1:58 ` [PATCH 01/11] io_uring: don't clear REQ_F_LINK_TIMEOUT Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 03/11] io_uring: optimise out task_work checks on enter Pavel Begunkov
` (10 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
io_match_task() matches all requests of a PF_EXITING task, even though
those may be valid requests. That was necessary for SQPOLL cancellation,
but SQPOLL now kills all its requests before exiting via
io_uring_cancel_sqpoll(), so the special case is no longer needed.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5cba23347092..080fac4d4543 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1077,12 +1077,8 @@ static bool io_match_task(struct io_kiocb *head,
{
struct io_kiocb *req;
- if (task && head->task != task) {
- /* in terms of cancelation, always match if req task is dead */
- if (head->task->flags & PF_EXITING)
- return true;
+ if (task && head->task != task)
return false;
- }
if (!files)
return true;
--
2.24.0
* [PATCH 03/11] io_uring: optimise out task_work checks on enter
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
2021-03-22 1:58 ` [PATCH 01/11] io_uring: don't clear REQ_F_LINK_TIMEOUT Pavel Begunkov
2021-03-22 1:58 ` [PATCH 02/11] io_uring: don't do extra EXITING cancellations Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 13:45 ` Jens Axboe
2021-03-22 1:58 ` [PATCH 04/11] io_uring: remove tctx->sqpoll Pavel Begunkov
` (9 subsequent siblings)
12 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
io_run_task_work() does some extra work for safety that isn't actually
needed in many cases, particularly when it's known that current is not
exiting and is in the TASK_RUNNING state, e.g. at the beginning of a
syscall.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 080fac4d4543..45b49273df8b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2227,6 +2227,16 @@ static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
return io_put_kbuf(req, kbuf);
}
+/* only safe when TASK_RUNNING and it's not PF_EXITING */
+static inline bool __io_run_task_work(void)
+{
+ if (current->task_works) {
+ task_work_run();
+ return true;
+ }
+ return false;
+}
+
static inline bool io_run_task_work(void)
{
/*
@@ -6729,7 +6739,7 @@ static int io_sq_thread(void *data)
}
if (sqt_spin || !time_after(jiffies, timeout)) {
- io_run_task_work();
+ __io_run_task_work();
cond_resched();
if (sqt_spin)
timeout = jiffies + sqd->sq_thread_idle;
@@ -6870,7 +6880,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
io_cqring_overflow_flush(ctx, false, NULL, NULL);
if (io_cqring_events(ctx) >= min_events)
return 0;
- if (!io_run_task_work())
+ if (!__io_run_task_work())
break;
} while (1);
@@ -9117,7 +9127,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
struct fd f;
long ret;
- io_run_task_work();
+ __io_run_task_work();
if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG)))
--
2.24.0
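As a rough userspace analogue of the split above (the structures and names below are illustrative stand-ins, not the kernel's task_work machinery): the fast-path helper skips the state/signal handling and only drains the pending queue, which is safe precisely when the caller already knows it is TASK_RUNNING and not exiting.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins: the real code walks current->task_works and
 * deals with signal state; here a plain linked list models the queue. */
struct fake_work {
	struct fake_work *next;
	int *counter;		/* incremented when the work "runs" */
};

static struct fake_work *pending;	/* models current->task_works */

static void run_pending(void)
{
	while (pending) {
		struct fake_work *w = pending;

		pending = w->next;
		(*w->counter)++;
	}
}

/* Fast-path helper: no TASK_RUNNING reset, no signal handling.  Only
 * safe when the caller already knows neither is needed, e.g. right at
 * syscall entry. */
static bool fast_run_task_work(void)
{
	if (pending) {
		run_pending();
		return true;
	}
	return false;
}
```

The full io_run_task_work() keeps the extra checks for callers that cannot make those guarantees.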
* [PATCH 04/11] io_uring: remove tctx->sqpoll
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (2 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 03/11] io_uring: optimise out task_work checks on enter Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 05/11] io-wq: refactor *_get_acct() Pavel Begunkov
` (8 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
struct io_uring_task::sqpoll is not used anymore; kill it.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 45b49273df8b..ff238dc503db 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -465,7 +465,6 @@ struct io_uring_task {
struct io_wq *io_wq;
struct percpu_counter inflight;
atomic_t in_idle;
- bool sqpoll;
spinlock_t task_lock;
struct io_wq_work_list task_list;
--
2.24.0
* [PATCH 05/11] io-wq: refactor *_get_acct()
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (3 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 04/11] io_uring: remove tctx->sqpoll Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 06/11] io_uring: don't init req->work fully in advance Pavel Begunkov
` (7 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
Extract a helper for io_work_get_acct() and io_wqe_get_acct() to avoid
duplication.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io-wq.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/fs/io-wq.c b/fs/io-wq.c
index e05f996d088f..ba4e0ece3894 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -148,23 +148,20 @@ static void io_worker_release(struct io_worker *worker)
complete(&worker->ref_done);
}
+static inline struct io_wqe_acct *io_get_acct(struct io_wqe *wqe, bool bound)
+{
+ return &wqe->acct[bound ? IO_WQ_ACCT_BOUND : IO_WQ_ACCT_UNBOUND];
+}
+
static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
struct io_wq_work *work)
{
- if (work->flags & IO_WQ_WORK_UNBOUND)
- return &wqe->acct[IO_WQ_ACCT_UNBOUND];
-
- return &wqe->acct[IO_WQ_ACCT_BOUND];
+ return io_get_acct(wqe, !(work->flags & IO_WQ_WORK_UNBOUND));
}
static inline struct io_wqe_acct *io_wqe_get_acct(struct io_worker *worker)
{
- struct io_wqe *wqe = worker->wqe;
-
- if (worker->flags & IO_WORKER_F_BOUND)
- return &wqe->acct[IO_WQ_ACCT_BOUND];
-
- return &wqe->acct[IO_WQ_ACCT_UNBOUND];
+ return io_get_acct(worker->wqe, worker->flags & IO_WORKER_F_BOUND);
}
static void io_worker_exit(struct io_worker *worker)
--
2.24.0
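For readers outside the tree, the shape of this refactoring can be sketched in standalone C (the struct layout below is a simplified stand-in, not the real io-wq definitions): both old helpers reduce to one array lookup keyed by a bool.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the io-wq accounting types. */
enum { ACCT_BOUND, ACCT_UNBOUND, ACCT_NR };

struct fake_acct {
	int nr_workers;
};

struct fake_wqe {
	struct fake_acct acct[ACCT_NR];
};

/* The extracted common helper: one bool picks the accounting slot. */
static struct fake_acct *get_acct(struct fake_wqe *wqe, bool bound)
{
	return &wqe->acct[bound ? ACCT_BOUND : ACCT_UNBOUND];
}
```

The callers then become one-liners: the work-based variant passes the negated "unbound" flag test, and the worker-based one passes its "bound" flag test directly.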
* [PATCH 06/11] io_uring: don't init req->work fully in advance
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (4 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 05/11] io-wq: refactor *_get_acct() Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 07/11] io_uring: kill unused REQ_F_NO_FILE_TABLE Pavel Begunkov
` (6 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
req->work is mostly unused unless the request is punted, and io_init_req()
is too hot a path to fully initialise it. Fortunately, we can skip
initialising work.next as it's controlled by io-wq, and we can avoid
touching work.flags by moving everything related into
io_prep_async_work(). The only field left is req->work.creds; nothing can
be done about it, so keep maintaining it.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ff238dc503db..8609ca962dea 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1199,6 +1199,8 @@ static void io_prep_async_work(struct io_kiocb *req)
if (!req->work.creds)
req->work.creds = get_current_cred();
+ req->work.list.next = NULL;
+ req->work.flags = 0;
if (req->flags & REQ_F_FORCE_ASYNC)
req->work.flags |= IO_WQ_WORK_CONCURRENT;
@@ -1209,6 +1211,18 @@ static void io_prep_async_work(struct io_kiocb *req)
if (def->unbound_nonreg_file)
req->work.flags |= IO_WQ_WORK_UNBOUND;
}
+
+ switch (req->opcode) {
+ case IORING_OP_SPLICE:
+ case IORING_OP_TEE:
+ /*
+ * Splice operation will be punted aync, and here need to
+ * modify io_wq_work.flags, so initialize io_wq_work firstly.
+ */
+ if (!S_ISREG(file_inode(req->splice.file_in)->i_mode))
+ req->work.flags |= IO_WQ_WORK_UNBOUND;
+ break;
+ }
}
static void io_prep_async_link(struct io_kiocb *req)
@@ -3601,15 +3615,6 @@ static int __io_splice_prep(struct io_kiocb *req,
if (!sp->file_in)
return -EBADF;
req->flags |= REQ_F_NEED_CLEANUP;
-
- if (!S_ISREG(file_inode(sp->file_in)->i_mode)) {
- /*
- * Splice operation will be punted aync, and here need to
- * modify io_wq_work.flags, so initialize io_wq_work firstly.
- */
- req->work.flags |= IO_WQ_WORK_UNBOUND;
- }
-
return 0;
}
@@ -6381,9 +6386,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
atomic_set(&req->refs, 2);
req->task = current;
req->result = 0;
- req->work.list.next = NULL;
req->work.creds = NULL;
- req->work.flags = 0;
/* enforce forwards compatibility on users */
if (unlikely(sqe_flags & ~SQE_VALID_FLAGS)) {
--
2.24.0
* [PATCH 07/11] io_uring: kill unused REQ_F_NO_FILE_TABLE
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (5 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 06/11] io_uring: don't init req->work fully in advance Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 08/11] io_uring: optimise kiocb_end_write for !ISREG Pavel Begunkov
` (5 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
current->files is always valid now, even for io-wq threads, so kill the
no longer used REQ_F_NO_FILE_TABLE.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8609ca962dea..b666453bc479 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -694,7 +694,6 @@ enum {
REQ_F_NEED_CLEANUP_BIT,
REQ_F_POLLED_BIT,
REQ_F_BUFFER_SELECTED_BIT,
- REQ_F_NO_FILE_TABLE_BIT,
REQ_F_LTIMEOUT_ACTIVE_BIT,
REQ_F_COMPLETE_INLINE_BIT,
/* keep async read/write and isreg together and in order */
@@ -736,8 +735,6 @@ enum {
REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),
/* buffer already selected */
REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
- /* doesn't need file table for this request */
- REQ_F_NO_FILE_TABLE = BIT(REQ_F_NO_FILE_TABLE_BIT),
/* linked timeout is active, i.e. prepared by link's head */
REQ_F_LTIMEOUT_ACTIVE = BIT(REQ_F_LTIMEOUT_ACTIVE_BIT),
/* completion is deferred through io_comp_state */
@@ -4188,12 +4185,8 @@ static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
struct io_statx *ctx = &req->statx;
int ret;
- if (issue_flags & IO_URING_F_NONBLOCK) {
- /* only need file table for an actual valid fd */
- if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)
- req->flags |= REQ_F_NO_FILE_TABLE;
+ if (issue_flags & IO_URING_F_NONBLOCK)
return -EAGAIN;
- }
ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
ctx->buffer);
--
2.24.0
* [PATCH 08/11] io_uring: optimise kiocb_end_write for !ISREG
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (6 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 07/11] io_uring: kill unused REQ_F_NO_FILE_TABLE Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 09/11] io_uring: don't alter iopoll reissue fail ret code Pavel Begunkov
` (4 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
file_end_write() is only for regular files, so the function does a couple
of dereferences to get the inode and check for it. However, we already
have REQ_F_ISREG at hand; just use it and inline file_end_write().
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b666453bc479..c81f2db0fee5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2461,11 +2461,11 @@ static void kiocb_end_write(struct io_kiocb *req)
* thread.
*/
if (req->flags & REQ_F_ISREG) {
- struct inode *inode = file_inode(req->file);
+ struct super_block *sb = file_inode(req->file)->i_sb;
- __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
+ __sb_writers_acquired(sb, SB_FREEZE_WRITE);
+ sb_end_write(sb);
}
- file_end_write(req->file);
}
#ifdef CONFIG_BLOCK
--
2.24.0
* [PATCH 09/11] io_uring: don't alter iopoll reissue fail ret code
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (7 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 08/11] io_uring: optimise kiocb_end_write for !ISREG Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 10/11] io_uring: hide iter revert in resubmit_prep Pavel Begunkov
` (3 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
When io_resubmit_prep() fails in io_complete_rw_iopoll(), we change the
return code to -EIO to prevent io_iopoll_complete() from doing
resubmission. Instead, mark such requests with a new flag (i.e.
REQ_F_DONT_REISSUE) and retain the original return value.
This also removes the io_rw_reissue() call from io_iopoll_complete(),
which will be exploited later in the series.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c81f2db0fee5..cccd5fd582f2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -696,6 +696,7 @@ enum {
REQ_F_BUFFER_SELECTED_BIT,
REQ_F_LTIMEOUT_ACTIVE_BIT,
REQ_F_COMPLETE_INLINE_BIT,
+ REQ_F_DONT_REISSUE_BIT,
/* keep async read/write and isreg together and in order */
REQ_F_ASYNC_READ_BIT,
REQ_F_ASYNC_WRITE_BIT,
@@ -739,6 +740,8 @@ enum {
REQ_F_LTIMEOUT_ACTIVE = BIT(REQ_F_LTIMEOUT_ACTIVE_BIT),
/* completion is deferred through io_comp_state */
REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
+ /* don't attempt request reissue, see io_rw_reissue() */
+ REQ_F_DONT_REISSUE = BIT(REQ_F_DONT_REISSUE_BIT),
/* supports async reads */
REQ_F_ASYNC_READ = BIT(REQ_F_ASYNC_READ_BIT),
/* supports async writes */
@@ -1014,7 +1017,6 @@ static struct fixed_rsrc_ref_node *alloc_fixed_rsrc_ref_node(
struct io_ring_ctx *ctx);
static void io_ring_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
-static bool io_rw_reissue(struct io_kiocb *req);
static void io_cqring_fill_event(struct io_kiocb *req, long res);
static void io_put_req(struct io_kiocb *req);
static void io_put_req_deferred(struct io_kiocb *req, int nr);
@@ -2283,10 +2285,12 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
req = list_first_entry(done, struct io_kiocb, inflight_entry);
list_del(&req->inflight_entry);
- if (READ_ONCE(req->result) == -EAGAIN) {
+ if (READ_ONCE(req->result) == -EAGAIN &&
+ !(req->flags & REQ_F_DONT_REISSUE)) {
req->iopoll_completed = 0;
- if (io_rw_reissue(req))
- continue;
+ req_ref_get(req);
+ io_queue_async_work(req);
+ continue;
}
if (req->flags & REQ_F_BUFFER_SELECTED)
@@ -2550,15 +2554,17 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
iov_iter_revert(&rw->iter,
req->result - iov_iter_count(&rw->iter));
else if (!io_resubmit_prep(req))
- res = -EIO;
+ req->flags |= REQ_F_DONT_REISSUE;
}
#endif
if (kiocb->ki_flags & IOCB_WRITE)
kiocb_end_write(req);
- if (res != -EAGAIN && res != req->result)
+ if (res != -EAGAIN && res != req->result) {
+ req->flags |= REQ_F_DONT_REISSUE;
req_set_fail_links(req);
+ }
WRITE_ONCE(req->result, res);
/* order with io_iopoll_complete() checking ->result */
--
2.24.0
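The behavioural change can be modelled in a few lines of userspace C (the flag value and helpers below are illustrative, not the kernel's): the completion side keeps the original result and records the decision in a flag bit, and the reaping side consults that bit instead of a sentinel -EIO.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define BIT(n)	(1U << (n))

/* Illustrative flag in the style of the REQ_F_* enums; the bit number
 * is a stand-in, not the kernel's value. */
enum { F_DONT_REISSUE = BIT(0) };

struct fake_req {
	unsigned int flags;
	int result;
};

/* Completion side: an unexpected result no longer overwrites the code
 * with -EIO; it sets the flag and keeps the original value. */
static void complete_iopoll(struct fake_req *req, int res, int expected)
{
	if (res != -EAGAIN && res != expected)
		req->flags |= F_DONT_REISSUE;
	req->result = res;
}

/* Reaping side: resubmit only on -EAGAIN without the flag set. */
static bool should_reissue(const struct fake_req *req)
{
	return req->result == -EAGAIN && !(req->flags & F_DONT_REISSUE);
}
```

With this shape, userspace sees the real error code while the iopoll reaper still knows not to retry the request.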
* [PATCH 10/11] io_uring: hide iter revert in resubmit_prep
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (8 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 09/11] io_uring: don't alter iopoll reissue fail ret code Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 1:58 ` [PATCH 11/11] io_uring: optimise rw complete error handling Pavel Begunkov
` (2 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
Move the iov_iter_revert() that resets the iterator in the -EIOCBQUEUED
case into io_resubmit_prep(), so we don't do the heavy revert in the hot
path; it also saves a couple of checks.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index cccd5fd582f2..775139e9d00f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2475,8 +2475,13 @@ static void kiocb_end_write(struct io_kiocb *req)
#ifdef CONFIG_BLOCK
static bool io_resubmit_prep(struct io_kiocb *req)
{
- /* either already prepared or successfully done */
- return req->async_data || !io_req_prep_async(req);
+ struct io_async_rw *rw = req->async_data;
+
+ if (!rw)
+ return !io_req_prep_async(req);
+ /* may have left rw->iter inconsistent on -EIOCBQUEUED */
+ iov_iter_revert(&rw->iter, req->result - iov_iter_count(&rw->iter));
+ return true;
}
static bool io_rw_should_reissue(struct io_kiocb *req)
@@ -2546,14 +2551,8 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
#ifdef CONFIG_BLOCK
- /* Rewind iter, if we have one. iopoll path resubmits as usual */
if (res == -EAGAIN && io_rw_should_reissue(req)) {
- struct io_async_rw *rw = req->async_data;
-
- if (rw)
- iov_iter_revert(&rw->iter,
- req->result - iov_iter_count(&rw->iter));
- else if (!io_resubmit_prep(req))
+ if (!io_resubmit_prep(req))
req->flags |= REQ_F_DONT_REISSUE;
}
#endif
@@ -3310,8 +3309,6 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
ret = io_iter_do_read(req, iter);
if (ret == -EIOCBQUEUED) {
- if (req->async_data)
- iov_iter_revert(iter, io_size - iov_iter_count(iter));
goto out_free;
} else if (ret == -EAGAIN) {
/* IOPOLL retry should happen for io-wq threads */
@@ -3444,8 +3441,6 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
/* no retry on NONBLOCK nor RWF_NOWAIT */
if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
goto done;
- if (ret2 == -EIOCBQUEUED && req->async_data)
- iov_iter_revert(iter, io_size - iov_iter_count(iter));
if (!force_nonblock || ret2 != -EAGAIN) {
/* IOPOLL retry should happen for io-wq threads */
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
--
2.24.0
* [PATCH 11/11] io_uring: optimise rw complete error handling
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (9 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 10/11] io_uring: hide iter revert in resubmit_prep Pavel Begunkov
@ 2021-03-22 1:58 ` Pavel Begunkov
2021-03-22 2:00 ` [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
2021-03-22 13:49 ` Jens Axboe
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 1:58 UTC (permalink / raw)
To: Jens Axboe, io-uring
Expect read/write to succeed, and create a hot path for this case; in
particular, hide all of the error handling and resubmission behind a
single check of the result against the expected value.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 775139e9d00f..481da674c9bf 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2530,10 +2530,11 @@ static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
if (req->rw.kiocb.ki_flags & IOCB_WRITE)
kiocb_end_write(req);
- if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_reissue(req))
- return;
- if (res != req->result)
+ if (unlikely(res != req->result)) {
+ if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_reissue(req))
+ return;
req_set_fail_links(req);
+ }
if (req->flags & REQ_F_BUFFER_SELECTED)
cflags = io_put_rw_kbuf(req);
__io_req_complete(req, issue_flags, res, cflags);
@@ -2550,19 +2551,20 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
{
struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
-#ifdef CONFIG_BLOCK
- if (res == -EAGAIN && io_rw_should_reissue(req)) {
- if (!io_resubmit_prep(req))
- req->flags |= REQ_F_DONT_REISSUE;
- }
-#endif
-
if (kiocb->ki_flags & IOCB_WRITE)
kiocb_end_write(req);
+ if (unlikely(res != req->result)) {
+ bool fail = true;
- if (res != -EAGAIN && res != req->result) {
- req->flags |= REQ_F_DONT_REISSUE;
- req_set_fail_links(req);
+#ifdef CONFIG_BLOCK
+ if (res == -EAGAIN && io_rw_should_reissue(req) &&
+ io_resubmit_prep(req))
+ fail = false;
+#endif
+ if (fail) {
+ req_set_fail_links(req);
+ req->flags |= REQ_F_DONT_REISSUE;
+ }
}
WRITE_ONCE(req->result, res);
--
2.24.0
* Re: [PATCH 5.13 00/11] yet another series of random 5.13 patches
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (10 preceding siblings ...)
2021-03-22 1:58 ` [PATCH 11/11] io_uring: optimise rw complete error handling Pavel Begunkov
@ 2021-03-22 2:00 ` Pavel Begunkov
2021-03-22 13:49 ` Jens Axboe
12 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 2:00 UTC (permalink / raw)
To: Jens Axboe, io-uring
On 22/03/2021 01:58, Pavel Begunkov wrote:
> Random improvements for the most part; patches 8-11 are about optimising
> rw and rw reissue.
note: this depends on the just-sent "io_uring: don't skip
file_end_write() on reissue" patch for 5.12
>
> Pavel Begunkov (11):
> io_uring: don't clear REQ_F_LINK_TIMEOUT
> io_uring: don't do extra EXITING cancellations
> io_uring: optimise out task_work checks on enter
> io_uring: remove tctx->sqpoll
> io-wq: refactor *_get_acct()
> io_uring: don't init req->work fully in advance
> io_uring: kill unused REQ_F_NO_FILE_TABLE
> io_uring: optimise kiocb_end_write for !ISREG
> io_uring: don't alter iopoll reissue fail ret code
> io_uring: hide iter revert in resubmit_prep
> io_uring: optimise rw complete error handling
>
> fs/io-wq.c | 17 +++----
> fs/io_uring.c | 128 +++++++++++++++++++++++++-------------------------
> 2 files changed, 72 insertions(+), 73 deletions(-)
>
--
Pavel Begunkov
* Re: [PATCH 03/11] io_uring: optimise out task_work checks on enter
2021-03-22 1:58 ` [PATCH 03/11] io_uring: optimise out task_work checks on enter Pavel Begunkov
@ 2021-03-22 13:45 ` Jens Axboe
2021-03-22 14:53 ` Pavel Begunkov
0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2021-03-22 13:45 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 3/21/21 7:58 PM, Pavel Begunkov wrote:
> io_run_task_work() does some extra work for safety that isn't actually
> needed in many cases, particularly when it's known that current is not
> exiting and is in the TASK_RUNNING state, e.g. at the beginning of a
> syscall.
Is this really worth it?
--
Jens Axboe
* Re: [PATCH 5.13 00/11] yet another series of random 5.13 patches
2021-03-22 1:58 [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
` (11 preceding siblings ...)
2021-03-22 2:00 ` [PATCH 5.13 00/11] yet another series of random 5.13 patches Pavel Begunkov
@ 2021-03-22 13:49 ` Jens Axboe
12 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2021-03-22 13:49 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 3/21/21 7:58 PM, Pavel Begunkov wrote:
> Random improvements for the most part; patches 8-11 are about optimising
> rw and rw reissue.
>
> Pavel Begunkov (11):
> io_uring: don't clear REQ_F_LINK_TIMEOUT
> io_uring: don't do extra EXITING cancellations
> io_uring: optimise out task_work checks on enter
> io_uring: remove tctx->sqpoll
> io-wq: refactor *_get_acct()
> io_uring: don't init req->work fully in advance
> io_uring: kill unused REQ_F_NO_FILE_TABLE
> io_uring: optimise kiocb_end_write for !ISREG
> io_uring: don't alter iopoll reissue fail ret code
> io_uring: hide iter revert in resubmit_prep
> io_uring: optimise rw complete error handling
>
> fs/io-wq.c | 17 +++----
> fs/io_uring.c | 128 +++++++++++++++++++++++++-------------------------
> 2 files changed, 72 insertions(+), 73 deletions(-)
Applied - apart from 3/11, which I think is a bit silly.
--
Jens Axboe
* Re: [PATCH 03/11] io_uring: optimise out task_work checks on enter
2021-03-22 13:45 ` Jens Axboe
@ 2021-03-22 14:53 ` Pavel Begunkov
2021-03-22 14:58 ` Jens Axboe
0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2021-03-22 14:53 UTC (permalink / raw)
To: Jens Axboe, io-uring
On 22/03/2021 13:45, Jens Axboe wrote:
> On 3/21/21 7:58 PM, Pavel Begunkov wrote:
>> io_run_task_work() does some extra work for safety that isn't actually
>> needed in many cases, particularly when it's known that current is not
>> exiting and is in the TASK_RUNNING state, e.g. at the beginning of a
>> syscall.
>
> Is this really worth it?
Not on its own, but ifs tend to pile up, and I'd argue that checking
PF_EXITING at the beginning of a syscall may be confusing. Are you afraid
it will get out of sync over time?
--
Pavel Begunkov
* Re: [PATCH 03/11] io_uring: optimise out task_work checks on enter
2021-03-22 14:53 ` Pavel Begunkov
@ 2021-03-22 14:58 ` Jens Axboe
0 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2021-03-22 14:58 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 3/22/21 8:53 AM, Pavel Begunkov wrote:
> On 22/03/2021 13:45, Jens Axboe wrote:
>> On 3/21/21 7:58 PM, Pavel Begunkov wrote:
>>> io_run_task_work() does some extra work for safety that isn't actually
>>> needed in many cases, particularly when it's known that current is not
>>> exiting and is in the TASK_RUNNING state, e.g. at the beginning of a
>>> syscall.
>>
>> Is this really worth it?
>
> Not alone, but ifs tend to pile up, and I may argue that PF_EXITING in
> the beginning of a syscall may be confusing. Are you afraid it will get
> out of sync with time?
Not really worried about it; I just don't like that we end up with two
functions for running the task_work, where one has a check that the other
does not.
--
Jens Axboe