* [PATCH 1/6] io_uring: fix flush req->refs underflow
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
In io_uring_cancel_files(), refcount_sub_and_test() already brings req->refs
down to 0, yet the code then calls io_put_req(), which drops one more
reference and underflows the refcount. Call io_free_req() instead.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index bf75ac753b9d..42b5603ee410 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7534,7 +7534,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
* all we had, then we're done with this request.
*/
if (refcount_sub_and_test(2, &cancel_req->refs)) {
- io_put_req(cancel_req);
+ io_free_req(cancel_req);
finish_wait(&ctx->inflight_wait, &wait);
continue;
}
--
2.24.0
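
To make the reference-counting mistake concrete, here is a minimal userspace C
sketch of the same pattern (plain C11 atomics stand in for the kernel's
refcount_t; obj_new(), obj_put() and ref_sub_and_test() are names invented for
the example, not io_uring code):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_int refs;
};

/* Drop "n" references; returns 1 when the count has just hit zero. */
static int ref_sub_and_test(struct obj *o, int n)
{
	return atomic_fetch_sub(&o->refs, n) == n;
}

static void obj_free(struct obj *o)
{
	printf("freeing %p\n", (void *)o);
	free(o);
}

/* Analogue of io_put_req(): drops one more ref and frees at zero. */
static void obj_put(struct obj *o)
{
	if (ref_sub_and_test(o, 1))
		obj_free(o);
}

static struct obj *obj_new(int refs)
{
	struct obj *o = malloc(sizeof(*o));

	if (o)
		atomic_init(&o->refs, refs);
	return o;
}

int main(void)
{
	struct obj *a = obj_new(1);
	struct obj *b = obj_new(2);

	if (!a || !b)
		return 1;

	obj_put(a);	/* normal path: put the only reference */

	if (ref_sub_and_test(b, 2)) {
		/*
		 * refs is already zero here; calling obj_put(b) would
		 * subtract once more and underflow.  Freeing directly is
		 * the equivalent of the io_put_req() -> io_free_req() change.
		 */
		obj_free(b);
	}
	return 0;
}

The takeaway mirrors the one-line change above: once a sub-and-test has
observed the count reaching zero, the only correct follow-up is freeing the
object directly, never another put.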
* [PATCH 2/6] io_uring: simplify io_timeout locking
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
Take spin_lock_irq() earlier so that io_timeout() has only a single call site
of it. That makes the control flow easier to follow.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 42b5603ee410..e30fc17dd268 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4845,6 +4845,7 @@ static int io_timeout(struct io_kiocb *req)
u32 seq = req->sequence;
data = &req->io->timeout;
+ spin_lock_irq(&ctx->completion_lock);
/*
* sqe->off holds how many events that need to occur for this
@@ -4853,7 +4854,6 @@ static int io_timeout(struct io_kiocb *req)
*/
if (!count) {
req->flags |= REQ_F_TIMEOUT_NOSEQ;
- spin_lock_irq(&ctx->completion_lock);
entry = ctx->timeout_list.prev;
goto add;
}
@@ -4864,7 +4864,6 @@ static int io_timeout(struct io_kiocb *req)
* Insertion sort, ensuring the first entry in the list is always
* the one we need first.
*/
- spin_lock_irq(&ctx->completion_lock);
list_for_each_prev(entry, &ctx->timeout_list) {
struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
unsigned nxt_seq;
--
2.24.0
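
For illustration, a userspace sketch of the restructuring under the assumption
that a pthread mutex stands in for ctx->completion_lock; the function names are
made up for the example and nothing here is io_uring code:

#include <pthread.h>
#include <stdio.h>

#define MAX_TIMEOUTS 16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;	/* ~ ctx->completion_lock */
static unsigned int timeouts[MAX_TIMEOUTS];			/* ~ ctx->timeout_list */
static int ntimeouts;						/* no bounds checks, purely illustrative */

/* Shape before the patch: each branch takes the lock at its own site. */
static void add_timeout_before(unsigned int count, unsigned int seq)
{
	if (!count) {
		pthread_mutex_lock(&lock);
		timeouts[ntimeouts++] = 0;		/* "no sequence": append at the tail */
		goto unlock;
	}
	pthread_mutex_lock(&lock);
	timeouts[ntimeouts++] = seq + count;		/* sorted insert in the real code */
unlock:
	pthread_mutex_unlock(&lock);
}

/* Shape after the patch: one lock call site covers both branches. */
static void add_timeout_after(unsigned int count, unsigned int seq)
{
	pthread_mutex_lock(&lock);
	if (!count)
		timeouts[ntimeouts++] = 0;		/* "no sequence": append at the tail */
	else
		timeouts[ntimeouts++] = seq + count;	/* sorted insert in the real code */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	add_timeout_before(0, 1);
	add_timeout_after(3, 1);
	printf("%d timeouts queued\n", ntimeouts);
	return 0;
}

Beyond saving a line, a single lock site keeps the locked region obvious and
makes it harder for a later change to add a branch that misses the lock.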
* [PATCH 3/6] io_uring: don't re-read sqe->off in timeout_prep()
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
SQEs are writable by userspace, so don't read sqe->off twice in
io_timeout_prep(): a concurrent write could make the value that was validated
differ from the value that gets stored. Read it once into a local variable.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e30fc17dd268..067ebdeb1ba4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4803,18 +4803,19 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
{
struct io_timeout_data *data;
unsigned flags;
+ u32 off = READ_ONCE(sqe->off);
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
return -EINVAL;
- if (sqe->off && is_timeout_link)
+ if (off && is_timeout_link)
return -EINVAL;
flags = READ_ONCE(sqe->timeout_flags);
if (flags & ~IORING_TIMEOUT_ABS)
return -EINVAL;
- req->timeout.count = READ_ONCE(sqe->off);
+ req->timeout.count = off;
if (!req->io && io_alloc_async_ctx(req))
return -ENOMEM;
--
2.24.0
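
A small userspace analogue of the single-read rule; an _Atomic load stands in
for READ_ONCE(), and struct shared_sqe / timeout_prep() are invented for the
sketch:

#include <stdatomic.h>
#include <stdio.h>

/*
 * The "off" field lives in memory that another writer (in the kernel case,
 * userspace via the shared SQ ring) may rewrite at any moment.
 */
struct shared_sqe {
	_Atomic unsigned int off;
};

struct timeout_req {
	unsigned int count;
};

static int timeout_prep(struct timeout_req *req, struct shared_sqe *sqe,
			int is_timeout_link)
{
	/* Read the shared field exactly once into a local ... */
	unsigned int off = atomic_load_explicit(&sqe->off, memory_order_relaxed);

	/* ... so the value that is validated here ... */
	if (off && is_timeout_link)
		return -1;

	/* ... is guaranteed to be the value stored and used later. */
	req->count = off;
	return 0;
}

int main(void)
{
	struct shared_sqe sqe;
	struct timeout_req req;

	atomic_init(&sqe.off, 0);
	if (!timeout_prep(&req, &sqe, 1))
		printf("count = %u\n", req.count);
	return 0;
}

With two separate reads, the writer could change off between the validation
and the assignment, so the code would validate one value and act on another.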
* [PATCH 4/6] io_uring: separate DRAIN flushing into a cold path
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
The io_commit_cqring() assembly doesn't look good with the extra code for
handling drained requests inlined. IOSQE_IO_DRAIN is slow and its use in the
hot path is discouraged, so minimise its impact by moving the flushing into a
helper and keeping only a fast emptiness check inline.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 067ebdeb1ba4..acf6ce9eee68 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -982,19 +982,6 @@ static inline bool req_need_defer(struct io_kiocb *req)
return false;
}
-static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
-{
- struct io_kiocb *req;
-
- req = list_first_entry_or_null(&ctx->defer_list, struct io_kiocb, list);
- if (req && !req_need_defer(req)) {
- list_del_init(&req->list);
- return req;
- }
-
- return NULL;
-}
-
static struct io_kiocb *io_get_timeout_req(struct io_ring_ctx *ctx)
{
struct io_kiocb *req;
@@ -1127,6 +1114,19 @@ static void io_kill_timeouts(struct io_ring_ctx *ctx)
spin_unlock_irq(&ctx->completion_lock);
}
+static void __io_queue_deferred(struct io_ring_ctx *ctx)
+{
+ do {
+ struct io_kiocb *req = list_first_entry(&ctx->defer_list,
+ struct io_kiocb, list);
+
+ if (req_need_defer(req))
+ break;
+ list_del_init(&req->list);
+ io_queue_async_work(req);
+ } while (!list_empty(&ctx->defer_list));
+}
+
static void io_commit_cqring(struct io_ring_ctx *ctx)
{
struct io_kiocb *req;
@@ -1136,8 +1136,8 @@ static void io_commit_cqring(struct io_ring_ctx *ctx)
__io_commit_cqring(ctx);
- while ((req = io_get_deferred_req(ctx)) != NULL)
- io_queue_async_work(req);
+ if (unlikely(!list_empty(&ctx->defer_list)))
+ __io_queue_deferred(ctx);
}
static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
--
2.24.0
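
A userspace sketch of the hot/cold split, with a plain array standing in for
ctx->defer_list and all names invented for the example:

#include <stdio.h>

#define MAX_DEFERRED 8

static int defer_list[MAX_DEFERRED];	/* ~ ctx->defer_list */
static int ndefer;

/* Cold helper: only ever called when deferred entries actually exist. */
static void queue_deferred(void)
{
	do {
		printf("kicking deferred request %d\n", defer_list[--ndefer]);
	} while (ndefer);
}

/* Hot path: pays only for an emptiness check in the common case. */
static void commit_cqring(void)
{
	/* ... the usual CQ ring commit work would go here ... */

	if (ndefer != 0)	/* the unlikely() branch in the patch */
		queue_deferred();
}

int main(void)
{
	commit_cqring();		/* empty list: helper never runs */

	defer_list[ndefer++] = 1;
	defer_list[ndefer++] = 2;
	commit_cqring();		/* drains both deferred entries */
	return 0;
}

Keeping the cold work in its own function, combined with the unlikely() hint in
the patch, leaves the common case with nothing but a cheap emptiness test.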
* [PATCH 5/6] io_uring: get rid of manual punting in io_close
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
io_close() was manually punting to async just to avoid grabbing the file
table. Set REQ_F_NO_FILE_TABLE instead and send the request down the generic
path by returning -EAGAIN.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index acf6ce9eee68..ac1aa25f4a55 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3492,25 +3492,15 @@ static int io_close(struct io_kiocb *req, bool force_nonblock)
req->close.put_file = NULL;
ret = __close_fd_get_file(req->close.fd, &req->close.put_file);
- if (ret < 0) {
- if (ret == -ENOENT)
- ret = -EBADF;
- return ret;
- }
+ if (ret < 0)
+ return (ret == -ENOENT) ? -EBADF : ret;
/* if the file has a flush method, be safe and punt to async */
if (req->close.put_file->f_op->flush && force_nonblock) {
- /* submission ref will be dropped, take it for async */
- refcount_inc(&req->refs);
-
+ /* avoid grabbing files - we don't need the files */
+ req->flags |= REQ_F_NO_FILE_TABLE | REQ_F_MUST_PUNT;
req->work.func = io_close_finish;
- /*
- * Do manual async queue here to avoid grabbing files - we don't
- * need the files, and it'll cause io_close_finish() to close
- * the file again and cause a double CQE entry for this request
- */
- io_queue_async_work(req);
- return 0;
+ return -EAGAIN;
}
/*
--
2.24.0
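
An illustrative userspace sketch of the resulting control flow; the flag names
and helpers are invented and only loosely mirror REQ_F_MUST_PUNT /
REQ_F_NO_FILE_TABLE:

#include <errno.h>
#include <stdio.h>

#define REQ_MUST_PUNT		0x1	/* made-up flag values for the sketch */
#define REQ_NO_FILE_TABLE	0x2

struct fake_req {
	unsigned int flags;
};

/* The handler reports "would block" instead of queueing async work itself. */
static int op_close(struct fake_req *req, int force_nonblock)
{
	if (force_nonblock) {
		req->flags |= REQ_NO_FILE_TABLE | REQ_MUST_PUNT;
		return -EAGAIN;
	}
	printf("closed inline\n");
	return 0;
}

/* The one generic place that decides how a request goes async. */
static void submit(struct fake_req *req)
{
	int ret = op_close(req, 1);

	if (ret == -EAGAIN)
		printf("generic path punts to the async worker (flags=%#x)\n",
		       req->flags);
}

int main(void)
{
	struct fake_req req = { 0 };

	submit(&req);
	return 0;
}

Centralising the punt decision means the opcode handler never queues async
work itself, so a single place stays responsible for how the request goes
async.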
* [PATCH 6/6] io_uring: let io_req_aux_free() handle fixed files
From: Pavel Begunkov @ 2020-05-26 17:34 UTC
To: Jens Axboe, io-uring, linux-kernel
Remove the duplicated code that puts fixed files in io_free_req_many();
__io_req_aux_free() does the same thing, so let it handle them.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ac1aa25f4a55..a3dbd5f40391 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1413,10 +1413,6 @@ static void io_free_req_many(struct io_ring_ctx *ctx, struct req_batch *rb)
for (i = 0; i < rb->to_free; i++) {
struct io_kiocb *req = rb->reqs[i];
- if (req->flags & REQ_F_FIXED_FILE) {
- req->file = NULL;
- percpu_ref_put(req->fixed_file_refs);
- }
if (req->flags & REQ_F_INFLIGHT)
inflight++;
__io_req_aux_free(req);
--
2.24.0
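
A userspace sketch of the deduplication the patch aims for, with invented
names rather than the real kernel structures:

#include <stdio.h>

#define REQ_FIXED_FILE	0x1	/* made-up flag values for the sketch */
#define REQ_INFLIGHT	0x2

struct fake_req {
	unsigned int flags;
};

/* The single owner of per-request teardown, like __io_req_aux_free(). */
static void req_aux_free(struct fake_req *req)
{
	if (req->flags & REQ_FIXED_FILE)
		printf("dropping fixed file reference\n");
}

/*
 * Batch free: no longer repeats the fixed-file handling before calling the
 * helper, which is what the patch removes.
 */
static void free_req_many(struct fake_req *reqs, int n)
{
	int inflight = 0;

	for (int i = 0; i < n; i++) {
		if (reqs[i].flags & REQ_INFLIGHT)
			inflight++;
		req_aux_free(&reqs[i]);
	}
	printf("%d requests were inflight\n", inflight);
}

int main(void)
{
	struct fake_req reqs[] = {
		{ .flags = REQ_FIXED_FILE },
		{ .flags = REQ_INFLIGHT },
	};

	free_req_many(reqs, 2);
	return 0;
}

Keeping the flag-specific teardown in one helper means any future per-request
resource only needs to be handled in a single place.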
* Re: [PATCH 6/6] io_uring: let io_req_aux_free() handle fixed files
From: Jens Axboe @ 2020-05-26 18:03 UTC
To: Pavel Begunkov, io-uring, linux-kernel
On 5/26/20 11:34 AM, Pavel Begunkov wrote:
> Remove the duplicated code that puts fixed files in io_free_req_many();
> __io_req_aux_free() does the same thing, so let it handle them.
This one is already changed in mainline:
commit 9d9e88a24c1f20ebfc2f28b1762ce78c0b9e1cb3 (tag: io_uring-5.7-2020-05-15)
Author: Jens Axboe <[email protected]>
Date:   Wed May 13 12:53:19 2020 -0600

    io_uring: polled fixed file must go through free iteration
--
Jens Axboe
* Re: [PATCH 6/6] io_uring: let io_req_aux_free() handle fixed files
From: Pavel Begunkov @ 2020-05-26 18:11 UTC
To: Jens Axboe, io-uring, linux-kernel
On 26/05/2020 21:03, Jens Axboe wrote:
> On 5/26/20 11:34 AM, Pavel Begunkov wrote:
>> Remove the duplicated code that puts fixed files in io_free_req_many();
>> __io_req_aux_free() does the same thing, so let it handle them.
>
> This one is already changed in mainline:
>
> commit 9d9e88a24c1f20ebfc2f28b1762ce78c0b9e1cb3 (tag: io_uring-5.7-2020-05-15)
> Author: Jens Axboe <[email protected]>
> Date:   Wed May 13 12:53:19 2020 -0600
>
>     io_uring: polled fixed file must go through free iteration
I see, missed it.
And thanks for adding the fixes tag for the other one.
--
Pavel Begunkov
* Re: [PATCH 0/6] random patches for 5.8
From: Jens Axboe @ 2020-05-26 19:32 UTC
To: Pavel Begunkov, io-uring, linux-kernel
On 5/26/20 11:34 AM, Pavel Begunkov wrote:
> Nothing interesting in particular, just starting to flush out stashed patches.
> The ones in this series are pretty easy and short.
>
> Pavel Begunkov (6):
> io_uring: fix flush req->refs underflow
> io_uring: simplify io_timeout locking
> io_uring: don't re-read sqe->off in timeout_prep()
> io_uring: separate DRAIN flushing into a cold path
> io_uring: get rid of manual punting in io_close
> io_uring: let io_req_aux_free() handle fixed files
>
> fs/io_uring.c | 64 ++++++++++++++++++++-------------------------------
> 1 file changed, 25 insertions(+), 39 deletions(-)
Applied 1-5, thanks.
--
Jens Axboe