* [PATCH 0/3] bunch of fixes
@ 2020-07-03 19:15 Pavel Begunkov
2020-07-03 19:15 ` [PATCH 1/3] io_uring: fix mis-refcounting linked timeouts Pavel Begunkov
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-03 19:15 UTC (permalink / raw)
To: Jens Axboe, io-uring
Just random patches for 5.9. [1,3] are fixes.
Pavel Begunkov (3):
io_uring: fix mis-refcounting linked timeouts
io_uring: keep queue_sqe()'s fail path separately
io_uring: fix lost cqe->flags
fs/io_uring.c | 59 +++++++++++++++++----------------------------------
1 file changed, 20 insertions(+), 39 deletions(-)
--
2.24.0
* [PATCH 1/3] io_uring: fix mis-refcounting linked timeouts
2020-07-03 19:15 [PATCH 0/3] bunch of fixes Pavel Begunkov
@ 2020-07-03 19:15 ` Pavel Begunkov
2020-07-03 19:15 ` [PATCH 2/3] io_uring: keep queue_sqe()'s fail path separately Pavel Begunkov
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-03 19:15 UTC (permalink / raw)
To: Jens Axboe, io-uring
io_prep_linked_timeout() sets REQ_F_LINK_TIMEOUT, altering the refcounting
of the following linked request. After that, someone should call
io_queue_linked_timeout(), otherwise the submission reference of the
linked timeout will never be dropped.

That's what happens in io_steal_work() when io-wq decides to postpone the
linked request with io_wqe_enqueue(). io_queue_linked_timeout() can also
potentially be called twice without synchronisation during re-submission,
e.g. in io_rw_resubmit().

The rule is: whoever did io_prep_linked_timeout() must also call
io_queue_linked_timeout(). To avoid doing it twice,
io_prep_linked_timeout() returns non-NULL only for the first call;
that's controlled by the REQ_F_LINK_TIMEOUT flag.

Also kill REQ_F_QUEUE_TIMEOUT.
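To illustrate the rule, a minimal sketch (not part of the patch;
submit_one() is a hypothetical caller, while io_prep_linked_timeout() and
io_queue_linked_timeout() are the real helpers touched below):

/* hypothetical caller sketching the prep/queue pairing rule */
static void submit_one(struct io_kiocb *req)
{
	struct io_kiocb *timeout;

	/* non-NULL only for the first call per request,
	 * gated by REQ_F_LINK_TIMEOUT */
	timeout = io_prep_linked_timeout(req);

	/* ... issue or hand off req here ... */

	/* whoever prepped the timeout also queues it, exactly once */
	if (timeout)
		io_queue_linked_timeout(timeout);
}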
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 33 +++++++--------------------------
1 file changed, 7 insertions(+), 26 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 51132f9bdbcc..f0fed59122e8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -538,7 +538,6 @@ enum {
REQ_F_POLLED_BIT,
REQ_F_BUFFER_SELECTED_BIT,
REQ_F_NO_FILE_TABLE_BIT,
- REQ_F_QUEUE_TIMEOUT_BIT,
REQ_F_WORK_INITIALIZED_BIT,
REQ_F_TASK_PINNED_BIT,
@@ -586,8 +585,6 @@ enum {
REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
/* doesn't need file table for this request */
REQ_F_NO_FILE_TABLE = BIT(REQ_F_NO_FILE_TABLE_BIT),
- /* needs to queue linked timeout */
- REQ_F_QUEUE_TIMEOUT = BIT(REQ_F_QUEUE_TIMEOUT_BIT),
/* io_wq_work is initialized */
REQ_F_WORK_INITIALIZED = BIT(REQ_F_WORK_INITIALIZED_BIT),
/* req->task is refcounted */
@@ -1835,7 +1832,7 @@ static void io_put_req(struct io_kiocb *req)
static struct io_wq_work *io_steal_work(struct io_kiocb *req)
{
- struct io_kiocb *timeout, *nxt = NULL;
+ struct io_kiocb *nxt;
/*
* A ref is owned by io-wq in which context we're. So, if that's the
@@ -1846,13 +1843,7 @@ static struct io_wq_work *io_steal_work(struct io_kiocb *req)
return NULL;
nxt = io_req_find_next(req);
- if (!nxt)
- return NULL;
-
- timeout = io_prep_linked_timeout(nxt);
- if (timeout)
- nxt->flags |= REQ_F_QUEUE_TIMEOUT;
- return &nxt->work;
+ return nxt ? &nxt->work : NULL;
}
/*
@@ -5695,24 +5686,15 @@ static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
return 0;
}
-static void io_arm_async_linked_timeout(struct io_kiocb *req)
-{
- struct io_kiocb *link;
-
- /* link head's timeout is queued in io_queue_async_work() */
- if (!(req->flags & REQ_F_QUEUE_TIMEOUT))
- return;
-
- link = list_first_entry(&req->link_list, struct io_kiocb, link_list);
- io_queue_linked_timeout(link);
-}
-
static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
{
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+ struct io_kiocb *timeout;
int ret = 0;
- io_arm_async_linked_timeout(req);
+ timeout = io_prep_linked_timeout(req);
+ if (timeout)
+ io_queue_linked_timeout(timeout);
/* if NO_CANCEL is set, we must still run the work */
if ((work->flags & (IO_WQ_WORK_CANCEL|IO_WQ_WORK_NO_CANCEL)) ==
@@ -5886,8 +5868,7 @@ static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
if (!(req->flags & REQ_F_LINK_HEAD))
return NULL;
- /* for polled retry, if flag is set, we already went through here */
- if (req->flags & REQ_F_POLLED)
+ if (req->flags & REQ_F_LINK_TIMEOUT)
return NULL;
nxt = list_first_entry_or_null(&req->link_list, struct io_kiocb,
--
2.24.0
* [PATCH 2/3] io_uring: keep queue_sqe()'s fail path separately
2020-07-03 19:15 [PATCH 0/3] bunch of fixes Pavel Begunkov
2020-07-03 19:15 ` [PATCH 1/3] io_uring: fix mis-refcounting linked timeouts Pavel Begunkov
@ 2020-07-03 19:15 ` Pavel Begunkov
2020-07-03 19:15 ` [PATCH 3/3] io_uring: fix lost cqe->flags Pavel Begunkov
2020-07-03 19:46 ` [PATCH 0/3] bunch of fixes Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-03 19:15 UTC (permalink / raw)
To: Jens Axboe, io-uring
A preparation patch that extracts the error path into a separate block.
It looks saner than calling req_set_fail_links() after
io_put_req_find_next(), even though that has been working well.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index f0fed59122e8..d61d8bc0cfc0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5930,22 +5930,21 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
goto exit;
}
+ if (unlikely(ret)) {
err:
- /* drop submission reference */
- nxt = io_put_req_find_next(req);
-
- if (linked_timeout) {
- if (!ret)
- io_queue_linked_timeout(linked_timeout);
- else
- io_put_req(linked_timeout);
- }
-
- /* and drop final reference, if we failed */
- if (ret) {
+ /* un-prep timeout, so it'll be killed as any other linked */
+ req->flags &= ~REQ_F_LINK_TIMEOUT;
req_set_fail_links(req);
+ io_put_req(req);
io_req_complete(req, ret);
+ goto exit;
}
+
+ /* drop submission reference */
+ nxt = io_put_req_find_next(req);
+ if (linked_timeout)
+ io_queue_linked_timeout(linked_timeout);
+
if (nxt) {
req = nxt;
--
2.24.0
* [PATCH 3/3] io_uring: fix lost cqe->flags
2020-07-03 19:15 [PATCH 0/3] bunch of fixes Pavel Begunkov
2020-07-03 19:15 ` [PATCH 1/3] io_uring: fix mis-refcounting linked timeouts Pavel Begunkov
2020-07-03 19:15 ` [PATCH 2/3] io_uring: keep queue_sqe()'s fail path separately Pavel Begunkov
@ 2020-07-03 19:15 ` Pavel Begunkov
2020-07-03 19:46 ` [PATCH 0/3] bunch of fixes Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-07-03 19:15 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't forget to fill cqe->flags properly in
io_submit_flush_completions().
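For context, the flags were dropped because io_cqring_fill_event() is, in
this version, just a thin wrapper passing zero cflags (a sketch from
memory, not part of the diff):

static void io_cqring_fill_event(struct io_kiocb *req, long res)
{
	__io_cqring_fill_event(req, res, 0);
}

so the deferred-completion path has to remember req->cflags itself and use
the __-variant when flushing.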
Fixes: a1d7c393c4711 ("io_uring: enable READ/WRITE to use deferred completions")
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index d61d8bc0cfc0..a2459504b371 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1416,7 +1416,7 @@ static void io_submit_flush_completions(struct io_comp_state *cs)
req = list_first_entry(&cs->list, struct io_kiocb, list);
list_del(&req->list);
- io_cqring_fill_event(req, req->result);
+ __io_cqring_fill_event(req, req->result, req->cflags);
if (!(req->flags & REQ_F_LINK_HEAD)) {
req->flags |= REQ_F_COMP_LOCKED;
io_put_req(req);
@@ -1441,6 +1441,7 @@ static void __io_req_complete(struct io_kiocb *req, long res, unsigned cflags,
io_put_req(req);
} else {
req->result = res;
+ req->cflags = cflags;
list_add_tail(&req->list, &cs->list);
if (++cs->nr >= 32)
io_submit_flush_completions(cs);
--
2.24.0
* Re: [PATCH 0/3] bunch of fixes
2020-07-03 19:15 [PATCH 0/3] bunch of fixes Pavel Begunkov
` (2 preceding siblings ...)
2020-07-03 19:15 ` [PATCH 3/3] io_uring: fix lost cqe->flags Pavel Begunkov
@ 2020-07-03 19:46 ` Jens Axboe
3 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2020-07-03 19:46 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 7/3/20 1:15 PM, Pavel Begunkov wrote:
> Just random patches for 5.9. [1,3] are fixes.
>
> Pavel Begunkov (3):
> io_uring: fix mis-refcounting linked timeouts
> io_uring: keep queue_sqe()'s fail path separately
> io_uring: fix lost cqe->flags
>
> fs/io_uring.c | 59 +++++++++++++++++----------------------------------
> 1 file changed, 20 insertions(+), 39 deletions(-)
LGTM, applied, thanks.
--
Jens Axboe