* [PATCH 0/3] Misc 5.13 fixes
From: Jens Axboe @ 2021-04-16 1:25 UTC
To: io-uring
Hi,
#1 disables multishot requests for double waitqueue users for now; it
has a few corner cases that still need ironing out.
#2 is a prep patch for #3, which ties the ->apoll lifetime to that of
the request instead of keeping it separate. That's more logical, and it
means ->apoll is handled like other dynamically allocated items.
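For reference, the multishot poll pattern these patches affect looks
roughly like the liburing sketch below (illustrative only; it assumes a
liburing recent enough to have io_uring_prep_poll_multishot(), and
error handling is elided). The application keeps getting completions
until a CQE without IORING_CQE_F_MORE tells it the poll has terminated
and needs re-arming:

	#include <liburing.h>
	#include <poll.h>

	static void poll_loop(struct io_uring *ring, int fd)
	{
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;

		/* arm a multishot POLLIN poll on fd */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_poll_multishot(sqe, fd, POLLIN);
		io_uring_submit(ring);

		for (;;) {
			int res, more;

			if (io_uring_wait_cqe(ring, &cqe))
				break;
			res = cqe->res;
			more = cqe->flags & IORING_CQE_F_MORE;
			io_uring_cqe_seen(ring, cqe);
			if (res < 0)
				break;
			/* ... consume the readiness event on fd ... */
			if (!more) {
				/* poll terminated (or was downgraded to
				 * one-shot by the kernel), so re-arm it */
				sqe = io_uring_get_sqe(ring);
				io_uring_prep_poll_multishot(sqe, fd, POLLIN);
				io_uring_submit(ring);
			}
		}
	}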
fs/io_uring.c | 39 +++++++++++++++++++--------------------
1 file changed, 19 insertions(+), 20 deletions(-)
--
Jens Axboe
* [PATCH 1/3] io_uring: disable multishot poll for double poll add cases
From: Jens Axboe @ 2021-04-16 1:25 UTC
To: io-uring; +Cc: Jens Axboe
The re-add handling isn't correct for the multi wait case, so let's
just explicitly disable it for now until we can get that sorted out.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ab14692b05b4..87ce3dbcd4ca 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4976,6 +4976,11 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
pt->error = -EINVAL;
return;
}
+ /* don't allow double poll use cases with multishot */
+ if (!(req->poll.events & EPOLLONESHOT)) {
+ pt->error = -EINVAL;
+ return;
+ }
/* double add on the same waitqueue head, ignore */
if (poll->head == head)
return;
--
2.31.1
* [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot
From: Jens Axboe @ 2021-04-16 1:25 UTC
To: io-uring; +Cc: Jens Axboe
We have this in two spots right now, which is a bit fragile. In
preparation for moving REQ_F_POLLED cleanup into the same spot, move
the check into io_clean_op() itself so we only have it once.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 87ce3dbcd4ca..a668d6a3319c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1601,8 +1601,7 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
static void io_req_complete_state(struct io_kiocb *req, long res,
unsigned int cflags)
{
- if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
- io_clean_op(req);
+ io_clean_op(req);
req->result = res;
req->compl.cflags = cflags;
req->flags |= REQ_F_COMPLETE_INLINE;
@@ -1713,16 +1712,12 @@ static void io_dismantle_req(struct io_kiocb *req)
if (!(flags & REQ_F_FIXED_FILE))
io_put_file(req->file);
- if (flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
- REQ_F_INFLIGHT)) {
- io_clean_op(req);
+ io_clean_op(req);
+ if (req->flags & REQ_F_INFLIGHT) {
+ struct io_uring_task *tctx = req->task->io_uring;
- if (req->flags & REQ_F_INFLIGHT) {
- struct io_uring_task *tctx = req->task->io_uring;
-
- atomic_dec(&tctx->inflight_tracked);
- req->flags &= ~REQ_F_INFLIGHT;
- }
+ atomic_dec(&tctx->inflight_tracked);
+ req->flags &= ~REQ_F_INFLIGHT;
}
if (req->fixed_rsrc_refs)
percpu_ref_put(req->fixed_rsrc_refs);
@@ -5995,6 +5990,8 @@ static int io_req_defer(struct io_kiocb *req)
static void io_clean_op(struct io_kiocb *req)
{
+ if (!(req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP)))
+ return;
if (req->flags & REQ_F_BUFFER_SELECTED) {
switch (req->opcode) {
case IORING_OP_READV:
--
2.31.1
* Re: [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot
From: Pavel Begunkov @ 2021-04-16 13:25 UTC
To: Jens Axboe, io-uring
On 16/04/2021 02:25, Jens Axboe wrote:
> We have this in two spots right now, which is a bit fragile. In
> preparation for moving REQ_F_POLLED cleanup into the same spot, move
> the check into io_clean_op() itself so we only have it once.
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> fs/io_uring.c | 19 ++++++++-----------
> 1 file changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 87ce3dbcd4ca..a668d6a3319c 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1601,8 +1601,7 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
> static void io_req_complete_state(struct io_kiocb *req, long res,
> unsigned int cflags)
> {
> - if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
> - io_clean_op(req);
> + io_clean_op(req);
> req->result = res;
> req->compl.cflags = cflags;
> req->flags |= REQ_F_COMPLETE_INLINE;
> @@ -1713,16 +1712,12 @@ static void io_dismantle_req(struct io_kiocb *req)
>
> if (!(flags & REQ_F_FIXED_FILE))
> io_put_file(req->file);
> - if (flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
> - REQ_F_INFLIGHT)) {
> - io_clean_op(req);
> + io_clean_op(req);
> + if (req->flags & REQ_F_INFLIGHT) {
> + struct io_uring_task *tctx = req->task->io_uring;
Not particularly happy about it:
1. adds an extra if
2. adds an extra function call
3. adds an extra memory load in that function call
This pushes us back in terms of performance. I'd suggest having a
helper instead, which is easily optimisable and may be coalesced by the
compiler with adjacent flag checks.
static inline bool io_need_cleanup(unsigned flags)
{
	return flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED);
}

if (io_need_cleanup(flags) || (flags & INFLIGHT)) {
	io_clean_op();
	if (INFLIGHT) {}
}
>
> - if (req->flags & REQ_F_INFLIGHT) {
> - struct io_uring_task *tctx = req->task->io_uring;
> -
> - atomic_dec(&tctx->inflight_tracked);
> - req->flags &= ~REQ_F_INFLIGHT;
> - }
> + atomic_dec(&tctx->inflight_tracked);
> + req->flags &= ~REQ_F_INFLIGHT;
> }
> if (req->fixed_rsrc_refs)
> percpu_ref_put(req->fixed_rsrc_refs);
> @@ -5995,6 +5990,8 @@ static int io_req_defer(struct io_kiocb *req)
>
> static void io_clean_op(struct io_kiocb *req)
> {
> + if (!(req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP)))
> + return;
> if (req->flags & REQ_F_BUFFER_SELECTED) {
> switch (req->opcode) {
> case IORING_OP_READV:
>
--
Pavel Begunkov
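Fleshed out, the suggested helper might look like the below when
applied to io_dismantle_req() (a sketch based on the diff above; the
helper name and exact placement follow Pavel's suggestion and are not
necessarily the final committed form):

	static inline bool io_need_cleanup(unsigned int flags)
	{
		return flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED);
	}

	/* in io_dismantle_req(), with io_clean_op() behind one branch */
	if (io_need_cleanup(flags) || (flags & REQ_F_INFLIGHT)) {
		io_clean_op(req);
		if (flags & REQ_F_INFLIGHT) {
			struct io_uring_task *tctx = req->task->io_uring;

			atomic_dec(&tctx->inflight_tracked);
			req->flags &= ~REQ_F_INFLIGHT;
		}
	}

With the check inlined at the call site, the compiler can fold the
helper's flag test into the surrounding branch, so the common path (no
cleanup needed, not inflight) costs a single test again.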
* Re: [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot
From: Jens Axboe @ 2021-04-16 15:42 UTC
To: Pavel Begunkov, io-uring
On 4/16/21 7:25 AM, Pavel Begunkov wrote:
> On 16/04/2021 02:25, Jens Axboe wrote:
>> We have this in two spots right now, which is a bit fragile. In
>> preparation for moving REQ_F_POLLED cleanup into the same spot, move
>> the check into io_clean_op() itself so we only have it once.
>>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> fs/io_uring.c | 19 ++++++++-----------
>> 1 file changed, 8 insertions(+), 11 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 87ce3dbcd4ca..a668d6a3319c 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -1601,8 +1601,7 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
>> static void io_req_complete_state(struct io_kiocb *req, long res,
>> unsigned int cflags)
>> {
>> - if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
>> - io_clean_op(req);
>> + io_clean_op(req);
>> req->result = res;
>> req->compl.cflags = cflags;
>> req->flags |= REQ_F_COMPLETE_INLINE;
>> @@ -1713,16 +1712,12 @@ static void io_dismantle_req(struct io_kiocb *req)
>>
>> if (!(flags & REQ_F_FIXED_FILE))
>> io_put_file(req->file);
>> - if (flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
>> - REQ_F_INFLIGHT)) {
>> - io_clean_op(req);
>> + io_clean_op(req);
>> + if (req->flags & REQ_F_INFLIGHT) {
>> + struct io_uring_task *tctx = req->task->io_uring;
>
> Not particularly happy about it:
> 1. adds an extra if
> 2. adds an extra function call
> 3. adds an extra memory load in that function call
>
> This pushes us back in terms of performance. I'd suggest having a
> helper instead, which is easily optimisable and may be coalesced by the
> compiler with adjacent flag checks.
>
> static inline bool io_need_cleanup(unsigned flags)
> {
> 	return flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED);
> }
>
> if (io_need_cleanup(flags) || (flags & INFLIGHT)) {
> 	io_clean_op();
> 	if (INFLIGHT) {}
> }
Sure, we can do that. It does need some rearranging to get it inlined,
but that's no big deal. I'll fiddle with it.
--
Jens Axboe
* [PATCH 3/3] io_uring: tie req->apoll to request lifetime
From: Jens Axboe @ 2021-04-16 1:25 UTC
To: io-uring; +Cc: Jens Axboe
We manage these separately right now. Just tie it to the request
lifetime and make it part of the usual REQ_F_NEED_CLEANUP logic.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index a668d6a3319c..2ea909ed2f49 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5029,9 +5029,6 @@ static void io_async_task_func(struct callback_head *cb)
__io_req_task_submit(req);
else
io_req_complete_failed(req, -ECANCELED);
-
- kfree(apoll->double_poll);
- kfree(apoll);
}
static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
@@ -5147,8 +5144,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
if (ret || ipt.error) {
io_poll_remove_double(req);
spin_unlock_irq(&ctx->completion_lock);
- kfree(apoll->double_poll);
- kfree(apoll);
return false;
}
spin_unlock_irq(&ctx->completion_lock);
@@ -5186,12 +5181,8 @@ static bool io_poll_remove_waitqs(struct io_kiocb *req)
do_complete = __io_poll_remove_one(req, io_poll_get_single(req), true);
if (req->opcode != IORING_OP_POLL_ADD && do_complete) {
- struct async_poll *apoll = req->apoll;
-
/* non-poll requests have submit ref still */
req_ref_put(req);
- kfree(apoll->double_poll);
- kfree(apoll);
}
return do_complete;
}
@@ -5990,7 +5981,8 @@ static int io_req_defer(struct io_kiocb *req)
static void io_clean_op(struct io_kiocb *req)
{
- if (!(req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP)))
+ if (!(req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP |
+ REQ_F_POLLED)))
return;
if (req->flags & REQ_F_BUFFER_SELECTED) {
switch (req->opcode) {
@@ -6047,6 +6039,11 @@ static void io_clean_op(struct io_kiocb *req)
}
req->flags &= ~REQ_F_NEED_CLEANUP;
}
+ if ((req->flags & REQ_F_POLLED) && req->apoll) {
+ kfree(req->apoll->double_poll);
+ kfree(req->apoll);
+ req->apoll = NULL;
+ }
}
static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
--
2.31.1
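For the other half of the lifetime covered by this patch: ->apoll is
allocated when the async poll handler is armed. The shape in the
5.12-era io_arm_poll_handler() was roughly the below (reconstructed for
illustration, not quoted from the source):

	struct async_poll *apoll;

	apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
	if (unlikely(!apoll))
		return false;
	apoll->double_poll = NULL;
	req->apoll = apoll;
	req->flags |= REQ_F_POLLED;

With this patch, the matching kfree() calls live in io_clean_op(),
keyed off REQ_F_POLLED, rather than being duplicated at each
completion and cancellation site.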
* [PATCH v2 0/3] Misc 5.13 fixes
From: Jens Axboe @ 2021-04-16 16:10 UTC
To: io-uring
Hi,
#1 disables multishot requests for double waitqueue users for now; it
has a few corner cases that still need ironing out.
#2 is a prep patch for #3, which ties the ->apoll lifetime to that of
the request instead of keeping it separate. That's more logical, and it
means ->apoll is handled like other dynamically allocated items.
fs/io_uring.c | 32 +++++++++++++++++++-------------
1 file changed, 19 insertions(+), 13 deletions(-)
Since v1:
- Turn double wait multishot into single-shot. Application logic won't
  need to change; it'll just be a bit less efficient, as each trigger
  now requires a re-arm. This effectively turns multishot into
  single-shot mode for the (rare) double wait use cases.
- Add a flag checking helper and make the io_clean_op() call
  conditional again.
--
Jens Axboe
* [PATCH 1/3] io_uring: disable multishot poll for double poll add cases
From: Jens Axboe @ 2021-04-16 16:10 UTC
To: io-uring; +Cc: Jens Axboe
The re-add handling isn't correct for the multi wait case, so let's
just explicitly disable it for now until we can get that sorted out.
This just turns it into a one-shot request. Since we already pass back
whether a poll request remains in multishot mode on completion, this
should not break properly behaving applications that check for
IORING_CQE_F_MORE on the CQE.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ab14692b05b4..4803e31e9301 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4976,6 +4976,12 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
pt->error = -EINVAL;
return;
}
+ /*
+ * Can't handle multishot for double wait for now, turn it
+ * into one-shot mode.
+ */
+ if (!(req->poll.events & EPOLLONESHOT))
+ req->poll.events |= EPOLLONESHOT;
/* double add on the same waitqueue head, ignore */
if (poll->head == head)
return;
--
2.31.1