public inbox for [email protected]
* [PATCH 1/3] io_uring: disable multishot poll for double poll add cases
  2021-04-16  1:25 [PATCH 0/3] Misc 5.13 fixes Jens Axboe
@ 2021-04-16  1:25 ` Jens Axboe
  0 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2021-04-16  1:25 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

The re-add handling isn't correct for the multi wait case, so explicitly
disable it for now until we can get that sorted out.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ab14692b05b4..87ce3dbcd4ca 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4976,6 +4976,11 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
 			pt->error = -EINVAL;
 			return;
 		}
+		/* don't allow double poll use cases with multishot */
+		if (!(req->poll.events & EPOLLONESHOT)) {
+			pt->error = -EINVAL;
+			return;
+		}
 		/* double add on the same waitqueue head, ignore */
 		if (poll->head == head)
 			return;
-- 
2.31.1



* [PATCH v2 0/3] Misc 5.13 fixes
@ 2021-04-16 16:10 Jens Axboe
  2021-04-16 16:10 ` [PATCH 1/3] io_uring: disable multishot poll for double poll add cases Jens Axboe
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Jens Axboe @ 2021-04-16 16:10 UTC (permalink / raw)
  To: io-uring

Hi,

#1 disables multishot requests for double waitqueue users for now; there
are a few corner cases there that still need hashing out.

#2 is a prep patch for #3, which ties the ->apoll lifetime to that of
the request instead of keeping it separate. That's more logical and means
it's handled more like other dynamically allocated items.

 fs/io_uring.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

Since v1:

- Turn double wait multishot into single-shot rather than failing it with
  -EINVAL. The application logic doesn't need to change; it just becomes a
  bit less efficient, since each trigger now requires a re-arm. This
  effectively limits double wait use cases (which are rare) to single-shot
  mode.
- Add flag checking helper and make the io_clean_op() call conditional
  again.

-- 
Jens Axboe




* [PATCH 1/3] io_uring: disable multishot poll for double poll add cases
  2021-04-16 16:10 [PATCH v2 0/3] Misc 5.13 fixes Jens Axboe
@ 2021-04-16 16:10 ` Jens Axboe
  2021-04-16 16:10 ` [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot Jens Axboe
  2021-04-16 16:10 ` [PATCH 3/3] io_uring: tie req->apoll to request lifetime Jens Axboe
  2 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2021-04-16 16:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

The re-add handling isn't correct for the multi wait case, so explicitly
disable it for now until we can get that sorted out. This simply turns it
into a one-shot request. Since each completion reports whether or not the
poll request remains in multishot mode, this should not break properly
behaving applications that check for IORING_CQE_F_MORE.
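
For reference, a minimal userspace sketch of such a properly behaving
consumer is below. It assumes a liburing build that provides
io_uring_prep_poll_multishot(); the fd and poll mask are placeholders, and
the snippet is an illustration rather than part of this patch:

	#include <liburing.h>
	#include <poll.h>

	/* Arm (or re-arm) a multishot poll request; the kernel may
	 * internally downgrade it to one-shot, e.g. for double wait. */
	static int arm_poll(struct io_uring *ring, int fd)
	{
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		if (!sqe)
			return -EBUSY;
		io_uring_prep_poll_multishot(sqe, fd, POLLIN);
		return io_uring_submit(ring);
	}

	/* Consume completions, re-arming whenever IORING_CQE_F_MORE is
	 * not set, i.e. whenever the poll request has terminated. */
	static int poll_loop(struct io_uring *ring, int fd)
	{
		struct io_uring_cqe *cqe;
		int ret = arm_poll(ring, fd);

		if (ret < 0)
			return ret;
		for (;;) {
			ret = io_uring_wait_cqe(ring, &cqe);
			if (ret < 0)
				return ret;
			/* ... handle cqe->res / the poll event here ... */
			if (!(cqe->flags & IORING_CQE_F_MORE)) {
				ret = arm_poll(ring, fd);
				if (ret < 0)
					return ret;
			}
			io_uring_cqe_seen(ring, cqe);
		}
	}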

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ab14692b05b4..4803e31e9301 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4976,6 +4976,12 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
 			pt->error = -EINVAL;
 			return;
 		}
+		/*
+		 * Can't handle multishot for double wait for now, turn it
+		 * into one-shot mode.
+		 */
+		if (!(req->poll.events & EPOLLONESHOT))
+			req->poll.events |= EPOLLONESHOT;
 		/* double add on the same waitqueue head, ignore */
 		if (poll->head == head)
 			return;
-- 
2.31.1



* [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot
  2021-04-16 16:10 [PATCH v2 0/3] Misc 5.13 fixes Jens Axboe
  2021-04-16 16:10 ` [PATCH 1/3] io_uring: disable multishot poll for double poll add cases Jens Axboe
@ 2021-04-16 16:10 ` Jens Axboe
  2021-04-16 23:37   ` Pavel Begunkov
  2021-04-16 16:10 ` [PATCH 3/3] io_uring: tie req->apoll to request lifetime Jens Axboe
  2 siblings, 1 reply; 6+ messages in thread
From: Jens Axboe @ 2021-04-16 16:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We have this in two spots right now, which is a bit fragile. In
preparation for moving REQ_F_POLLED cleanup into the same spot, move
the check into a separate helper so we only have it once.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4803e31e9301..8e6dcb69f3e9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1598,10 +1598,15 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
 	}
 }
 
+static inline bool io_req_needs_clean(struct io_kiocb *req)
+{
+	return req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP);
+}
+
 static void io_req_complete_state(struct io_kiocb *req, long res,
 				  unsigned int cflags)
 {
-	if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
+	if (io_req_needs_clean(req))
 		io_clean_op(req);
 	req->result = res;
 	req->compl.cflags = cflags;
@@ -1713,10 +1718,8 @@ static void io_dismantle_req(struct io_kiocb *req)
 
 	if (!(flags & REQ_F_FIXED_FILE))
 		io_put_file(req->file);
-	if (flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
-		     REQ_F_INFLIGHT)) {
+	if (io_req_needs_clean(req) || (req->flags & REQ_F_INFLIGHT)) {
 		io_clean_op(req);
-
 		if (req->flags & REQ_F_INFLIGHT) {
 			struct io_uring_task *tctx = req->task->io_uring;
 
-- 
2.31.1



* [PATCH 3/3] io_uring: tie req->apoll to request lifetime
  2021-04-16 16:10 [PATCH v2 0/3] Misc 5.13 fixes Jens Axboe
  2021-04-16 16:10 ` [PATCH 1/3] io_uring: disable multishot poll for double poll add cases Jens Axboe
  2021-04-16 16:10 ` [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot Jens Axboe
@ 2021-04-16 16:10 ` Jens Axboe
  2 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2021-04-16 16:10 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We manage these allocations separately from the request right now; tie
them to the request lifetime instead and make them part of the usual
REQ_F_NEED_CLEANUP logic.
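
Condensed from the hunks below (with the unrelated parts elided), the
cleanup path after this patch ends up looking roughly like this:

	static inline bool io_req_needs_clean(struct io_kiocb *req)
	{
		return req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP |
					REQ_F_POLLED);
	}

	static void io_clean_op(struct io_kiocb *req)
	{
		/* ... existing REQ_F_BUFFER_SELECTED / REQ_F_NEED_CLEANUP
		 * handling elided ... */
		if ((req->flags & REQ_F_POLLED) && req->apoll) {
			kfree(req->apoll->double_poll);
			kfree(req->apoll);
			req->apoll = NULL;
		}
	}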

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8e6dcb69f3e9..10b2367138be 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1600,7 +1600,8 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
 
 static inline bool io_req_needs_clean(struct io_kiocb *req)
 {
-	return req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP);
+	return req->flags & (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP |
+				REQ_F_POLLED);
 }
 
 static void io_req_complete_state(struct io_kiocb *req, long res,
@@ -5038,9 +5039,6 @@ static void io_async_task_func(struct callback_head *cb)
 		__io_req_task_submit(req);
 	else
 		io_req_complete_failed(req, -ECANCELED);
-
-	kfree(apoll->double_poll);
-	kfree(apoll);
 }
 
 static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
@@ -5156,8 +5154,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
 	if (ret || ipt.error) {
 		io_poll_remove_double(req);
 		spin_unlock_irq(&ctx->completion_lock);
-		kfree(apoll->double_poll);
-		kfree(apoll);
 		return false;
 	}
 	spin_unlock_irq(&ctx->completion_lock);
@@ -5195,12 +5191,8 @@ static bool io_poll_remove_waitqs(struct io_kiocb *req)
 	do_complete = __io_poll_remove_one(req, io_poll_get_single(req), true);
 
 	if (req->opcode != IORING_OP_POLL_ADD && do_complete) {
-		struct async_poll *apoll = req->apoll;
-
 		/* non-poll requests have submit ref still */
 		req_ref_put(req);
-		kfree(apoll->double_poll);
-		kfree(apoll);
 	}
 	return do_complete;
 }
@@ -6054,6 +6046,11 @@ static void io_clean_op(struct io_kiocb *req)
 		}
 		req->flags &= ~REQ_F_NEED_CLEANUP;
 	}
+	if ((req->flags & REQ_F_POLLED) && req->apoll) {
+		kfree(req->apoll->double_poll);
+		kfree(req->apoll);
+		req->apoll = NULL;
+	}
 }
 
 static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
-- 
2.31.1



* Re: [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot
  2021-04-16 16:10 ` [PATCH 2/3] io_uring: put flag checking for needing req cleanup in one spot Jens Axboe
@ 2021-04-16 23:37   ` Pavel Begunkov
  0 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2021-04-16 23:37 UTC (permalink / raw)
  To: Jens Axboe, io-uring

On 16/04/2021 17:10, Jens Axboe wrote:
> We have this in two spots right now, which is a bit fragile. In
> preparation for moving REQ_F_POLLED cleanup into the same spot, move
> the check into a separate helper so we only have it once.

Perfect, others look good to me as well


-- 
Pavel Begunkov
