From: Pavel Begunkov <[email protected]>
To: [email protected]
Cc: Jens Axboe <[email protected]>, Lin Horse <[email protected]>
Subject: Re: [PATCH for-6.1 1/1] io_uring: make poll refs more robust
Date: Fri, 18 Nov 2022 18:46:28 +0000	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <47cac5a4cb6b2e8d2f72f8f94adb088dbfd9308c.1668796727.git.asml.silence@gmail.com>

On 11/18/22 18:39, Pavel Begunkov wrote:
> poll_refs serves two functions: the first is ownership over the
> request. The second is notifying io_poll_check_events() that there
> was an event but the wake up couldn't grab ownership, so
> io_poll_check_events() should retry.
> 
> We want to make poll_refs more robust against overflows. Instead of
> always incrementing it, which covers both purposes with one atomic,
> check whether poll_refs is already large and, if so, set a retry flag
> without attempting to grab ownership. The gap between the bias check
> and the following atomics may seem racy, but we don't need it to be
> strict. Moreover, there can be at most 4 parallel updates: by the
> first and the second poll entries, __io_arm_poll_handler() and
> cancellation. Of those four, only poll wake ups may be executed
> multiple times, but they're protected by a spinlock.
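
For reference, here is a minimal userspace sketch of the fast/slow
ownership split described above, modelled with C11 atomics instead of
the kernel's atomic_t. The bare global and the main() harness are
illustrative assumptions, not part of the patch:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RETRY_FLAG	(1u << 30)
#define REF_MASK	((1u << 30) - 1)	/* GENMASK(29, 0) */
#define REF_BIAS	128

static _Atomic unsigned int poll_refs;

/* Refs look suspiciously high: don't pile on another reference, just
 * flag that something happened so the current owner re-checks events
 * before dropping ownership. */
static bool get_ownership_slowpath(void)
{
	unsigned int v = atomic_fetch_or(&poll_refs, RETRY_FLAG);

	/* refs dropped to zero in the meantime, we can take ownership */
	if (!(v & REF_MASK))
		return !(atomic_fetch_add(&poll_refs, 1) & REF_MASK);
	return false;
}

static bool get_ownership(void)
{
	if (atomic_load(&poll_refs) >= REF_BIAS)
		return get_ownership_slowpath();
	return !(atomic_fetch_add(&poll_refs, 1) & REF_MASK);
}

int main(void)
{
	printf("first caller owns: %d\n", get_ownership());	/* 1 */
	printf("second caller owns: %d\n", get_ownership());	/* 0 */
	return 0;
}

The retry flag is then consumed together with the refs by the owner's
final atomic_sub_return() in the patch, which is why the subtraction
there switches from IO_POLL_REF_MASK to IO_POLL_RETRY_MASK.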

Hmm, on second thought, it may lose some events; I need to come up
with something better.


> Cc: [email protected]
> Reported-by: Lin Horse <[email protected]>
> Fixes: aa43477b04025 ("io_uring: poll rework")
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>   io_uring/poll.c | 33 +++++++++++++++++++++++++++++----
>   1 file changed, 29 insertions(+), 4 deletions(-)
> 
> diff --git a/io_uring/poll.c b/io_uring/poll.c
> index 055632e9092a..63f152332bf6 100644
> --- a/io_uring/poll.c
> +++ b/io_uring/poll.c
> @@ -40,7 +40,15 @@ struct io_poll_table {
>   };
>   
>   #define IO_POLL_CANCEL_FLAG	BIT(31)
> -#define IO_POLL_REF_MASK	GENMASK(30, 0)
> +#define IO_POLL_RETRY_FLAG	BIT(30)
> +#define IO_POLL_REF_MASK	GENMASK(29, 0)
> +#define IO_POLL_RETRY_MASK	(IO_POLL_REF_MASK | IO_POLL_RETRY_FLAG)
> +
> +/*
> + * We usually have 1-2 refs taken, 128 is more than enough and we want to
> + * maximise the margin between this amount and the moment when it overflows.
> + */
> +#define IO_POLL_REF_BIAS	128
>   
>   #define IO_WQE_F_DOUBLE		1
>   
> @@ -58,6 +66,21 @@ static inline bool wqe_is_double(struct wait_queue_entry *wqe)
>   	return priv & IO_WQE_F_DOUBLE;
>   }
>   
> +static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
> +{
> +	int v;
> +
> +	/*
> +	 * poll_refs are already elevated and we don't have much hope for
> +	 * grabbing the ownership. Instead of incrementing set a retry flag
> +	 * to notify the loop that there might have been some change.
> +	 */
> +	v = atomic_fetch_or(IO_POLL_RETRY_FLAG, &req->poll_refs);
> +	if (!(v & IO_POLL_REF_MASK))
> +		return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
> +	return false;
> +}
> +
>   /*
>    * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
>    * bump it and acquire ownership. It's disallowed to modify requests while not
> @@ -66,6 +89,8 @@ static inline bool wqe_is_double(struct wait_queue_entry *wqe)
>    */
>   static inline bool io_poll_get_ownership(struct io_kiocb *req)
>   {
> +	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
> +		return io_poll_get_ownership_slowpath(req);
>   	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
>   }
>   
> @@ -233,7 +258,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
>   		 * and all others are be lost. Redo vfs_poll() to get
>   		 * up to date state.
>   		 */
> -		if ((v & IO_POLL_REF_MASK) != 1)
> +		if ((v & IO_POLL_RETRY_MASK) != 1)
>   			req->cqe.res = 0;
>   
>   		/* the mask was stashed in __io_poll_execute */
> @@ -274,7 +299,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
>   		 * Release all references, retry if someone tried to restart
>   		 * task_work while we were executing it.
>   		 */
> -	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs));
> +	} while (atomic_sub_return(v & IO_POLL_RETRY_MASK, &req->poll_refs));
>   
>   	return IOU_POLL_NO_ACTION;
>   }
> @@ -590,7 +615,7 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
>   		 * locked, kick it off for them.
>   		 */
>   		v = atomic_dec_return(&req->poll_refs);
> -		if (unlikely(v & IO_POLL_REF_MASK))
> +		if (unlikely(v & IO_POLL_RETRY_MASK))
>   			__io_poll_execute(req, 0);
>   	}
>   	return 0;

-- 
Pavel Begunkov
