public inbox for [email protected]
 help / color / mirror / Atom feed
* [PATCH for-6.1 0/2] 6.1 poll patches
@ 2022-11-11 16:51 Pavel Begunkov
  2022-11-11 16:51 ` [PATCH for-6.1 1/2] io_uring/poll: fix double poll req->flags races Pavel Begunkov
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Pavel Begunkov @ 2022-11-11 16:51 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Patch 1/2 is a fix, followed by a small patch that adds a lockdep
annotation in one place in the io_uring poll code.

Pavel Begunkov (2):
  io_uring/poll: fix double poll req->flags races
  io_uring/poll: lockdep annote io_poll_req_insert_locked

 io_uring/poll.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

-- 
2.38.1


^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH for-6.1 1/2] io_uring/poll: fix double poll req->flags races
  2022-11-11 16:51 [PATCH for-6.1 0/2] 6.1 poll patches Pavel Begunkov
@ 2022-11-11 16:51 ` Pavel Begunkov
  2022-11-11 16:51 ` [PATCH for-6.1 2/2] io_uring/poll: lockdep annote io_poll_req_insert_locked Pavel Begunkov
  2022-11-11 22:59 ` [PATCH for-6.1 0/2] 6.1 poll patches Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Pavel Begunkov @ 2022-11-11 16:51 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

io_poll_double_prepare()            | io_poll_wake()
                                    | poll->head = NULL
smp_load(&poll->head); /* NULL */   |
flags = req->flags;                 |
                                    | req->flags &= ~SINGLE_POLL;
req->flags = flags | DOUBLE_POLL    |

The idea behind io_poll_double_prepare() is to serialise with the
first poll entry by taking the wq lock. However, when poll->head has
already been cleared there is no lock to take, so it's not safe to
assume that io_poll_wake() is not running, and we may race with it
when modifying req->flags.

Skip double poll setup if that happens. It's ok because the first poll
entry will only be removed when it's definitely completing, e.g.
pollfree or oneshot with a valid mask.

Fixes: 49f1c68e048f1 ("io_uring: optimise submission side poll_refs")
Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/poll.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 0d9f49c575e0..97c214aa688c 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -394,7 +394,8 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 	return 1;
 }
 
-static void io_poll_double_prepare(struct io_kiocb *req)
+/* fails only when polling is already completing by the first entry */
+static bool io_poll_double_prepare(struct io_kiocb *req)
 {
 	struct wait_queue_head *head;
 	struct io_poll *poll = io_poll_get_single(req);
@@ -403,20 +404,20 @@ static void io_poll_double_prepare(struct io_kiocb *req)
 	rcu_read_lock();
 	head = smp_load_acquire(&poll->head);
 	/*
-	 * poll arm may not hold ownership and so race with
-	 * io_poll_wake() by modifying req->flags. There is only one
-	 * poll entry queued, serialise with it by taking its head lock.
+	 * poll arm might not hold ownership and so race for req->flags with
+	 * io_poll_wake(). There is only one poll entry queued, serialise with
+	 * it by taking its head lock. As we're still arming, the tw handler
+	 * is not going to be run, so there are no races with it.
 	 */
-	if (head)
+	if (head) {
 		spin_lock_irq(&head->lock);
-
-	req->flags |= REQ_F_DOUBLE_POLL;
-	if (req->opcode == IORING_OP_POLL_ADD)
-		req->flags |= REQ_F_ASYNC_DATA;
-
-	if (head)
+		req->flags |= REQ_F_DOUBLE_POLL;
+		if (req->opcode == IORING_OP_POLL_ADD)
+			req->flags |= REQ_F_ASYNC_DATA;
 		spin_unlock_irq(&head->lock);
+	}
 	rcu_read_unlock();
+	return !!head;
 }
 
 static void __io_queue_proc(struct io_poll *poll, struct io_poll_table *pt,
@@ -454,7 +455,11 @@ static void __io_queue_proc(struct io_poll *poll, struct io_poll_table *pt,
 		/* mark as double wq entry */
 		wqe_private |= IO_WQE_F_DOUBLE;
 		io_init_poll_iocb(poll, first->events, first->wait.func);
-		io_poll_double_prepare(req);
+		if (!io_poll_double_prepare(req)) {
+			/* the request is completing, just back off */
+			kfree(poll);
+			return;
+		}
 		*poll_ptr = poll;
 	} else {
 		/* fine to modify, there is no poll queued to race with us */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* [PATCH for-6.1 2/2] io_uring/poll: lockdep annote io_poll_req_insert_locked
  2022-11-11 16:51 [PATCH for-6.1 0/2] 6.1 poll patches Pavel Begunkov
  2022-11-11 16:51 ` [PATCH for-6.1 1/2] io_uring/poll: fix double poll req->flags races Pavel Begunkov
@ 2022-11-11 16:51 ` Pavel Begunkov
  2022-11-11 22:59 ` [PATCH for-6.1 0/2] 6.1 poll patches Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Pavel Begunkov @ 2022-11-11 16:51 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Add a lockdep annotation in io_poll_req_insert_locked().

Signed-off-by: Pavel Begunkov <[email protected]>
---
 io_uring/poll.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 97c214aa688c..f500506984ec 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -116,6 +116,8 @@ static void io_poll_req_insert_locked(struct io_kiocb *req)
 	struct io_hash_table *table = &req->ctx->cancel_table_locked;
 	u32 index = hash_long(req->cqe.user_data, table->hash_bits);
 
+	lockdep_assert_held(&req->ctx->uring_lock);
+
 	hlist_add_head(&req->hash_node, &table->hbs[index].list);
 }
 
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH for-6.1 0/2] 6.1 poll patches
  2022-11-11 16:51 [PATCH for-6.1 0/2] 6.1 poll patches Pavel Begunkov
  2022-11-11 16:51 ` [PATCH for-6.1 1/2] io_uring/poll: fix double poll req->flags races Pavel Begunkov
  2022-11-11 16:51 ` [PATCH for-6.1 2/2] io_uring/poll: lockdep annote io_poll_req_insert_locked Pavel Begunkov
@ 2022-11-11 22:59 ` Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-11-11 22:59 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov

On Fri, 11 Nov 2022 16:51:28 +0000, Pavel Begunkov wrote:
> 1/2 is a fix following with a small patch adding a lockdep annotation
> in one place for io_uring poll.
> 
> Pavel Begunkov (2):
>   io_uring/poll: fix double poll req->flags races
>   io_uring/poll: lockdep annote io_poll_req_insert_locked
> 
> [...]

Applied, thanks!

[1/2] io_uring/poll: fix double poll req->flags races
      commit: 30a33669fa21cd3dc7d92a00ba736358059014b7
[2/2] io_uring/poll: lockdep annote io_poll_req_insert_locked
      commit: 5576035f15dfcc6cb1cec236db40c2c0733b0ba4

Best regards,
-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2022-11-11 22:59 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-11 16:51 [PATCH for-6.1 0/2] 6.1 poll patches Pavel Begunkov
2022-11-11 16:51 ` [PATCH for-6.1 1/2] io_uring/poll: fix double poll req->flags races Pavel Begunkov
2022-11-11 16:51 ` [PATCH for-6.1 2/2] io_uring/poll: lockdep annote io_poll_req_insert_locked Pavel Begunkov
2022-11-11 22:59 ` [PATCH for-6.1 0/2] 6.1 poll patches Jens Axboe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox