* [PATCH for-next 0/4] poll locking fixes
From: Pavel Begunkov @ 2022-07-07 14:13 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Fix two races introduced by the recent poll locking optimisations
(patches 1-2) and add two cleanups on top (patches 3-4).
Pavel Begunkov (4):
io_uring: don't miss setting REQ_F_DOUBLE_POLL
io_uring: don't race double poll setting REQ_F_ASYNC_DATA
io_uring: clear REQ_F_HASH_LOCKED on hash removal
io_uring: consolidate hash_locked io-wq handling
io_uring/poll.c | 41 +++++++++++++++++++++--------------------
1 file changed, 21 insertions(+), 20 deletions(-)
--
2.36.1
* [PATCH for-next 1/4] io_uring: don't miss setting REQ_F_DOUBLE_POLL
From: Pavel Begunkov @ 2022-07-07 14:13 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, syzbot+49950ba66096b1f0209b
When adding a second poll entry we should set REQ_F_DOUBLE_POLL
unconditionally. We might race with removal of the first entry and
observe a NULL head, but that doesn't change the rule: the request
still has two entries queued, so the flag must be set either way.
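For reference, the locking pattern this patch establishes looks as
follows (a simplified sketch of the post-patch io_poll_double_prepare()
body with the reasoning spelled out in comments, not verbatim kernel
code):

	rcu_read_lock();
	head = smp_load_acquire(&poll->head);
	/*
	 * If the first entry is still queued, io_poll_wake() may be updating
	 * req->flags concurrently, and the head lock serialises us against
	 * it. If the entry has already been removed (head is NULL), nothing
	 * else can touch req->flags, so the plain update is safe. Either way
	 * two entries were queued, so the flag must be set unconditionally.
	 */
	if (head)
		spin_lock_irq(&head->lock);

	req->flags |= REQ_F_DOUBLE_POLL;

	if (head)
		spin_unlock_irq(&head->lock);
	rcu_read_unlock();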
Fixes: a18427bb2d9b ("io_uring: optimise submission side poll_refs")
Reported-and-tested-by: [email protected]
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/poll.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 57747d92bba4..3710a0a46a87 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -401,16 +401,18 @@ static void io_poll_double_prepare(struct io_kiocb *req)
/* head is RCU protected, see io_poll_remove_entries() comments */
rcu_read_lock();
head = smp_load_acquire(&poll->head);
- if (head) {
- /*
- * poll arm may not hold ownership and so race with
- * io_poll_wake() by modifying req->flags. There is only one
- * poll entry queued, serialise with it by taking its head lock.
- */
+ /*
+ * poll arm may not hold ownership and so race with
+ * io_poll_wake() by modifying req->flags. There is only one
+ * poll entry queued, serialise with it by taking its head lock.
+ */
+ if (head)
spin_lock_irq(&head->lock);
- req->flags |= REQ_F_DOUBLE_POLL;
+
+ req->flags |= REQ_F_DOUBLE_POLL;
+
+ if (head)
spin_unlock_irq(&head->lock);
- }
rcu_read_unlock();
}
--
2.36.1
* [PATCH for-next 2/4] io_uring: don't race double poll setting REQ_F_ASYNC_DATA
From: Pavel Begunkov @ 2022-07-07 14:13 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Just as io_poll_double_prepare() must hold the first entry's head lock
to set REQ_F_DOUBLE_POLL, setting REQ_F_ASYNC_DATA can race with
io_poll_wake() modifying req->flags through the first poll entry. Move
the flag update into io_poll_double_prepare(), where it is serialised
by the same lock.
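The race is a plain lost update: req->flags is changed with non-atomic
read-modify-write operations, so setting REQ_F_ASYNC_DATA without the
first entry's head lock can erase a bit that io_poll_wake() is setting
concurrently, or vice versa. A minimal userspace sketch of that failure
mode, with made-up bit values standing in for the REQ_F_* flags
(deliberately racy, for illustration only):

	#include <pthread.h>
	#include <stdio.h>

	#define F_ASYNC_DATA	(1u << 0)	/* stands in for REQ_F_ASYNC_DATA */
	#define F_WAKE_SIDE	(1u << 1)	/* a bit the wake side would set */

	static unsigned int flags;		/* shared and unlocked: the bug */

	static void *wake_side(void *arg)
	{
		flags |= F_WAKE_SIDE;		/* non-atomic load, or, store */
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, wake_side, NULL);
		flags |= F_ASYNC_DATA;		/* concurrent RMW on the same word */
		pthread_join(t, NULL);
		/* The two updates can interleave so that one bit is lost. */
		printf("flags=%#x\n", flags);	/* 0x3 expected, 0x1 or 0x2 possible */
		return 0;
	}

Taking the head lock around both flag updates, as the patch does, makes
the two writers mutually exclusive and closes the window.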
Fixes: a18427bb2d9b ("io_uring: optimise submission side poll_refs")
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/poll.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 3710a0a46a87..c1359d45a396 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -410,6 +410,8 @@ static void io_poll_double_prepare(struct io_kiocb *req)
spin_lock_irq(&head->lock);
req->flags |= REQ_F_DOUBLE_POLL;
+ if (req->opcode == IORING_OP_POLL_ADD)
+ req->flags |= REQ_F_ASYNC_DATA;
if (head)
spin_unlock_irq(&head->lock);
@@ -448,13 +450,11 @@ static void __io_queue_proc(struct io_poll *poll, struct io_poll_table *pt,
return;
}
- io_poll_double_prepare(req);
/* mark as double wq entry */
wqe_private |= IO_WQE_F_DOUBLE;
io_init_poll_iocb(poll, first->events, first->wait.func);
+ io_poll_double_prepare(req);
*poll_ptr = poll;
- if (req->opcode == IORING_OP_POLL_ADD)
- req->flags |= REQ_F_ASYNC_DATA;
} else {
/* fine to modify, there is no poll queued to race with us */
req->flags |= REQ_F_SINGLE_POLL;
--
2.36.1
* [PATCH for-next 3/4] io_uring: clear REQ_F_HASH_LOCKED on hash removal
From: Pavel Begunkov @ 2022-07-07 14:13 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Instead of clearing REQ_F_HASH_LOCKED while arming a poll, unset the bit
when we're removing the entry from the table in io_poll_tw_hash_eject().
That ties the flag's lifetime to the hash entry's and lets the arming
paths drop their clearing branches.
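The resulting ejection path, sketched with the rationale as comments
(simplified from the hunk below, not verbatim kernel code):

	static void io_poll_tw_hash_eject(struct io_kiocb *req, bool *locked)
	{
		struct io_ring_ctx *ctx = req->ctx;

		if (req->flags & REQ_F_HASH_LOCKED) {
			io_tw_lock(ctx, locked);
			hash_del(&req->hash_node);
			/*
			 * The request is out of the mutex-backed table, so
			 * the flag no longer describes anything; clearing it
			 * here means the arming paths never have to.
			 */
			req->flags &= ~REQ_F_HASH_LOCKED;
		} else {
			io_poll_req_delete(req, ctx);
		}
	}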
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/poll.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index c1359d45a396..77b669b06046 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -132,6 +132,7 @@ static void io_poll_tw_hash_eject(struct io_kiocb *req, bool *locked)
*/
io_tw_lock(ctx, locked);
hash_del(&req->hash_node);
+ req->flags &= ~REQ_F_HASH_LOCKED;
} else {
io_poll_req_delete(req, ctx);
}
@@ -617,9 +618,7 @@ int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
* apoll requests already grab the mutex to complete in the tw handler,
* so removal from the mutex-backed hash is free, use it by default.
*/
- if (issue_flags & IO_URING_F_UNLOCKED)
- req->flags &= ~REQ_F_HASH_LOCKED;
- else
+ if (!(issue_flags & IO_URING_F_UNLOCKED))
req->flags |= REQ_F_HASH_LOCKED;
if (!def->pollin && !def->pollout)
@@ -880,8 +879,6 @@ int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
if (!(issue_flags & IO_URING_F_UNLOCKED) &&
(req->ctx->flags & (IORING_SETUP_SQPOLL | IORING_SETUP_SINGLE_ISSUER)))
req->flags |= REQ_F_HASH_LOCKED;
- else
- req->flags &= ~REQ_F_HASH_LOCKED;
ret = __io_arm_poll_handler(req, poll, &ipt, poll->events, issue_flags);
if (ret > 0) {
--
2.36.1
* [PATCH for-next 4/4] io_uring: consolidate hash_locked io-wq handling
From: Pavel Begunkov @ 2022-07-07 14:13 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Don't duplicate the code disabling REQ_F_HASH_LOCKED for
IO_URING_F_UNLOCKED (i.e. io-wq); move the handling into
__io_arm_poll_handler() so the check lives in one place.
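After the move, the io-wq special case sits in exactly one spot, right
before the request is published to the waitqueues. Sketched with the
rationale as comments (simplified from the first hunk below, not
verbatim kernel code):

	/* in __io_arm_poll_handler(), before vfs_poll() */
	ipt->owning = issue_flags & IO_URING_F_UNLOCKED;
	atomic_set(&req->poll_refs, (int)ipt->owning);

	/*
	 * io-wq issues requests without holding ->uring_lock, so it can
	 * never use the mutex-backed hash. Strip the bit here once instead
	 * of in every caller that optimistically set it.
	 */
	if (issue_flags & IO_URING_F_UNLOCKED)
		req->flags &= ~REQ_F_HASH_LOCKED;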
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/poll.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 77b669b06046..76592063abe7 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -523,8 +523,12 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
* io_poll_can_finish_inline() tries to deal with that.
*/
ipt->owning = issue_flags & IO_URING_F_UNLOCKED;
-
atomic_set(&req->poll_refs, (int)ipt->owning);
+
+ /* io-wq doesn't hold uring_lock */
+ if (issue_flags & IO_URING_F_UNLOCKED)
+ req->flags &= ~REQ_F_HASH_LOCKED;
+
mask = vfs_poll(req->file, &ipt->pt) & poll->events;
if (unlikely(ipt->error || !ipt->nr_entries)) {
@@ -618,8 +622,7 @@ int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
* apoll requests already grab the mutex to complete in the tw handler,
* so removal from the mutex-backed hash is free, use it by default.
*/
- if (!(issue_flags & IO_URING_F_UNLOCKED))
- req->flags |= REQ_F_HASH_LOCKED;
+ req->flags |= REQ_F_HASH_LOCKED;
if (!def->pollin && !def->pollout)
return IO_APOLL_ABORTED;
@@ -876,8 +879,7 @@ int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
* If sqpoll or single issuer, there is no contention for ->uring_lock
* and we'll end up holding it in tw handlers anyway.
*/
- if (!(issue_flags & IO_URING_F_UNLOCKED) &&
- (req->ctx->flags & (IORING_SETUP_SQPOLL | IORING_SETUP_SINGLE_ISSUER)))
+ if (req->ctx->flags & (IORING_SETUP_SQPOLL|IORING_SETUP_SINGLE_ISSUER))
req->flags |= REQ_F_HASH_LOCKED;
ret = __io_arm_poll_handler(req, poll, &ipt, poll->events, issue_flags);
--
2.36.1