[PATCH] io_uring: fix readiness race with poll based retry
From: Jens Axboe @ 2020-05-29 1:51 UTC
To: io-uring
The poll based retry handler uses the same logic as a normal poll
request, but the latter triggers a completion if we hit the slim
race of:
1a) data/space isn't available
2a) data/space becomes available
1b) arm poll handler (returns success, callback not armed)
This isn't the case for the task_work based retry, where we need to
take action if the event triggered in the short window between
attempting the request and arming the poll handler.
Catch this case in __io_arm_poll_handler(), and queue the task_work
upfront instead of depending on the waitq handler triggering it. The
latter isn't armed at this point.
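For illustration, a condensed sketch of the two arming paths with this
change applied (simplified and paraphrased from the diff below and the
surrounding io_uring code; error handling, locking and reference counts
are omitted):

	/* io_poll_add(): a ready mask means we can complete inline */
	mask = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events,
				     io_poll_wake);
	if (mask)
		io_poll_complete(req, mask, 0);

	/* io_arm_poll_handler(): a missed event must queue the task_work */
	ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
				    io_async_wake);
	if (ipt.ready_now)
		io_queue_task_work(req, &apoll->poll, io_async_task_func);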
Fixes: d7718a9d25a6 ("io_uring: use poll driven retry for files that support it")
Reported-by: Dan Melnic <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
diff --git a/fs/io_uring.c b/fs/io_uring.c
index bb25e3997d41..7368e5f2ac79 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4094,6 +4094,7 @@ struct io_poll_table {
struct poll_table_struct pt;
struct io_kiocb *req;
int error;
+ bool ready_now;
};
static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
@@ -4117,22 +4118,13 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
__io_queue_proc(&pt->req->apoll->poll, pt, head);
}
-static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
- __poll_t mask, task_work_func_t func)
+static void io_queue_task_work(struct io_kiocb *req, struct io_poll_iocb *poll,
+ task_work_func_t func)
{
struct task_struct *tsk;
int ret;
- /* for instances that support it check for an event match first: */
- if (mask && !(mask & poll->events))
- return 0;
-
- trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
-
- list_del_init(&poll->wait.entry);
-
tsk = req->task;
- req->result = mask;
init_task_work(&req->task_work, func);
/*
* If this fails, then the task is exiting. When a task exits, the
@@ -4147,6 +4139,20 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
task_work_add(tsk, &req->task_work, true);
}
wake_up_process(tsk);
+}
+
+static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ __poll_t mask, task_work_func_t func)
+{
+ /* for instances that support it check for an event match first: */
+ if (mask && !(mask & poll->events))
+ return 0;
+
+ trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
+
+ list_del_init(&poll->wait.entry);
+ req->result = mask;
+ io_queue_task_work(req, poll, func);
return 1;
}
@@ -4265,6 +4271,8 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
if (unlikely(list_empty(&poll->wait.entry))) {
if (ipt->error)
cancel = true;
+ else if (mask)
+ ipt->ready_now = true;
ipt->error = 0;
mask = 0;
}
@@ -4315,6 +4323,7 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
mask |= POLLERR | POLLPRI;
ipt.pt._qproc = io_async_queue_proc;
+ ipt.ready_now = false;
ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
io_async_wake);
@@ -4329,6 +4338,8 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
spin_unlock_irq(&ctx->completion_lock);
trace_io_uring_poll_arm(ctx, req->opcode, req->user_data, mask,
apoll->poll.events);
+ if (ipt.ready_now)
+ io_queue_task_work(req, &apoll->poll, io_async_task_func);
return true;
}
@@ -4544,6 +4555,7 @@ static int io_poll_add(struct io_kiocb *req)
INIT_HLIST_NODE(&req->hash_node);
INIT_LIST_HEAD(&req->list);
ipt.pt._qproc = io_poll_queue_proc;
+ ipt.ready_now = false;
mask = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events,
io_poll_wake);
--
Jens Axboe
Re: [PATCH] io_uring: fix readiness race with poll based retry
From: Jens Axboe @ 2020-05-29 2:25 UTC
To: io-uring
On 5/28/20 7:51 PM, Jens Axboe wrote:
> The poll based retry handler uses the same logic as a normal poll
> request, but the latter triggers a completion if we hit the slim
> race of:
>
> 1a) data/space isn't available
> 2a) data/space becomes available
> 1b) arm poll handler (returns success, callback not armed)
>
> This isn't the case for the task_work based retry, where we need to
> take action if the event triggered in the short window between
> attempting the request and arming the poll handler.
>
> Catch this case in __io_arm_poll_handler(), and queue the task_work
> upfront instead of depending on the waitq handler triggering it. The
> latter isn't armed at this point.
Disregard this one, I don't think this race exists. If we hit the
poll->head != NULL case, then we definitely added our wait entry to
the waitq. And if the list is empty once we're under the lock, we
already triggered the callback and queued the task_work.
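
In other words, roughly the existing flow in __io_arm_poll_handler()
that the retraction refers to (a condensed sketch paraphrased from the
surrounding code, locking simplified, not a verbatim excerpt):

	mask = vfs_poll(req->file, &ipt->pt) & poll->events;

	spin_lock_irq(&ctx->completion_lock);
	if (likely(poll->head)) {
		/* queue proc ran: our wait entry was added to the waitq */
		if (unlikely(list_empty(&poll->wait.entry))) {
			/*
			 * Entry already gone: the wake callback has run,
			 * did list_del_init() and queued the task_work,
			 * so the readiness event was not lost after all.
			 */
		}
	}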
--
Jens Axboe