* [PATCH 5.12] io_uring: cancel SQPOLL reqs across exec
@ 2021-02-07 22:34 Pavel Begunkov
2021-02-08 15:27 ` Jens Axboe
0 siblings, 1 reply; 2+ messages in thread
From: Pavel Begunkov @ 2021-02-07 22:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
For SQPOLL rings tctx_inflight() always returns zero, so it might skip
doing full cancellation. It's fine because we jam all sqpoll submissions
in any case and do go through files cancel for them, but it's not nice.
Do the intended full cancellation by mimicking the waiting done in
__io_uring_task_cancel(), but impersonating the SQPOLL task.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 57 ++++++++++++++++++++++++++++++++-------------------
1 file changed, 36 insertions(+), 21 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index af3ac85d11cc..90d566e0fc89 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9094,29 +9094,39 @@ void __io_uring_files_cancel(struct files_struct *files)
static s64 tctx_inflight(struct io_uring_task *tctx)
{
- unsigned long index;
- struct file *file;
- s64 inflight;
-
- inflight = percpu_counter_sum(&tctx->inflight);
- if (!tctx->sqpoll)
- return inflight;
+ return percpu_counter_sum(&tctx->inflight);
+}
- /*
- * If we have SQPOLL rings, then we need to iterate and find them, and
- * add the pending count for those.
- */
- xa_for_each(&tctx->xa, index, file) {
- struct io_ring_ctx *ctx = file->private_data;
+static void io_uring_cancel_sqpoll(struct io_ring_ctx *ctx)
+{
+ struct io_uring_task *tctx;
+ s64 inflight;
+ DEFINE_WAIT(wait);
- if (ctx->flags & IORING_SETUP_SQPOLL) {
- struct io_uring_task *__tctx = ctx->sqo_task->io_uring;
+ if (!ctx->sq_data)
+ return;
+ tctx = ctx->sq_data->thread->io_uring;
+ io_disable_sqo_submit(ctx);
- inflight += percpu_counter_sum(&__tctx->inflight);
- }
- }
+ atomic_inc(&tctx->in_idle);
+ do {
+ /* read completions before cancelations */
+ inflight = tctx_inflight(tctx);
+ if (!inflight)
+ break;
+ io_uring_cancel_task_requests(ctx, NULL);
- return inflight;
+ prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+ /*
+ * If we've seen completions, retry without waiting. This
+ * avoids a race where a completion comes in before we did
+ * prepare_to_wait().
+ */
+ if (inflight == tctx_inflight(tctx))
+ schedule();
+ finish_wait(&tctx->wait, &wait);
+ } while (1);
+ atomic_dec(&tctx->in_idle);
}
/*
@@ -9133,8 +9143,13 @@ void __io_uring_task_cancel(void)
atomic_inc(&tctx->in_idle);
/* trigger io_disable_sqo_submit() */
- if (tctx->sqpoll)
- __io_uring_files_cancel(NULL);
+ if (tctx->sqpoll) {
+ struct file *file;
+ unsigned long index;
+
+ xa_for_each(&tctx->xa, index, file)
+ io_uring_cancel_sqpoll(file->private_data);
+ }
do {
/* read completions before cancelations */
--
2.24.0
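
The new io_uring_cancel_sqpoll() follows the same drain-and-wait pattern as
__io_uring_task_cancel(): snapshot the inflight count, cancel, and only sleep
if the count has not moved since the snapshot. Below is a rough userspace
sketch of that pattern, using pthreads in place of the kernel's
waitqueue/schedule() machinery; all names are hypothetical and it is not part
of the patch.

/*
 * Rough userspace analogy of the drain-and-wait loop in
 * io_uring_cancel_sqpoll(); hypothetical names, pthreads instead of
 * the kernel waitqueue API.
 */
#include <pthread.h>
#include <stdatomic.h>

struct drain_ctx {
	atomic_long inflight;		/* requests still in flight */
	pthread_mutex_t lock;		/* protects the condvar */
	pthread_cond_t done;		/* signalled when inflight drops */
};

/* Completion side: decrement inflight, then wake any waiter. */
static void drain_complete_one(struct drain_ctx *d)
{
	atomic_fetch_sub(&d->inflight, 1);
	pthread_mutex_lock(&d->lock);
	pthread_cond_broadcast(&d->done);
	pthread_mutex_unlock(&d->lock);
}

/* Cancel side: keep cancelling until nothing is left in flight. */
static void drain_wait(struct drain_ctx *d, void (*cancel_all)(void))
{
	for (;;) {
		/* read completions before cancellations */
		long inflight = atomic_load(&d->inflight);

		if (!inflight)
			break;
		cancel_all();

		pthread_mutex_lock(&d->lock);
		/*
		 * Only sleep if nothing completed since the snapshot,
		 * mirroring the "if (inflight == tctx_inflight(tctx))
		 * schedule();" check in the patch.
		 */
		if (inflight == atomic_load(&d->inflight))
			pthread_cond_wait(&d->done, &d->lock);
		pthread_mutex_unlock(&d->lock);
	}
}

The re-check under the lock plays the role of the tctx_inflight() re-read
after prepare_to_wait(): a completion racing with the sleep either changes the
count before the re-check, so the waiter retries immediately, or has to take
the mutex and therefore cannot signal before the waiter is registered.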