[PATCH] io_uring: allow conditional reschedule for intensive iterators
From: Jens Axboe @ 2021-09-24 14:34 UTC (permalink / raw)
  To: io-uring

If we have a lot of threads and rings, the tctx list can get quite big,
especially if we keep creating new threads and rings. The same is true for
the provided buffers list. Be nice and insert a conditional reschedule
point while iterating the nodes for deletion.

Link: https://lore.kernel.org/io-uring/[email protected]/
Reported-by: [email protected]
Signed-off-by: Jens Axboe <[email protected]>

---
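
For reference, this is an instance of the common kernel pattern of adding a
voluntary preemption point inside a potentially long teardown loop. A
minimal sketch of that pattern, using a hypothetical example_destroy_all()
over a generic xarray rather than the io_uring structures themselves:

#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/xarray.h>

/* Tear down every entry in a potentially huge xarray. */
static void example_destroy_all(struct xarray *xa)
{
	unsigned long index;
	void *entry;

	xa_for_each(xa, index, entry) {
		/*
		 * Drop the entry (assumed to be kmalloc()ed here); this loop
		 * may run for many thousands of nodes.
		 */
		xa_erase(xa, index);
		kfree(entry);
		/*
		 * Voluntarily yield if needed so a long teardown does not
		 * hog the CPU or trigger soft lockup / RCU stall warnings.
		 */
		cond_resched();
	}
}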

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fe5e613b960f..ee33d79f9758 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9137,8 +9137,10 @@ static void io_destroy_buffers(struct io_ring_ctx *ctx)
 	struct io_buffer *buf;
 	unsigned long index;
 
-	xa_for_each(&ctx->io_buffers, index, buf)
+	xa_for_each(&ctx->io_buffers, index, buf) {
 		__io_remove_buffers(ctx, buf, index, -1U);
+		cond_resched();
+	}
 }
 
 static void io_req_cache_free(struct list_head *list)
@@ -9636,8 +9638,10 @@ static void io_uring_clean_tctx(struct io_uring_task *tctx)
 	struct io_tctx_node *node;
 	unsigned long index;
 
-	xa_for_each(&tctx->xa, index, node)
+	xa_for_each(&tctx->xa, index, node) {
 		io_uring_del_tctx_node(index);
+		cond_resched();
+	}
 	if (wq) {
 		/*
 		 * Must be after io_uring_del_task_file() (removes nodes under

-- 
Jens Axboe

