From: Hao Xu <haoxu@linux.alibaba.com>
To: Jens Axboe
Cc: io-uring@vger.kernel.org, Pavel Begunkov, Joseph Qi
Subject: [PATCH 1/2] io_uring: fix tw list mess-up by adding tw while it's already in tw list
Date: Mon, 13 Sep 2021 00:23:44 +0800
Message-Id: <20210912162345.51651-2-haoxu@linux.alibaba.com>
X-Mailer: git-send-email 2.24.4
In-Reply-To: <20210912162345.51651-1-haoxu@linux.alibaba.com>
References: <20210912162345.51651-1-haoxu@linux.alibaba.com>
X-Mailing-List: io-uring@vger.kernel.org

For multishot mode, there may be cases like:

	io_poll_task_func() -> add_wait_queue()
	                       async_wake() -> io_req_task_work_add()

This io_req_task_work_add() messes up the task_work list that is
currently being run, since req->io_task_work.node is still in use. A
similar situation exists for req->io_task_work.fallback_node. Fix it by
setting node->next = NULL before we run the tw, so that when we add the
req back to the wait queue in the middle of tw running, we can safely
re-add it to the tw list.
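
To make the failure mode concrete, here is a rough standalone sketch
(a simplified userspace model only; tw_node, tw_add_tail, tw_run and
the demo callbacks are invented for illustration and are not the
io_uring structures or helpers) of why re-adding a request whose list
node still carries a stale ->next corrupts the new list, and how
clearing ->next before running the callback avoids it:

	/*
	 * Simplified userspace model (not kernel code): a singly linked
	 * task-work list whose callback may re-queue its own node while
	 * the list is being drained.
	 */
	#include <stdio.h>
	#include <stddef.h>

	struct tw_node {
		struct tw_node *next;
		void (*func)(struct tw_node *);
		const char *name;
	};

	static struct tw_node *tw_first, *tw_last;	/* pending list */

	/* tail add that assumes node->next is already NULL */
	static void tw_add_tail(struct tw_node *node)
	{
		if (!tw_first)
			tw_first = node;
		else
			tw_last->next = node;
		tw_last = node;
	}

	/* drain the pending list, running each callback once */
	static void tw_run(void)
	{
		struct tw_node *node = tw_first;

		tw_first = tw_last = NULL;
		while (node) {
			struct tw_node *next = node->next;

			node->next = NULL;	/* the fix: unlink before running */
			node->func(node);	/* may call tw_add_tail(node) again */
			node = next;
		}
	}

	static void requeue_once(struct tw_node *node)
	{
		static int requeued;

		printf("run %s\n", node->name);
		if (!requeued++)
			tw_add_tail(node);	/* re-add while the drain is in progress */
	}

	static void plain_run(struct tw_node *node)
	{
		printf("run %s\n", node->name);
	}

	int main(void)
	{
		struct tw_node a = { .func = requeue_once, .name = "a" };
		struct tw_node b = { .func = plain_run, .name = "b" };

		tw_add_tail(&a);
		tw_add_tail(&b);
		tw_run();	/* runs "a" then "b"; a re-queues itself with a clean ->next */
		tw_run();	/* runs "a" only.  Without the NULL-ing above, a's stale
				 * ->next (still pointing at b) would be carried into the
				 * new list, so this drain would walk into b again even
				 * though b has already been run. */
		return 0;
	}

With the drained node's ->next cleared, a re-add during the drain
behaves like a fresh queueing, which is what the two hunks below do for
the normal tw list and for the fallback llist respectively.
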
Fixes: 7cbf1722d5fc ("io_uring: provide FIFO ordering for task_work")
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
---
 fs/io_uring.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 30d959416eba..c16f6be3d46b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1216,13 +1216,17 @@ static void io_fallback_req_func(struct work_struct *work)
 	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
 						fallback_work.work);
 	struct llist_node *node = llist_del_all(&ctx->fallback_llist);
-	struct io_kiocb *req, *tmp;
+	struct io_kiocb *req;
 	bool locked = false;
 
 	percpu_ref_get(&ctx->refs);
-	llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
+	req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
+	while (member_address_is_nonnull(req, io_task_work.fallback_node)) {
+		node = req->io_task_work.fallback_node.next;
+		req->io_task_work.fallback_node.next = NULL;
 		req->io_task_work.func(req, &locked);
-
+		req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
+	}
 	if (locked) {
 		if (ctx->submit_state.compl_nr)
 			io_submit_flush_completions(ctx);
@@ -2126,6 +2130,7 @@ static void tctx_task_work(struct callback_head *cb)
 				locked = mutex_trylock(&ctx->uring_lock);
 				percpu_ref_get(&ctx->refs);
 			}
+			node->next = NULL;
 			req->io_task_work.func(req, &locked);
 			node = next;
 		} while (node);
-- 
2.24.4