From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefan Roesch
To: Jens Axboe
Cc: io-uring@vger.kernel.org, kernel-team@fb.com, ammarfaizi2@gnuweeb.org,
	Olivier Langlois, Jakub Kicinski
Subject: Re: [PATCH v12 2/5] io-uring: add napi busy poll support
Date: Thu, 04 May 2023 09:06:10 -0700
References: <20230502165332.2075091-1-shr@devkernel.io>
	<20230502165332.2075091-3-shr@devkernel.io>
User-agent: mu4e 1.10.1; emacs 28.2.50
X-Mailing-List: io-uring@vger.kernel.org

Jens Axboe writes:

> I mentioned this in our out-of-band discussions on this patch set, and
> we cannot call napi_busy_loop() under rcu_read_lock() if loop_end and
> loop_end_arg is set
> AND loop_end() doesn't always return true. Because otherwise we can
> end up with napi_busy_loop() doing:
>
>	if (unlikely(need_resched())) {
>		if (napi_poll)
>			busy_poll_stop(napi, have_poll_lock, prefer_busy_poll, budget);
>		preempt_enable();
>		rcu_read_unlock();
>		cond_resched();
>		if (loop_end(loop_end_arg, start_time))
>			return;
>		goto restart;
>	}
>
> and hence we're now scheduling with rcu read locking disabled. So we
> need to handle that case appropriately as well.

I'll have a look.