Date: Fri, 14 Apr 2023 21:59:57 +0800
From: Ming Lei
To: Pavel Begunkov
Cc: Breno Leitao, axboe@kernel.dk, davem@davemloft.net, dccp@vger.kernel.org,
	dsahern@kernel.org, edumazet@google.com, io-uring@vger.kernel.org,
	kuba@kernel.org, leit@fb.com, linux-kernel@vger.kernel.org,
	marcelo.leitner@gmail.com, matthieu.baerts@tessares.net,
	mptcp@lists.linux.dev, netdev@vger.kernel.org, pabeni@redhat.com,
	willemdebruijn.kernel@gmail.com, ming.lei@redhat.com
Subject: Re: [PATCH RFC] io_uring: Pass whole sqe to commands
References: <20230406144330.1932798-1-leitao@debian.org>
	<20230406165705.3161734-1-leitao@debian.org>
	<44420e92-f629-f56e-f930-475be6f6a83a@gmail.com>
In-Reply-To: <44420e92-f629-f56e-f930-475be6f6a83a@gmail.com>

On Fri, Apr 14, 2023 at 02:12:10PM +0100, Pavel Begunkov wrote:
> On 4/14/23 03:12, Ming Lei wrote:
> > On Thu, Apr 13, 2023 at 09:47:56AM -0700, Breno Leitao wrote:
> > > Hello Ming,
> > >
> > > On Thu, Apr 13, 2023 at 10:56:49AM +0800, Ming Lei wrote:
> > > > On Thu, Apr 06, 2023 at 09:57:05AM -0700, Breno Leitao wrote:
> > > > > Currently the uring CMD operation relies on having large SQEs, but future
> > > > > operations might want to use normal
> > > > > SQEs.
> > > > >
> > > > > The io_uring_cmd currently only saves the payload (cmd) part of the SQE,
> > > > > but, for commands that use the normal SQE size, it might be necessary to
> > > > > access the initial SQE fields outside of the payload/cmd block. So,
> > > > > save the whole SQE rather than just the pdu.
> > > > >
> > > > > This changes slightly how the io_uring_cmd works, since the cmd
> > > > > structures and callbacks are not opaque to io_uring anymore. I.e., the
> > > > > callbacks can look at the SQE entries, not only at the cmd structure.
> > > > >
> > > > > The main advantage is that we don't need to create custom structures for
> > > > > simple commands.
> > > > >
> > > > > Suggested-by: Pavel Begunkov
> > > > > Signed-off-by: Breno Leitao
> > > > > ---
> > > >
> > > > ...
> > > >
> > > > > diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> > > > > index 2e4c483075d3..9648134ccae1 100644
> > > > > --- a/io_uring/uring_cmd.c
> > > > > +++ b/io_uring/uring_cmd.c
> > > > > @@ -63,14 +63,15 @@ EXPORT_SYMBOL_GPL(io_uring_cmd_done);
> > > > >  int io_uring_cmd_prep_async(struct io_kiocb *req)
> > > > >  {
> > > > >  	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
> > > > > -	size_t cmd_size;
> > > > > +	size_t size = sizeof(struct io_uring_sqe);
> > > > >
> > > > >  	BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
> > > > >  	BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
> > > > >
> > > > > -	cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
> > > > > +	if (req->ctx->flags & IORING_SETUP_SQE128)
> > > > > +		size <<= 1;
> > > > >
> > > > > -	memcpy(req->async_data, ioucmd->cmd, cmd_size);
> > > > > +	memcpy(req->async_data, ioucmd->sqe, size);
> > > >
> > > > The copy will make some fields of the sqe become READ TWICE, and the driver
> > > > may see a different sqe field value compared with the one observed in
> > > > io_init_req().
> > >
> > > This copy only happens if the operation goes to the async path
> > > (calling io_uring_cmd_prep_async()). This only happens if
> > > f_op->uring_cmd() returns -EAGAIN.
> > >
> > > 	ret = file->f_op->uring_cmd(ioucmd, issue_flags);
> > > 	if (ret == -EAGAIN) {
> > > 		if (!req_has_async_data(req)) {
> > > 			if (io_alloc_async_data(req))
> > > 				return -ENOMEM;
> > > 			io_uring_cmd_prep_async(req);
> > > 		}
> > > 		return -EAGAIN;
> > > 	}
> > >
> > > Are you saying that after this copy, the operation is still reading from
> > > the sqe instead of req->async_data?
> >
> > I meant that the 2nd read is on the sqe copy (req->async_data), but the same
> > fields can become different between the two READs (the first is done on the
> > original SQE during io_init_req(), and the second is done on the sqe copy in
> > the driver).
> >
> > Will this kind of inconsistency cause trouble for the driver? Because READ
> > TWICE becomes possible with this patch.
>
> Right, it might happen, and I was keeping that in mind, but it's not
> specific to this patch. It won't reload core io_uring bits, and all

It depends on whether the driver reloads core bits or not; either way, the
patch exports all fields and opens the window.

> fields cmds use already have this problem.

The driver is supposed to load the cmd fields just once too, right?

>
> Unless there is a better option, the direction we'll be moving in is
> adding a preparation step that should read and stash the parts of the SQE
> it cares about, which should also make the full SQE copy not
> needed / optional.

Sounds good.

Thanks,
Ming
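
As a rough illustration of that "read and stash at prep time" direction (this
is not code from the patch: the struct my_cmd_pdu and my_cmd_prep() names are
made up for the example, and where exactly such a prep hook would be invoked is
part of the future design, while struct io_uring_cmd, its sqe/pdu members, and
READ_ONCE() are existing kernel interfaces), a driver-side prep step could look
roughly like this:

	#include <linux/io_uring.h>	/* struct io_uring_cmd, struct io_uring_sqe */

	/* Hypothetical per-command state, small enough to live in ioucmd->pdu. */
	struct my_cmd_pdu {
		u64	user_addr;
		u32	len;
		u32	flags;
	};

	static int my_cmd_prep(struct io_uring_cmd *ioucmd)
	{
		const struct io_uring_sqe *sqe = ioucmd->sqe;
		struct my_cmd_pdu *pdu = (struct my_cmd_pdu *)ioucmd->pdu;

		BUILD_BUG_ON(sizeof(*pdu) > sizeof(ioucmd->pdu));

		/*
		 * Read each SQE field exactly once and keep only the stashed
		 * copy; later (async) processing never touches the shared SQE
		 * ring again, so a concurrent userspace update cannot cause
		 * the READ TWICE inconsistency discussed above.
		 */
		pdu->user_addr = READ_ONCE(sqe->addr);
		pdu->len       = READ_ONCE(sqe->len);
		pdu->flags     = READ_ONCE(sqe->rw_flags);

		return 0;
	}

With something like that in place, only the stashed fields need to survive an
-EAGAIN round trip, so the full-SQE copy in io_uring_cmd_prep_async() could
indeed become optional.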