Subject: Re: [PATCH v2] io_uring: reduced function parameter ctx if possible
To: Jackie Liu, Jens Axboe
Cc: io-uring@vger.kernel.org
References: <1573198177-177651-1-git-send-email-liuyun01@kylinos.cn>
From: Bob Liu
Message-ID: <34c0b6a3-5bc8-8b83-dc96-e513e78bef03@oracle.com>
Date: Fri, 8 Nov 2019 17:10:31 +0800
On 11/8/19 5:00 PM, Jackie Liu wrote:
> oops, please ignore this, it crashes with commit '2665abfd757fb35a241c6f0b1ebf620e3ffb36fb'
> I need to test more, and will resend later.
>

Hmm.. io_link_cancel_timeout() may free req.. (see the sketch appended after the quoted patch)

>> On 8 Nov 2019, at 15:29, Jackie Liu wrote:
>>
>> In most of these functions the core object is req, and req->ctx is
>> already set at initialization time, so there is no need to pass ctx
>> in from the outside.
>>
>> Cleanup, no functional change.
>>
>> Signed-off-by: Jackie Liu
>> ---
>> V2:
>>  - rebase to branch for-5.5/io_uring: 2665abfd757fb35a241c6f0b1ebf620e3ffb36fb
>>
>>  fs/io_uring.c | 109 +++++++++++++++++++++++++++++++---------------------------
>>  1 file changed, 58 insertions(+), 51 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index eadd19a..2cc53e3 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -429,20 +429,20 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
>>          return ctx;
>>  }
>>
>> -static inline bool __io_sequence_defer(struct io_ring_ctx *ctx,
>> -                                       struct io_kiocb *req)
>> +static inline bool __io_sequence_defer(struct io_kiocb *req)
>>  {
>> +        struct io_ring_ctx *ctx = req->ctx;
>> +
>>          return req->sequence != ctx->cached_cq_tail + ctx->cached_sq_dropped
>>                          + atomic_read(&ctx->cached_cq_overflow);
>>  }
>>
>> -static inline bool io_sequence_defer(struct io_ring_ctx *ctx,
>> -                                     struct io_kiocb *req)
>> +static inline bool io_sequence_defer(struct io_kiocb *req)
>>  {
>>          if ((req->flags & (REQ_F_IO_DRAIN|REQ_F_IO_DRAINED)) != REQ_F_IO_DRAIN)
>>                  return false;
>>
>> -        return __io_sequence_defer(ctx, req);
>> +        return __io_sequence_defer(req);
>>  }
>>
>>  static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
>> @@ -450,7 +450,7 @@ static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
>>          struct io_kiocb *req;
>>
>>          req = list_first_entry_or_null(&ctx->defer_list, struct io_kiocb, list);
>> -        if (req && !io_sequence_defer(ctx, req)) {
>> +        if (req && !io_sequence_defer(req)) {
>>                  list_del_init(&req->list);
>>                  return req;
>>          }
>> @@ -463,7 +463,7 @@ static struct io_kiocb *io_get_timeout_req(struct io_ring_ctx *ctx)
>>          struct io_kiocb *req;
>>
>>          req = list_first_entry_or_null(&ctx->timeout_list, struct io_kiocb, list);
>> -        if (req && !__io_sequence_defer(ctx, req)) {
>> +        if (req && !__io_sequence_defer(req)) {
>>                  list_del_init(&req->list);
>>                  return req;
>>          }
>> @@ -512,10 +512,10 @@ static inline bool io_prep_async_work(struct io_kiocb *req)
>>          return do_hashed;
>>  }
>>
>> -static inline void io_queue_async_work(struct io_ring_ctx *ctx,
>> -                                       struct io_kiocb *req)
>> +static inline void io_queue_async_work(struct io_kiocb *req)
>>  {
>>          bool do_hashed = io_prep_async_work(req);
>> +        struct io_ring_ctx *ctx = req->ctx;
>>
>>          trace_io_uring_queue_async_work(ctx, do_hashed, req, &req->work,
>>                                          req->flags);
>> @@ -566,7 +566,7 @@ static void io_commit_cqring(struct io_ring_ctx *ctx)
>>                          continue;
>>                  }
>>                  req->flags |= REQ_F_IO_DRAINED;
>> -                io_queue_async_work(ctx, req);
>> +                io_queue_async_work(req);
>>          }
>>  }
>>
>> @@ -714,9 +714,9 @@ static void __io_free_req(struct io_kiocb *req)
>>          kmem_cache_free(req_cachep, req);
>>  }
>>
>> -static bool io_link_cancel_timeout(struct io_ring_ctx *ctx,
>> -                                   struct io_kiocb *req)
>> +static bool io_link_cancel_timeout(struct io_kiocb *req)
>>  {
>> +        struct io_ring_ctx *ctx = req->ctx;
>>          int ret;
>>
>>          ret = hrtimer_try_to_cancel(&req->timeout.timer);
>> @@ -756,7 +756,7 @@ static void io_req_link_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
>>                   * in this context instead of having to queue up new async work.
>>                   */
>>                  if (req->flags & REQ_F_LINK_TIMEOUT) {
>> -                        wake_ev = io_link_cancel_timeout(ctx, nxt);
>> +                        wake_ev = io_link_cancel_timeout(nxt);
>>
>>                          /* we dropped this link, get next */
>>                          nxt = list_first_entry_or_null(&req->link_list,
>> @@ -765,7 +765,7 @@ static void io_req_link_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
>>                          *nxtptr = nxt;
>>                          break;
>>                  } else {
>> -                        io_queue_async_work(req->ctx, nxt);
>> +                        io_queue_async_work(nxt);
>>                          break;
>>                  }
>>          }
>> @@ -793,7 +793,7 @@ static void io_fail_links(struct io_kiocb *req)
>>
>>                  if ((req->flags & REQ_F_LINK_TIMEOUT) &&
>>                      link->submit.sqe->opcode == IORING_OP_LINK_TIMEOUT) {
>> -                        io_link_cancel_timeout(ctx, link);
>> +                        io_link_cancel_timeout(link);
>>                  } else {
>>                          io_cqring_fill_event(ctx, link->user_data, -ECANCELED);
>>                          __io_free_req(link);
>> @@ -862,7 +862,7 @@ static void io_put_req(struct io_kiocb *req, struct io_kiocb **nxtptr)
>>                  if (nxtptr)
>>                          *nxtptr = nxt;
>>                  else
>> -                        io_queue_async_work(nxt->ctx, nxt);
>> +                        io_queue_async_work(nxt);
>>          }
>>  }
>>
>> @@ -1803,7 +1803,7 @@ static void io_poll_remove_one(struct io_kiocb *req)
>>          WRITE_ONCE(poll->canceled, true);
>>          if (!list_empty(&poll->wait.entry)) {
>>                  list_del_init(&poll->wait.entry);
>> -                io_queue_async_work(req->ctx, req);
>> +                io_queue_async_work(req);
>>          }
>>          spin_unlock(&poll->head->lock);
>>
>> @@ -1855,9 +1855,10 @@ static int io_poll_remove(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>>          return 0;
>>  }
>>
>> -static void io_poll_complete(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> -                             __poll_t mask)
>> +static void io_poll_complete(struct io_kiocb *req, __poll_t mask)
>>  {
>> +        struct io_ring_ctx *ctx = req->ctx;
>> +
>>          req->poll.done = true;
>>          io_cqring_fill_event(ctx, req->user_data, mangle_poll(mask));
>>          io_commit_cqring(ctx);
>> @@ -1893,7 +1894,7 @@ static void io_poll_complete_work(struct io_wq_work **workptr)
>>                  return;
>>          }
>>          list_del_init(&req->list);
>> -        io_poll_complete(ctx, req, mask);
>> +        io_poll_complete(req, mask);
>>          spin_unlock_irq(&ctx->completion_lock);
>>
>>          io_cqring_ev_posted(ctx);
>> @@ -1921,13 +1922,13 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>>
>>          if (mask && spin_trylock_irqsave(&ctx->completion_lock, flags)) {
>>                  list_del(&req->list);
>> -                io_poll_complete(ctx, req, mask);
>> +                io_poll_complete(req, mask);
>>                  spin_unlock_irqrestore(&ctx->completion_lock, flags);
>>
>>                  io_cqring_ev_posted(ctx);
>>                  io_put_req(req, NULL);
>>          } else {
>> -                io_queue_async_work(ctx, req);
>> +                io_queue_async_work(req);
>>          }
>>
>>          return 1;
>> @@ -2012,7 +2013,7 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>          }
>>          if (mask) { /* no async, we'd stolen it */
>>                  ipt.error = 0;
>> -                io_poll_complete(ctx, req, mask);
>> +                io_poll_complete(req, mask);
>>          }
>>          spin_unlock_irq(&ctx->completion_lock);
>>
>> @@ -2259,12 +2260,13 @@ static int io_async_cancel(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>          return 0;
>>  }
>>
>> -static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req)
>> +static int io_req_defer(struct io_kiocb *req)
>>  {
>>          const struct io_uring_sqe *sqe = req->submit.sqe;
>>          struct io_uring_sqe *sqe_copy;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>
>> -        if (!io_sequence_defer(ctx, req) && list_empty(&ctx->defer_list))
>> +        if (!io_sequence_defer(req) && list_empty(&ctx->defer_list))
>>                  return 0;
>>
>>          sqe_copy = kmalloc(sizeof(*sqe_copy), GFP_KERNEL);
>> @@ -2272,7 +2274,7 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>                  return -EAGAIN;
>>
>>          spin_lock_irq(&ctx->completion_lock);
>> -        if (!io_sequence_defer(ctx, req) && list_empty(&ctx->defer_list)) {
>> +        if (!io_sequence_defer(req) && list_empty(&ctx->defer_list)) {
>>                  spin_unlock_irq(&ctx->completion_lock);
>>                  kfree(sqe_copy);
>>                  return 0;
>> @@ -2287,11 +2289,12 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>          return -EIOCBQUEUED;
>>  }
>>
>> -static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> -                           struct io_kiocb **nxt, bool force_nonblock)
>> +static int __io_submit_sqe(struct io_kiocb *req, struct io_kiocb **nxt,
>> +                           bool force_nonblock)
>>  {
>>          int ret, opcode;
>>          struct sqe_submit *s = &req->submit;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>
>>          req->user_data = READ_ONCE(s->sqe->user_data);
>>
>> @@ -2389,7 +2392,7 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
>>          s->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
>>          s->in_async = true;
>>          do {
>> -                ret = __io_submit_sqe(ctx, req, &nxt, false);
>> +                ret = __io_submit_sqe(req, &nxt, false);
>>                  /*
>>                   * We can get EAGAIN for polled IO even though we're
>>                   * forcing a sync submission from here, since we can't
>> @@ -2443,10 +2446,10 @@ static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
>>          return table->files[index & IORING_FILE_TABLE_MASK];
>>  }
>>
>> -static int io_req_set_file(struct io_ring_ctx *ctx,
>> -                           struct io_submit_state *state, struct io_kiocb *req)
>> +static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req)
>>  {
>>          struct sqe_submit *s = &req->submit;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>          unsigned flags;
>>          int fd;
>>
>> @@ -2486,9 +2489,10 @@ static int io_req_set_file(struct io_ring_ctx *ctx,
>>          return 0;
>>  }
>>
>> -static int io_grab_files(struct io_ring_ctx *ctx, struct io_kiocb *req)
>> +static int io_grab_files(struct io_kiocb *req)
>>  {
>>          int ret = -EBADF;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>
>>          rcu_read_lock();
>>          spin_lock_irq(&ctx->inflight_lock);
>> @@ -2604,8 +2608,9 @@ static inline struct io_kiocb *io_get_linked_timeout(struct io_kiocb *req)
>>          return NULL;
>>  }
>>
>> -static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>> +static int __io_queue_sqe(struct io_kiocb *req)
>>  {
>> +        struct io_ring_ctx *ctx = req->ctx;
>>          struct io_kiocb *nxt;
>>          int ret;
>>
>> @@ -2616,7 +2621,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>                  goto err;
>>          }
>>
>> -        ret = __io_submit_sqe(ctx, req, NULL, true);
>> +        ret = __io_submit_sqe(req, NULL, true);
>>
>>          /*
>>           * We async punt it if the file wasn't marked NOWAIT, or if the file
>> @@ -2631,7 +2636,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>                  if (sqe_copy) {
>>                          s->sqe = sqe_copy;
>>                          if (req->work.flags & IO_WQ_WORK_NEEDS_FILES) {
>> -                                ret = io_grab_files(ctx, req);
>> +                                ret = io_grab_files(req);
>>                                  if (ret) {
>>                                          kfree(sqe_copy);
>>                                          goto err;
>> @@ -2642,7 +2647,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>                           * Queued up for async execution, worker will release
>>                           * submit reference when the iocb is actually submitted.
>>                           */
>> -                        io_queue_async_work(ctx, req);
>> +                        io_queue_async_work(req);
>>                          return 0;
>>                  }
>>          }
>> @@ -2662,11 +2667,12 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>          return ret;
>>  }
>>
>> -static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>> +static int io_queue_sqe(struct io_kiocb *req)
>>  {
>> +        struct io_ring_ctx *ctx = req->ctx;
>>          int ret;
>>
>> -        ret = io_req_defer(ctx, req);
>> +        ret = io_req_defer(req);
>>          if (ret) {
>>                  if (ret != -EIOCBQUEUED) {
>>                          io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
>> @@ -2675,17 +2681,17 @@ static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>                  return 0;
>>          }
>>
>> -        return __io_queue_sqe(ctx, req);
>> +        return __io_queue_sqe(req);
>>  }
>>
>> -static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> -                              struct io_kiocb *shadow)
>> +static int io_queue_link_head(struct io_kiocb *req, struct io_kiocb *shadow)
>>  {
>>          int ret;
>>          int need_submit = false;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>
>>          if (!shadow)
>> -                return io_queue_sqe(ctx, req);
>> +                return io_queue_sqe(req);
>>
>>          /*
>>           * Mark the first IO in link list as DRAIN, let all the following
>> @@ -2693,7 +2699,7 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>           * list.
>>           */
>>          req->flags |= REQ_F_IO_DRAIN;
>> -        ret = io_req_defer(ctx, req);
>> +        ret = io_req_defer(req);
>>          if (ret) {
>>                  if (ret != -EIOCBQUEUED) {
>>                          io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
>> @@ -2716,18 +2722,19 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>          spin_unlock_irq(&ctx->completion_lock);
>>
>>          if (need_submit)
>> -                return __io_queue_sqe(ctx, req);
>> +                return __io_queue_sqe(req);
>>
>>          return 0;
>>  }
>>
>>  #define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK)
>>
>> -static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> -                          struct io_submit_state *state, struct io_kiocb **link)
>> +static void io_submit_sqe(struct io_kiocb *req, struct io_submit_state *state,
>> +                          struct io_kiocb **link)
>>  {
>>          struct io_uring_sqe *sqe_copy;
>>          struct sqe_submit *s = &req->submit;
>> +        struct io_ring_ctx *ctx = req->ctx;
>>          int ret;
>>
>>          /* enforce forwards compatibility on users */
>> @@ -2736,7 +2743,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>                  goto err_req;
>>          }
>>
>> -        ret = io_req_set_file(ctx, state, req);
>> +        ret = io_req_set_file(state, req);
>>          if (unlikely(ret)) {
>>  err_req:
>>                  io_cqring_add_event(ctx, s->sqe->user_data, ret);
>> @@ -2775,7 +2782,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>                          ret = -EINVAL;
>>                          goto err_req;
>>          } else {
>> -                io_queue_sqe(ctx, req);
>> +                io_queue_sqe(req);
>>          }
>>  }
>>
>> @@ -2919,7 +2926,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>                  req->submit.needs_fixed_file = async;
>>                  trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
>>                                            true, async);
>> -                io_submit_sqe(ctx, req, statep, &link);
>> +                io_submit_sqe(req, statep, &link);
>>                  submitted++;
>>
>>                  /*
>> @@ -2927,14 +2934,14 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>                   * that's the end of the chain. Submit the previous link.
>>                   */
>>                  if (!(sqe_flags & IOSQE_IO_LINK) && link) {
>> -                        io_queue_link_head(ctx, link, shadow_req);
>> +                        io_queue_link_head(link, shadow_req);
>>                          link = NULL;
>>                          shadow_req = NULL;
>>                  }
>>          }
>>
>>          if (link)
>> -                io_queue_link_head(ctx, link, shadow_req);
>> +                io_queue_link_head(link, shadow_req);
>>          if (statep)
>>                  io_submit_state_end(&state);
>>
>> --
>> 2.7.4
>>
>>
>>
>
>
> --
> BR, Jackie Liu
>
>
>
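
To make the concern above concrete, here is a standalone, hypothetical sketch (not io_uring code; struct sketch_req, struct sketch_ctx, sketch_cancel and their fields are made-up names) of why a helper that may free its req argument has to cache req->ctx before the free. The same reasoning applies to callers that still need the ring context after calling a function like io_link_cancel_timeout() once ctx is derived from req inside the callee.

/*
 * Illustrative only: shows the "cache ctx before req may be freed"
 * pattern, under the assumption that the cancel helper can be the
 * last reference holder of req.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sketch_ctx {
	int cq_events;
};

struct sketch_req {
	struct sketch_ctx *ctx;
	bool cancelled;
};

/* May free req, mirroring the concern about io_link_cancel_timeout(). */
static bool sketch_cancel(struct sketch_req *req)
{
	struct sketch_ctx *ctx = req->ctx;	/* cache before any free */

	req->cancelled = true;
	free(req);				/* req must not be touched past this point */

	ctx->cq_events++;			/* safe: uses the cached ctx */
	return true;
}

int main(void)
{
	struct sketch_ctx ctx = { .cq_events = 0 };
	struct sketch_req *req = malloc(sizeof(*req));

	if (!req)
		return 1;
	req->ctx = &ctx;
	req->cancelled = false;

	/*
	 * Caller side: if the caller still needs the context after the
	 * call, it also has to save req->ctx up front, because req may
	 * no longer be valid once sketch_cancel() returns.
	 */
	struct sketch_ctx *saved = req->ctx;

	sketch_cancel(req);
	printf("cq_events=%d\n", saved->cq_events);
	return 0;
}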