From: Caleb Sander Mateos
To: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg
Cc: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org, Anuj gupta, Kanchan Joshi,
	Caleb Sander Mateos
Subject: [PATCH v3 1/4] io_uring: add REQ_F_IOPOLL
Date: Thu, 19 Feb 2026 10:22:24 -0700
Message-ID: <20260219172228.429479-2-csander@purestorage.com>
In-Reply-To: <20260219172228.429479-1-csander@purestorage.com>
References: <20260219172228.429479-1-csander@purestorage.com>

A subsequent commit will allow uring_cmds on files that don't implement
->uring_cmd_iopoll() to be issued to IORING_SETUP_IOPOLL io_urings. This
means the ctx's IORING_SETUP_IOPOLL flag isn't sufficient to determine
whether a given request needs to be iopolled.

Introduce a request flag REQ_F_IOPOLL, set in ->issue() if a request
needs to be iopolled to completion. Set the flag in io_rw_init_file()
and io_uring_cmd() for requests issued to IORING_SETUP_IOPOLL ctxs. Use
the request flag instead of IORING_SETUP_IOPOLL in places dealing with a
specific request.

A future possibility would be to add an option to enable/disable iopoll
in the io_uring SQE instead of determining it from IORING_SETUP_IOPOLL.

Signed-off-by: Caleb Sander Mateos
---
 include/linux/io_uring_types.h |  3 +++
 io_uring/io_uring.c            |  9 ++++-----
 io_uring/rw.c                  | 11 ++++++-----
 io_uring/uring_cmd.c           |  5 +++--
 4 files changed, 16 insertions(+), 12 deletions(-)
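(Reviewer context, not part of the patch to apply: below is a minimal
userspace sketch of creating the kind of IORING_SETUP_IOPOLL ring these
requests are issued on, assuming liburing. The device path and 4 KiB
buffer are illustrative only; IOPOLL reads additionally require O_DIRECT
on a file whose f_op implements ->iopoll, e.g. an NVMe block device.)

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd;

	/* every request on this ring must be polled to completion */
	if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL))
		exit(1);

	/* IOPOLL reads/writes require IOCB_DIRECT (see io_rw_init_file()) */
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		exit(1);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);

	/* with IOPOLL this busy-polls the device instead of sleeping */
	if (!io_uring_wait_cqe(&ring, &cqe))
		io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	return 0;
}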
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 3e4a82a6f817..d74b2a8c7305 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -541,10 +541,11 @@ enum {
 	REQ_F_BUFFERS_COMMIT_BIT,
 	REQ_F_BUF_NODE_BIT,
 	REQ_F_HAS_METADATA_BIT,
 	REQ_F_IMPORT_BUFFER_BIT,
 	REQ_F_SQE_COPIED_BIT,
+	REQ_F_IOPOLL_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
 };
 
@@ -632,10 +633,12 @@ enum {
 	 * For SEND_ZC, whether to import buffers (i.e. the first issue).
 	 */
 	REQ_F_IMPORT_BUFFER	= IO_REQ_FLAG(REQ_F_IMPORT_BUFFER_BIT),
 	/* ->sqe_copy() has been called, if necessary */
 	REQ_F_SQE_COPIED	= IO_REQ_FLAG(REQ_F_SQE_COPIED_BIT),
+	/* request must be iopolled to completion (set in ->issue()) */
+	REQ_F_IOPOLL		= IO_REQ_FLAG(REQ_F_IOPOLL_BIT),
 };
 
 struct io_tw_req {
 	struct io_kiocb *req;
 };
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ccab8562d273..43059f6e10e0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -354,11 +354,10 @@ static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
 }
 
 static void io_prep_async_work(struct io_kiocb *req)
 {
 	const struct io_issue_def *def = &io_issue_defs[req->opcode];
-	struct io_ring_ctx *ctx = req->ctx;
 
 	if (!(req->flags & REQ_F_CREDS)) {
 		req->flags |= REQ_F_CREDS;
 		req->creds = get_current_cred();
 	}
@@ -376,11 +375,11 @@ static void io_prep_async_work(struct io_kiocb *req)
 
 		/* don't serialize this request if the fs doesn't need it */
 		if (should_hash && (req->file->f_flags & O_DIRECT) &&
 		    (req->file->f_op->fop_flags & FOP_DIO_PARALLEL_WRITE))
 			should_hash = false;
-		if (should_hash || (ctx->flags & IORING_SETUP_IOPOLL))
+		if (should_hash || (req->flags & REQ_F_IOPOLL))
 			io_wq_hash_work(&req->work, file_inode(req->file));
 	} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
 		if (def->unbound_nonreg_file)
 			atomic_or(IO_WQ_WORK_UNBOUND, &req->work.flags);
 	}
@@ -1417,11 +1416,11 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret == IOU_ISSUE_SKIP_COMPLETE) {
 		ret = 0;
 
 		/* If the op doesn't have a file, we're not polling for it */
-		if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
+		if ((req->flags & REQ_F_IOPOLL) && def->iopoll_queue)
 			io_iopoll_req_issued(req, issue_flags);
 	}
 
 	return ret;
 }
@@ -1433,11 +1432,11 @@ int io_poll_issue(struct io_kiocb *req, io_tw_token_t tw)
 	int ret;
 
 	io_tw_lock(req->ctx, tw);
 
 	WARN_ON_ONCE(!req->file);
-	if (WARN_ON_ONCE(req->ctx->flags & IORING_SETUP_IOPOLL))
+	if (WARN_ON_ONCE(req->flags & REQ_F_IOPOLL))
 		return -EFAULT;
 
 	ret = __io_issue_sqe(req, issue_flags, &io_issue_defs[req->opcode]);
 
 	WARN_ON_ONCE(ret == IOU_ISSUE_SKIP_COMPLETE);
@@ -1531,11 +1530,11 @@ void io_wq_submit_work(struct io_wq_work *work)
 			 * We can get EAGAIN for iopolled IO even though we're
 			 * forcing a sync submission from here, since we can't
 			 * wait for request slots on the block side.
 			 */
 			if (!needs_poll) {
-				if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
+				if (!(req->flags & REQ_F_IOPOLL))
 					break;
 				if (io_wq_worker_stopped())
 					break;
 				cond_resched();
 				continue;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 1a5f262734e8..3bdb9914e673 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -502,11 +502,11 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
 	struct io_ring_ctx *ctx = req->ctx;
 
 	if (!S_ISBLK(mode) && !S_ISREG(mode))
 		return false;
 	if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
-	    !(ctx->flags & IORING_SETUP_IOPOLL)))
+	    !(req->flags & REQ_F_IOPOLL)))
 		return false;
 	/*
 	 * If ref is dying, we might be running poll reap from the exit work.
 	 * Don't attempt to reissue from that path, just let it fail with
 	 * -EAGAIN.
@@ -638,11 +638,11 @@ static inline void io_rw_done(struct io_kiocb *req, ssize_t ret)
 			ret = -EINTR;
 			break;
 		}
 	}
 
-	if (req->ctx->flags & IORING_SETUP_IOPOLL)
+	if (req->flags & REQ_F_IOPOLL)
 		io_complete_rw_iopoll(&rw->kiocb, ret);
 	else
 		io_complete_rw(&rw->kiocb, ret);
 }
@@ -652,11 +652,11 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
 	unsigned final_ret = io_fixup_rw_res(req, ret);
 
 	if (ret >= 0 && req->flags & REQ_F_CUR_POS)
 		req->file->f_pos = rw->kiocb.ki_pos;
-	if (ret >= 0 && !(req->ctx->flags & IORING_SETUP_IOPOLL)) {
+	if (ret >= 0 && !(req->flags & REQ_F_IOPOLL)) {
 		u32 cflags = 0;
 
 		__io_complete_rw_common(req, ret);
 		/*
 		 * Safe to call io_end from here as we're inline
@@ -874,10 +874,11 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
 		req->flags |= REQ_F_NOWAIT;
 
 	if (ctx->flags & IORING_SETUP_IOPOLL) {
 		if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
 			return -EOPNOTSUPP;
+		req->flags |= REQ_F_IOPOLL;
 
 		kiocb->private = NULL;
 		kiocb->ki_flags |= IOCB_HIPRI;
 		req->iopoll_completed = 0;
 		if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
 			/* make sure every req only blocks once*/
@@ -961,11 +962,11 @@ static int __io_read(struct io_kiocb *req, struct io_br_sel *sel,
 	if (ret == -EAGAIN) {
 		/* If we can poll, just do that. */
 		if (io_file_can_poll(req))
 			return -EAGAIN;
 		/* IOPOLL retry should happen for io-wq threads */
-		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
+		if (!force_nonblock && !(req->flags & REQ_F_IOPOLL))
 			goto done;
 		/* no retry on NONBLOCK nor RWF_NOWAIT */
 		if (req->flags & REQ_F_NOWAIT)
 			goto done;
 		ret = 0;
@@ -1186,11 +1187,11 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 		/* no retry on NONBLOCK nor RWF_NOWAIT */
 		if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
 			goto done;
 		if (!force_nonblock || ret2 != -EAGAIN) {
 			/* IOPOLL retry should happen for io-wq threads */
-			if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
+			if (ret2 == -EAGAIN && (req->flags & REQ_F_IOPOLL))
 				goto ret_eagain;
 
 			if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) {
 				trace_io_uring_short_write(req->ctx, kiocb->ki_pos - ret2,
 							   req->cqe.res, ret2);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index ee7b49f47cb5..b651c63f6e20 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -108,11 +108,11 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 	 * Doing cancelations on IOPOLL requests are not supported. Both
 	 * because they can't get canceled in the block stack, but also
 	 * because iopoll completion data overlaps with the hash_node used
 	 * for tracking.
 	 */
-	if (ctx->flags & IORING_SETUP_IOPOLL)
+	if (req->flags & REQ_F_IOPOLL)
 		return;
 
 	if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) {
 		cmd->flags |= IORING_URING_CMD_CANCELABLE;
 		io_ring_submit_lock(ctx, issue_flags);
@@ -165,11 +165,11 @@ void __io_uring_cmd_done(struct io_uring_cmd *ioucmd, s32 ret, u64 res2,
 		if (req->ctx->flags & IORING_SETUP_CQE_MIXED)
 			req->cqe.flags |= IORING_CQE_F_32;
 		io_req_set_cqe32_extra(req, res2, 0);
 	}
 	io_req_uring_cleanup(req, issue_flags);
-	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
+	if (req->flags & REQ_F_IOPOLL) {
 		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
 		smp_store_release(&req->iopoll_completed, 1);
 	} else if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
 		if (WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED))
 			return;
@@ -258,10 +258,11 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 	if (io_is_compat(ctx))
 		issue_flags |= IO_URING_F_COMPAT;
 
 	if (ctx->flags & IORING_SETUP_IOPOLL) {
 		if (!file->f_op->uring_cmd_iopoll)
 			return -EOPNOTSUPP;
+		req->flags |= REQ_F_IOPOLL;
 		issue_flags |= IO_URING_F_IOPOLL;
 		req->iopoll_completed = 0;
 		if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
 			/* make sure every req only blocks once */
 			req->flags &= ~REQ_F_IOPOLL_STATE;
-- 
2.45.2