* enable iomap dio write completions from interrupt context
@ 2025-11-12 7:21 Christoph Hellwig
2025-11-12 7:21 ` [PATCH 1/5] fs, iomap: remove IOCB_DIO_CALLER_COMP Christoph Hellwig
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
Hi all,
Currently iomap defers all write completions to workqueue context. This
was based on my assumption, made to simplify the code vs the old
direct-io.c, that no one cares about the latency of those completions.
It turns out someone cared: Avi reported a lot of context switches with
ScyllaDB, which at least in older kernels with workqueue scheduling
issues caused really high tail latencies.
Fortunately allowing the direct completions is pretty easy with all the
other iomap changes we had since.
While doing this I've also found dead code, which gets removed
(patch 1), and an incorrect assumption in zonefs that read completions
are called in user context, which it relies on for its error handling.
Fix this by always calling error completions from user context
(patch 2).
Against the vfs/vfs-6.19.iomap branch.
Diffstat:
Documentation/filesystems/iomap/operations.rst | 4
fs/backing-file.c | 6 -
fs/iomap/direct-io.c | 149 +++++++++++--------------
include/linux/fs.h | 43 +------
io_uring/rw.c | 16 --
5 files changed, 81 insertions(+), 137 deletions(-)
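For readers following the series, the behavioral change the cover letter describes can be sketched in plain C. All names and flag bits below are illustrative stand-ins, not the kernel's actual definitions: before the series every write completion was punted to a workqueue; afterwards, pure overwrites with no remaining sync work can complete straight from interrupt context.

```c
#include <stdbool.h>

/* Illustrative flag bits, not the kernel's actual values. */
#define DIO_WRITE      (1u << 0)
#define DIO_NEED_SYNC  (1u << 1)  /* cache flush required at completion */
#define DIO_SIZE_EXT   (1u << 2)  /* write extends i_size */

/*
 * Sketch of the post-series rule: reads always complete inline, and
 * writes complete inline only when nothing needs a user context.
 */
static bool completes_inline_after_series(unsigned int flags)
{
	if (!(flags & DIO_WRITE))
		return true;	/* reads could always complete inline */
	return !(flags & (DIO_NEED_SYNC | DIO_SIZE_EXT));
}
```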
* [PATCH 1/5] fs, iomap: remove IOCB_DIO_CALLER_COMP
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
@ 2025-11-12 7:21 ` Christoph Hellwig
2025-11-12 7:21 ` [PATCH 2/5] iomap: always run error completions in user context Christoph Hellwig
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
This was added by commit 099ada2c8726 ("io_uring/rw: add write support
for IOCB_DIO_CALLER_COMP") and disabled a little later by commit
838b35bb6a89 ("io_uring/rw: disable IOCB_DIO_CALLER_COMP") because it
didn't work. Remove all the related code that sat unused for 2 years.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
.../filesystems/iomap/operations.rst | 4 --
fs/backing-file.c | 6 --
fs/iomap/direct-io.c | 56 +------------------
include/linux/fs.h | 43 +++-----------
io_uring/rw.c | 16 +-----
5 files changed, 13 insertions(+), 112 deletions(-)
diff --git a/Documentation/filesystems/iomap/operations.rst b/Documentation/filesystems/iomap/operations.rst
index c88205132039..5558a44891bb 100644
--- a/Documentation/filesystems/iomap/operations.rst
+++ b/Documentation/filesystems/iomap/operations.rst
@@ -488,10 +488,6 @@ These ``struct kiocb`` flags are significant for direct I/O with iomap:
Only meaningful for asynchronous I/O, and only if the entire I/O can
be issued as a single ``struct bio``.
- * ``IOCB_DIO_CALLER_COMP``: Try to run I/O completion from the caller's
- process context.
- See ``linux/fs.h`` for more details.
-
Filesystems should call ``iomap_dio_rw`` from ``->read_iter`` and
``->write_iter``, and set ``FMODE_CAN_ODIRECT`` in the ``->open``
function for the file.
diff --git a/fs/backing-file.c b/fs/backing-file.c
index 15a7f8031084..2a86bb6fcd13 100644
--- a/fs/backing-file.c
+++ b/fs/backing-file.c
@@ -227,12 +227,6 @@ ssize_t backing_file_write_iter(struct file *file, struct iov_iter *iter,
!(file->f_mode & FMODE_CAN_ODIRECT))
return -EINVAL;
- /*
- * Stacked filesystems don't support deferred completions, don't copy
- * this property in case it is set by the issuer.
- */
- flags &= ~IOCB_DIO_CALLER_COMP;
-
old_cred = override_creds(ctx->cred);
if (is_sync_kiocb(iocb)) {
rwf_t rwf = iocb_to_rw_flags(flags);
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 8b2f9fb89eb3..7659db85083a 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -16,8 +16,7 @@
* Private flags for iomap_dio, must not overlap with the public ones in
* iomap.h:
*/
-#define IOMAP_DIO_NO_INVALIDATE (1U << 25)
-#define IOMAP_DIO_CALLER_COMP (1U << 26)
+#define IOMAP_DIO_NO_INVALIDATE (1U << 26)
#define IOMAP_DIO_INLINE_COMP (1U << 27)
#define IOMAP_DIO_WRITE_THROUGH (1U << 28)
#define IOMAP_DIO_NEED_SYNC (1U << 29)
@@ -140,11 +139,6 @@ ssize_t iomap_dio_complete(struct iomap_dio *dio)
}
EXPORT_SYMBOL_GPL(iomap_dio_complete);
-static ssize_t iomap_dio_deferred_complete(void *data)
-{
- return iomap_dio_complete(data);
-}
-
static void iomap_dio_complete_work(struct work_struct *work)
{
struct iomap_dio *dio = container_of(work, struct iomap_dio, aio.work);
@@ -182,29 +176,6 @@ static void iomap_dio_done(struct iomap_dio *dio)
} else if (dio->flags & IOMAP_DIO_INLINE_COMP) {
WRITE_ONCE(iocb->private, NULL);
iomap_dio_complete_work(&dio->aio.work);
- } else if (dio->flags & IOMAP_DIO_CALLER_COMP) {
- /*
- * If this dio is flagged with IOMAP_DIO_CALLER_COMP, then
- * schedule our completion that way to avoid an async punt to a
- * workqueue.
- */
- /* only polled IO cares about private cleared */
- iocb->private = dio;
- iocb->dio_complete = iomap_dio_deferred_complete;
-
- /*
- * Invoke ->ki_complete() directly. We've assigned our
- * dio_complete callback handler, and since the issuer set
- * IOCB_DIO_CALLER_COMP, we know their ki_complete handler will
- * notice ->dio_complete being set and will defer calling that
- * handler until it can be done from a safe task context.
- *
- * Note that the 'res' being passed in here is not important
- * for this case. The actual completion value of the request
- * will be gotten from dio_complete when that is run by the
- * issuer.
- */
- iocb->ki_complete(iocb, 0);
} else {
struct inode *inode = file_inode(iocb->ki_filp);
@@ -261,7 +232,6 @@ u32 iomap_finish_ioend_direct(struct iomap_ioend *ioend)
dio->flags |= IOMAP_DIO_INLINE_COMP;
dio->flags |= IOMAP_DIO_NO_INVALIDATE;
}
- dio->flags &= ~IOMAP_DIO_CALLER_COMP;
iomap_dio_done(dio);
}
@@ -380,19 +350,6 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
if (!(bio_opf & REQ_FUA))
dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
-
- /*
- * We can only do deferred completion for pure overwrites that
- * don't require additional I/O at completion time.
- *
- * This rules out writes that need zeroing or extent conversion,
- * extend the file size, or issue metadata I/O or cache flushes
- * during completion processing.
- */
- if (need_zeroout || (pos >= i_size_read(inode)) ||
- ((dio->flags & IOMAP_DIO_NEED_SYNC) &&
- !(bio_opf & REQ_FUA)))
- dio->flags &= ~IOMAP_DIO_CALLER_COMP;
} else {
bio_opf |= REQ_OP_READ;
}
@@ -413,7 +370,7 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
* ones we set for inline and deferred completions. If none of those
* are available for this IO, clear the polled flag.
*/
- if (!(dio->flags & (IOMAP_DIO_INLINE_COMP|IOMAP_DIO_CALLER_COMP)))
+ if (!(dio->flags & IOMAP_DIO_INLINE_COMP))
dio->iocb->ki_flags &= ~IOCB_HIPRI;
if (need_zeroout) {
@@ -669,15 +626,6 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
iomi.flags |= IOMAP_WRITE;
dio->flags |= IOMAP_DIO_WRITE;
- /*
- * Flag as supporting deferred completions, if the issuer
- * groks it. This can avoid a workqueue punt for writes.
- * We may later clear this flag if we need to do other IO
- * as part of this IO completion.
- */
- if (iocb->ki_flags & IOCB_DIO_CALLER_COMP)
- dio->flags |= IOMAP_DIO_CALLER_COMP;
-
if (dio_flags & IOMAP_DIO_OVERWRITE_ONLY) {
ret = -EAGAIN;
if (iomi.pos >= dio->i_size ||
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c895146c1444..e210d2d8af53 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -367,23 +367,9 @@ struct readahead_control;
#define IOCB_NOIO (1 << 20)
/* can use bio alloc cache */
#define IOCB_ALLOC_CACHE (1 << 21)
-/*
- * IOCB_DIO_CALLER_COMP can be set by the iocb owner, to indicate that the
- * iocb completion can be passed back to the owner for execution from a safe
- * context rather than needing to be punted through a workqueue. If this
- * flag is set, the bio completion handling may set iocb->dio_complete to a
- * handler function and iocb->private to context information for that handler.
- * The issuer should call the handler with that context information from task
- * context to complete the processing of the iocb. Note that while this
- * provides a task context for the dio_complete() callback, it should only be
- * used on the completion side for non-IO generating completions. It's fine to
- * call blocking functions from this callback, but they should not wait for
- * unrelated IO (like cache flushing, new IO generation, etc).
- */
-#define IOCB_DIO_CALLER_COMP (1 << 22)
/* kiocb is a read or write operation submitted by fs/aio.c. */
-#define IOCB_AIO_RW (1 << 23)
-#define IOCB_HAS_METADATA (1 << 24)
+#define IOCB_AIO_RW (1 << 22)
+#define IOCB_HAS_METADATA (1 << 23)
/* for use in trace events */
#define TRACE_IOCB_STRINGS \
@@ -400,7 +386,6 @@ struct readahead_control;
{ IOCB_WAITQ, "WAITQ" }, \
{ IOCB_NOIO, "NOIO" }, \
{ IOCB_ALLOC_CACHE, "ALLOC_CACHE" }, \
- { IOCB_DIO_CALLER_COMP, "CALLER_COMP" }, \
{ IOCB_AIO_RW, "AIO_RW" }, \
{ IOCB_HAS_METADATA, "AIO_HAS_METADATA" }
@@ -412,23 +397,13 @@ struct kiocb {
int ki_flags;
u16 ki_ioprio; /* See linux/ioprio.h */
u8 ki_write_stream;
- union {
- /*
- * Only used for async buffered reads, where it denotes the
- * page waitqueue associated with completing the read. Valid
- * IFF IOCB_WAITQ is set.
- */
- struct wait_page_queue *ki_waitq;
- /*
- * Can be used for O_DIRECT IO, where the completion handling
- * is punted back to the issuer of the IO. May only be set
- * if IOCB_DIO_CALLER_COMP is set by the issuer, and the issuer
- * must then check for presence of this handler when ki_complete
- * is invoked. The data passed in to this handler must be
- * assigned to ->private when dio_complete is assigned.
- */
- ssize_t (*dio_complete)(void *data);
- };
+
+ /*
+ * Only used for async buffered reads, where it denotes the page
+ * waitqueue associated with completing the read.
+ * Valid IFF IOCB_WAITQ is set.
+ */
+ struct wait_page_queue *ki_waitq;
};
static inline bool is_sync_kiocb(struct kiocb *kiocb)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 08882648d569..4d0ab8f50d14 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -277,7 +277,6 @@ static int __io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
} else {
rw->kiocb.ki_ioprio = get_current_ioprio();
}
- rw->kiocb.dio_complete = NULL;
rw->kiocb.ki_flags = 0;
rw->kiocb.ki_write_stream = READ_ONCE(sqe->write_stream);
@@ -566,15 +565,6 @@ static inline int io_fixup_rw_res(struct io_kiocb *req, long res)
void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw)
{
- struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
- struct kiocb *kiocb = &rw->kiocb;
-
- if ((kiocb->ki_flags & IOCB_DIO_CALLER_COMP) && kiocb->dio_complete) {
- long res = kiocb->dio_complete(rw->kiocb.private);
-
- io_req_set_res(req, io_fixup_rw_res(req, res), 0);
- }
-
io_req_io_end(req);
if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
@@ -589,10 +579,8 @@ static void io_complete_rw(struct kiocb *kiocb, long res)
struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
struct io_kiocb *req = cmd_to_io_kiocb(rw);
- if (!kiocb->dio_complete || !(kiocb->ki_flags & IOCB_DIO_CALLER_COMP)) {
- __io_complete_rw_common(req, res);
- io_req_set_res(req, io_fixup_rw_res(req, res), 0);
- }
+ __io_complete_rw_common(req, res);
+ io_req_set_res(req, io_fixup_rw_res(req, res), 0);
req->io_task_work.func = io_req_rw_complete;
__io_req_task_work_add(req, IOU_F_TWQ_LAZY_WAKE);
}
--
2.47.3
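The handshake that patch 1 deletes can be sketched as follows. This is an illustrative stand-in, not the kernel's actual kiocb: the issuer (io_uring) was expected to notice a completion callback being set and invoke it later from task context, ignoring the result passed to ki_complete. Since the flag was disabled by 838b35bb6a89, this path never ran, which is why the series removes it.

```c
#include <stddef.h>

/* Minimal stand-in for the kiocb handshake patch 1 deletes. */
struct kiocb_sketch {
	long (*dio_complete)(void *data);	/* set at bio completion */
	void *private_data;			/* context for dio_complete */
};

/*
 * Sketch of the issuer side: if dio_complete was assigned, run it from
 * task context and use its return value instead of the earlier res.
 */
static long issuer_finish(struct kiocb_sketch *iocb, long res)
{
	if (iocb->dio_complete)
		return iocb->dio_complete(iocb->private_data);
	return res;
}
```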
* [PATCH 2/5] iomap: always run error completions in user context
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
2025-11-12 7:21 ` [PATCH 1/5] fs, iomap: remove IOCB_DIO_CALLER_COMP Christoph Hellwig
@ 2025-11-12 7:21 ` Christoph Hellwig
2025-11-12 7:21 ` [PATCH 3/5] iomap: rework REQ_FUA selection Christoph Hellwig
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
At least zonefs expects error completions to be able to sleep. Because
error completions aren't performance critical, just defer them to workqueue
context unconditionally.
Fixes: 8dcc1a9d90c1 ("fs: New zonefs file system")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/direct-io.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 7659db85083a..765ab6dd6637 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -173,7 +173,18 @@ static void iomap_dio_done(struct iomap_dio *dio)
WRITE_ONCE(dio->submit.waiter, NULL);
blk_wake_io_task(waiter);
- } else if (dio->flags & IOMAP_DIO_INLINE_COMP) {
+ return;
+ }
+
+ /*
+ * Always run error completions in user context. These are not
+ * performance critical and some code relies on taking sleeping locks
+ * for error handling.
+ */
+ if (dio->error)
+ dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+
+ if (dio->flags & IOMAP_DIO_INLINE_COMP) {
WRITE_ONCE(iocb->private, NULL);
iomap_dio_complete_work(&dio->aio.work);
} else {
--
2.47.3
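Patch 2's rule is simple enough to state as a pure function. The flag name below is an illustrative stand-in for the private iomap dio flag: on any I/O error, inline completion is disabled so that ->end_io handlers (like zonefs's) may take sleeping locks during error handling.

```c
/* Illustrative bit, not the kernel's actual value. */
#define DIO_INLINE_COMP (1u << 0)

/*
 * Sketch of patch 2: an error unconditionally forces the completion
 * onto the workqueue by clearing the inline-completion flag.
 */
static unsigned int apply_error_rule(unsigned int flags, int error)
{
	if (error)
		flags &= ~DIO_INLINE_COMP;
	return flags;
}
```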
* [PATCH 3/5] iomap: rework REQ_FUA selection
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
2025-11-12 7:21 ` [PATCH 1/5] fs, iomap: remove IOCB_DIO_CALLER_COMP Christoph Hellwig
2025-11-12 7:21 ` [PATCH 2/5] iomap: always run error completions in user context Christoph Hellwig
@ 2025-11-12 7:21 ` Christoph Hellwig
2025-11-12 7:21 ` [PATCH 4/5] iomap: support write completions from interrupt context Christoph Hellwig
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
The way iomap_dio_can_use_fua and its caller are structured is a bit
confusing, as the main guarding condition is hidden in the helper and
the secondary conditions are split between caller and callee.
Refactor the code so that there is a main IOMAP_DIO_WRITE_THROUGH
guard in iomap_dio_bio_iter, which is directly tied to clearing it
when not supported, and a helper that just checks if the I/O is a
pure overwrite.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/direct-io.c | 36 ++++++++++++++++++++----------------
1 file changed, 20 insertions(+), 16 deletions(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 765ab6dd6637..c4a883fa8ea5 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -288,20 +288,14 @@ static int iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
}
/*
- * Use a FUA write if we need datasync semantics and this is a pure data I/O
- * that doesn't require any metadata updates (including after I/O completion
- * such as unwritten extent conversion) and the underlying device either
- * doesn't have a volatile write cache or supports FUA.
- * This allows us to avoid cache flushes on I/O completion.
+ * Check if this mapping is a pure overwrite that does not need metadata updates
+ * at I/O completion time.
*/
-static inline bool iomap_dio_can_use_fua(const struct iomap *iomap,
- struct iomap_dio *dio)
+static inline bool iomap_dio_is_overwrite(const struct iomap *iomap)
{
- if (iomap->flags & (IOMAP_F_SHARED | IOMAP_F_DIRTY))
+ if (iomap->type != IOMAP_MAPPED)
return false;
- if (!(dio->flags & IOMAP_DIO_WRITE_THROUGH))
- return false;
- return !bdev_write_cache(iomap->bdev) || bdev_fua(iomap->bdev);
+ return !(iomap->flags & (IOMAP_F_NEW | IOMAP_F_SHARED));
}
static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
@@ -355,12 +349,22 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
if (iomap->flags & IOMAP_F_NEW)
need_zeroout = true;
- else if (iomap->type == IOMAP_MAPPED &&
- iomap_dio_can_use_fua(iomap, dio))
- bio_opf |= REQ_FUA;
- if (!(bio_opf & REQ_FUA))
- dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
+ /*
+ * Use a FUA write if we need datasync semantics and this is a
+ * pure overwrite that doesn't require any metadata updates.
+ *
+ * This allows us to avoid cache flushes on I/O completion.
+ */
+ if (dio->flags & IOMAP_DIO_WRITE_THROUGH) {
+ if (iomap_dio_is_overwrite(iomap) &&
+ !(iomap->flags & IOMAP_F_DIRTY) &&
+ (!bdev_write_cache(iomap->bdev) ||
+ bdev_fua(iomap->bdev)))
+ bio_opf |= REQ_FUA;
+ else
+ dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
+ }
} else {
bio_opf |= REQ_OP_READ;
}
--
2.47.3
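The refactored REQ_FUA decision can be mirrored as a standalone predicate. The struct below is an illustrative flattening of the mapping and device state the patch actually consults (iomap flags, bdev_write_cache, bdev_fua), not a kernel type:

```c
#include <stdbool.h>

/* Illustrative snapshot of the state the refactored check consults. */
struct dio_state {
	bool write_through;	/* datasync semantics requested */
	bool pure_overwrite;	/* mapped, not new/shared */
	bool inode_dirty;	/* metadata dirty (IOMAP_F_DIRTY) */
	bool bdev_write_cache;	/* device has a volatile write cache */
	bool bdev_fua;		/* device supports FUA writes */
};

/* Sketch of the restructured check in iomap_dio_bio_iter. */
static bool use_fua(const struct dio_state *s)
{
	if (!s->write_through)
		return false;
	return s->pure_overwrite && !s->inode_dirty &&
	       (!s->bdev_write_cache || s->bdev_fua);
}
```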
* [PATCH 4/5] iomap: support write completions from interrupt context
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
` (2 preceding siblings ...)
2025-11-12 7:21 ` [PATCH 3/5] iomap: rework REQ_FUA selection Christoph Hellwig
@ 2025-11-12 7:21 ` Christoph Hellwig
2025-11-12 7:21 ` [PATCH 5/5] iomap: invert the polarity of IOMAP_DIO_INLINE_COMP Christoph Hellwig
2025-11-12 8:43 ` enable iomap dio write completions from interrupt context Damien Le Moal
5 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
Completions for pure overwrites don't need to be deferred to a
workqueue as there is no work to be done, or at least no work that
needs a user context. Set IOMAP_DIO_INLINE_COMP by default for writes,
like we already do for reads, and then clear it for all the cases that
actually do need a user context for completions: updating the inode
size or recording updates to the logical-to-physical mapping.
I've audited all users of the ->end_io callback, and they only require
user context for I/O that involves unwritten extents, COW, size
extensions, or error handling and all those are still run from workqueue
context.
This restores the behavior of the old pre-iomap direct I/O code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/direct-io.c | 55 +++++++++++++++++++++++++++++++++++---------
1 file changed, 44 insertions(+), 11 deletions(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index c4a883fa8ea5..df313232f422 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -184,6 +184,21 @@ static void iomap_dio_done(struct iomap_dio *dio)
if (dio->error)
dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ /*
+ * Never invalidate pages from this context to avoid deadlocks with
+ * buffered I/O completions when called from the ioend workqueue,
+ * or avoid sleeping when called directly from ->bi_end_io.
+ * Tough luck if you hit the tiny race with someone dirtying the range
+ * right between this check and the actual completion.
+ */
+ if ((dio->flags & IOMAP_DIO_WRITE) &&
+ (dio->flags & IOMAP_DIO_INLINE_COMP)) {
+ if (dio->iocb->ki_filp->f_mapping->nrpages)
+ dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ else
+ dio->flags |= IOMAP_DIO_NO_INVALIDATE;
+ }
+
if (dio->flags & IOMAP_DIO_INLINE_COMP) {
WRITE_ONCE(iocb->private, NULL);
iomap_dio_complete_work(&dio->aio.work);
@@ -234,15 +249,9 @@ u32 iomap_finish_ioend_direct(struct iomap_ioend *ioend)
/*
* Try to avoid another context switch for the completion given
* that we are already called from the ioend completion
- * workqueue, but never invalidate pages from this thread to
- * avoid deadlocks with buffered I/O completions. Tough luck if
- * you hit the tiny race with someone dirtying the range now
- * between this check and the actual completion.
+ * workqueue.
*/
- if (!dio->iocb->ki_filp->f_mapping->nrpages) {
- dio->flags |= IOMAP_DIO_INLINE_COMP;
- dio->flags |= IOMAP_DIO_NO_INVALIDATE;
- }
+ dio->flags |= IOMAP_DIO_INLINE_COMP;
iomap_dio_done(dio);
}
@@ -365,6 +374,16 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
else
dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
}
+
+ /*
+ * We can only do inline completion for pure overwrites that
+ * don't require additional I/O at completion time.
+ *
+ * This rules out writes that need zeroing or extent conversion,
+ * or extend the file size.
+ */
+ if (!iomap_dio_is_overwrite(iomap))
+ dio->flags &= ~IOMAP_DIO_INLINE_COMP;
} else {
bio_opf |= REQ_OP_READ;
}
@@ -624,10 +643,13 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
if (dio_flags & IOMAP_DIO_FSBLOCK_ALIGNED)
dio->flags |= IOMAP_DIO_FSBLOCK_ALIGNED;
- if (iov_iter_rw(iter) == READ) {
- /* reads can always complete inline */
- dio->flags |= IOMAP_DIO_INLINE_COMP;
+ /*
+ * Try to complete inline if we can. For reads this is always possible,
+ * but for writes we'll end up clearing this more often than not.
+ */
+ dio->flags |= IOMAP_DIO_INLINE_COMP;
+ if (iov_iter_rw(iter) == READ) {
if (iomi.pos >= dio->i_size)
goto out_free_dio;
@@ -669,6 +691,12 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
dio->flags |= IOMAP_DIO_WRITE_THROUGH;
}
+ /*
+	 * Inode size updates must happen from process context.
+ */
+ if (iomi.pos + iomi.len > dio->i_size)
+ dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+
/*
* Try to invalidate cache pages for the range we are writing.
* If this invalidation fails, let the caller fall back to
@@ -741,9 +769,14 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
* If all the writes we issued were already written through to the
* media, we don't need to flush the cache on IO completion. Clear the
* sync flag for this case.
+ *
+ * Otherwise clear the inline completion flag if any sync work is
+ * needed, as that needs to be performed from process context.
*/
if (dio->flags & IOMAP_DIO_WRITE_THROUGH)
dio->flags &= ~IOMAP_DIO_NEED_SYNC;
+ else if (dio->flags & IOMAP_DIO_NEED_SYNC)
+ dio->flags &= ~IOMAP_DIO_INLINE_COMP;
/*
* We are about to drop our additional submission reference, which
--
2.47.3
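Taken together, patch 4's checks reduce to a single eligibility predicate for writes. The struct below is an illustrative flattening of the conditions scattered across the patch (overwrite check at submission, size and sync checks in __iomap_dio_rw, cached-pages check at completion), not a kernel type:

```c
#include <stdbool.h>

/* Illustrative inputs to the inline-completion checks in patch 4. */
struct write_io {
	bool pure_overwrite;	/* no zeroing or unwritten conversion */
	bool extends_size;	/* pos + len > i_size */
	bool needs_sync;	/* sync work not covered by FUA writes */
	bool has_cached_pages;	/* mapping->nrpages != 0 at completion */
};

/*
 * Sketch: a write may complete inline (from interrupt context) only
 * when none of the conditions requiring a user context apply.
 */
static bool write_completes_inline(const struct write_io *io)
{
	return io->pure_overwrite && !io->extends_size &&
	       !io->needs_sync && !io->has_cached_pages;
}
```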
* [PATCH 5/5] iomap: invert the polarity of IOMAP_DIO_INLINE_COMP
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
` (3 preceding siblings ...)
2025-11-12 7:21 ` [PATCH 4/5] iomap: support write completions from interrupt context Christoph Hellwig
@ 2025-11-12 7:21 ` Christoph Hellwig
2025-11-12 8:43 ` enable iomap dio write completions from interrupt context Damien Le Moal
5 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 7:21 UTC (permalink / raw)
To: Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Damien Le Moal, Naohiro Aota, Johannes Thumshirn, linux-xfs,
linux-fsdevel, io-uring
Replace IOMAP_DIO_INLINE_COMP with a flag to indicate that the
completion should be offloaded. This removes a tiny bit of boilerplate
code, but more importantly makes the code easier to follow, as the new
flag gets set most of the time and cleared in only one place, while it
was the inverse for the old version.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/direct-io.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index df313232f422..80ec3ff4e5dd 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -17,7 +17,7 @@
* iomap.h:
*/
#define IOMAP_DIO_NO_INVALIDATE (1U << 26)
-#define IOMAP_DIO_INLINE_COMP (1U << 27)
+#define IOMAP_DIO_COMP_WORK (1U << 27)
#define IOMAP_DIO_WRITE_THROUGH (1U << 28)
#define IOMAP_DIO_NEED_SYNC (1U << 29)
#define IOMAP_DIO_WRITE (1U << 30)
@@ -182,7 +182,7 @@ static void iomap_dio_done(struct iomap_dio *dio)
* for error handling.
*/
if (dio->error)
- dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ dio->flags |= IOMAP_DIO_COMP_WORK;
/*
* Never invalidate pages from this context to avoid deadlocks with
@@ -192,17 +192,14 @@ static void iomap_dio_done(struct iomap_dio *dio)
* right between this check and the actual completion.
*/
if ((dio->flags & IOMAP_DIO_WRITE) &&
- (dio->flags & IOMAP_DIO_INLINE_COMP)) {
+ !(dio->flags & IOMAP_DIO_COMP_WORK)) {
if (dio->iocb->ki_filp->f_mapping->nrpages)
- dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ dio->flags |= IOMAP_DIO_COMP_WORK;
else
dio->flags |= IOMAP_DIO_NO_INVALIDATE;
}
- if (dio->flags & IOMAP_DIO_INLINE_COMP) {
- WRITE_ONCE(iocb->private, NULL);
- iomap_dio_complete_work(&dio->aio.work);
- } else {
+ if (dio->flags & IOMAP_DIO_COMP_WORK) {
struct inode *inode = file_inode(iocb->ki_filp);
/*
@@ -213,7 +210,11 @@ static void iomap_dio_done(struct iomap_dio *dio)
*/
INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
+ return;
}
+
+ WRITE_ONCE(iocb->private, NULL);
+ iomap_dio_complete_work(&dio->aio.work);
}
void iomap_dio_bio_end_io(struct bio *bio)
@@ -251,7 +252,7 @@ u32 iomap_finish_ioend_direct(struct iomap_ioend *ioend)
* that we are already called from the ioend completion
* workqueue.
*/
- dio->flags |= IOMAP_DIO_INLINE_COMP;
+ dio->flags &= ~IOMAP_DIO_COMP_WORK;
iomap_dio_done(dio);
}
@@ -383,7 +384,7 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
* or extend the file size.
*/
if (!iomap_dio_is_overwrite(iomap))
- dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ dio->flags |= IOMAP_DIO_COMP_WORK;
} else {
bio_opf |= REQ_OP_READ;
}
@@ -404,7 +405,7 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
* ones we set for inline and deferred completions. If none of those
* are available for this IO, clear the polled flag.
*/
- if (!(dio->flags & IOMAP_DIO_INLINE_COMP))
+ if (dio->flags & IOMAP_DIO_COMP_WORK)
dio->iocb->ki_flags &= ~IOCB_HIPRI;
if (need_zeroout) {
@@ -643,12 +644,6 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
if (dio_flags & IOMAP_DIO_FSBLOCK_ALIGNED)
dio->flags |= IOMAP_DIO_FSBLOCK_ALIGNED;
- /*
- * Try to complete inline if we can. For reads this is always possible,
- * but for writes we'll end up clearing this more often than not.
- */
- dio->flags |= IOMAP_DIO_INLINE_COMP;
-
if (iov_iter_rw(iter) == READ) {
if (iomi.pos >= dio->i_size)
goto out_free_dio;
@@ -695,7 +690,7 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
* Inode size updates must happen from process context.
*/
if (iomi.pos + iomi.len > dio->i_size)
- dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ dio->flags |= IOMAP_DIO_COMP_WORK;
/*
* Try to invalidate cache pages for the range we are writing.
@@ -776,7 +771,7 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
if (dio->flags & IOMAP_DIO_WRITE_THROUGH)
dio->flags &= ~IOMAP_DIO_NEED_SYNC;
else if (dio->flags & IOMAP_DIO_NEED_SYNC)
- dio->flags &= ~IOMAP_DIO_INLINE_COMP;
+ dio->flags |= IOMAP_DIO_COMP_WORK;
/*
* We are about to drop our additional submission reference, which
--
2.47.3
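The polarity inversion in patch 5 is behavior-preserving, which can be checked with a small sketch. The flag names are illustrative stand-ins: the old scheme starts with the inline flag set and clears it when deferral is needed; the new one starts clear and sets a work flag instead.

```c
#include <stdbool.h>

#define DIO_INLINE_COMP (1u << 0)	/* old flag (illustrative) */
#define DIO_COMP_WORK   (1u << 1)	/* new flag (illustrative) */

/* Old scheme: default to inline, clear the flag when work is needed. */
static bool old_defers(bool needs_work)
{
	unsigned int f = DIO_INLINE_COMP;

	if (needs_work)
		f &= ~DIO_INLINE_COMP;
	return !(f & DIO_INLINE_COMP);
}

/* New scheme: default to inline, set the work flag when needed. */
static bool new_defers(bool needs_work)
{
	unsigned int f = 0;

	if (needs_work)
		f |= DIO_COMP_WORK;
	return f & DIO_COMP_WORK;
}
```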
* Re: enable iomap dio write completions from interrupt context
2025-11-12 7:21 enable iomap dio write completions from interrupt context Christoph Hellwig
` (4 preceding siblings ...)
2025-11-12 7:21 ` [PATCH 5/5] iomap: invert the polarity of IOMAP_DIO_INLINE_COMP Christoph Hellwig
@ 2025-11-12 8:43 ` Damien Le Moal
2025-11-12 8:44 ` Christoph Hellwig
5 siblings, 1 reply; 9+ messages in thread
From: Damien Le Moal @ 2025-11-12 8:43 UTC (permalink / raw)
To: Christoph Hellwig, Christian Brauner
Cc: Alexander Viro, Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity,
Naohiro Aota, Johannes Thumshirn, linux-xfs, linux-fsdevel,
io-uring
On 11/12/25 16:21, Christoph Hellwig wrote:
> Hi all,
>
> Currently iomap defers all write completions to workqueue context. This
> was based on my assumption, made to simplify the code vs the old
> direct-io.c, that no one cares about the latency of those completions.
> It turns out someone cared: Avi reported a lot of context switches with
> ScyllaDB, which at least in older kernels with workqueue scheduling
> issues caused really high tail latencies.
>
> Fortunately allowing the direct completions is pretty easy with all the
> other iomap changes we had since.
>
> While doing this I've also found dead code, which gets removed
> (patch 1), and an incorrect assumption in zonefs that read completions
> are called in user context, which it relies on for its error handling.
> Fix this by always calling error completions from user context
> (patch 2).
>
> Against the vfs/vfs-6.19.iomap branch.
>
> Diffstat:
> Documentation/filesystems/iomap/operations.rst | 4
> fs/backing-file.c | 6 -
> fs/iomap/direct-io.c | 149 +++++++++++--------------
> include/linux/fs.h | 43 +------
> io_uring/rw.c | 16 --
> 5 files changed, 81 insertions(+), 137 deletions(-)
Where is the zonefs change? Missing a patch?
--
Damien Le Moal
Western Digital Research
* Re: enable iomap dio write completions from interrupt context
2025-11-12 8:43 ` enable iomap dio write completions from interrupt context Damien Le Moal
@ 2025-11-12 8:44 ` Christoph Hellwig
2025-11-12 8:46 ` Damien Le Moal
0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2025-11-12 8:44 UTC (permalink / raw)
To: Damien Le Moal
Cc: Christoph Hellwig, Christian Brauner, Alexander Viro,
Darrick J. Wong, Jan Kara, Jens Axboe, Avi Kivity, Naohiro Aota,
Johannes Thumshirn, linux-xfs, linux-fsdevel, io-uring
On Wed, Nov 12, 2025 at 05:43:05PM +0900, Damien Le Moal wrote:
> Where is the zonefs change ? Missing a patch ?
There's no zonefs change. The zonefs fix is the core code now calling
the read ->end_io handler from workqueue context after an I/O error, and
thus now doing what zonefs incorrectly assumed before.
* Re: enable iomap dio write completions from interrupt context
2025-11-12 8:44 ` Christoph Hellwig
@ 2025-11-12 8:46 ` Damien Le Moal
0 siblings, 0 replies; 9+ messages in thread
From: Damien Le Moal @ 2025-11-12 8:46 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Christian Brauner, Alexander Viro, Darrick J. Wong, Jan Kara,
Jens Axboe, Avi Kivity, Naohiro Aota, Johannes Thumshirn,
linux-xfs, linux-fsdevel, io-uring
On 11/12/25 17:44, Christoph Hellwig wrote:
> On Wed, Nov 12, 2025 at 05:43:05PM +0900, Damien Le Moal wrote:
>> Where is the zonefs change ? Missing a patch ?
>
> There's not zonefs change. The zonefs fix is the core code now calling
> the read ->end_io handler from workqueue context after an I/O error, and
> thus now doing what zonefs incorrectly assumed before.
OK. Thanks for the clarification. I was thinking the other way around :)
--
Damien Le Moal
Western Digital Research