* [PATCH v6 01/16] mm: Move starting of background writeback into the main balancing loop
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 02/16] mm: Move updates of dirty_exceeded into one place Stefan Roesch
` (15 subsequent siblings)
16 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
From: Jan Kara <[email protected]>
We start background writeback if we are over the background threshold
after exiting the main loop in balance_dirty_pages(). This may result in
basing the decision on already stale values (we may have slept for a
significant amount of time) and it is also inconvenient for the
refactoring needed for async dirty throttling. Move the check into the
main waiting loop.
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
---
mm/page-writeback.c | 31 ++++++++++++++-----------------
1 file changed, 14 insertions(+), 17 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 7e2da284e427..8e5e003f0093 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1618,6 +1618,19 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
}
}
+ /*
+ * In laptop mode, we wait until hitting the higher threshold
+ * before starting background writeout, and then write out all
+ * the way down to the lower threshold. So slow writers cause
+ * minimal disk activity.
+ *
+ * In normal mode, we start background writeout at the lower
+ * background_thresh, to keep the amount of dirty memory low.
+ */
+ if (!laptop_mode && nr_reclaimable > gdtc->bg_thresh &&
+ !writeback_in_progress(wb))
+ wb_start_background_writeback(wb);
+
/*
* Throttle it only when the background writeback cannot
* catch-up. This avoids (excessively) small writeouts
@@ -1648,6 +1661,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
break;
}
+ /* Start writeback even when in laptop mode */
if (unlikely(!writeback_in_progress(wb)))
wb_start_background_writeback(wb);
@@ -1814,23 +1828,6 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
if (!dirty_exceeded && wb->dirty_exceeded)
wb->dirty_exceeded = 0;
-
- if (writeback_in_progress(wb))
- return;
-
- /*
- * In laptop mode, we wait until hitting the higher threshold before
- * starting background writeout, and then write out all the way down
- * to the lower threshold. So slow writers cause minimal disk activity.
- *
- * In normal mode, we start background writeout at the lower
- * background_thresh, to keep the amount of dirty memory low.
- */
- if (laptop_mode)
- return;
-
- if (nr_reclaimable > gdtc->bg_thresh)
- wb_start_background_writeback(wb);
}
static DEFINE_PER_CPU(int, bdp_ratelimits);
--
2.30.2
* [PATCH v6 02/16] mm: Move updates of dirty_exceeded into one place
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 01/16] mm: Move starting of background writeback into the main balancing loop Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 03/16] mm: Add balance_dirty_pages_ratelimited_flags() function Stefan Roesch
` (14 subsequent siblings)
16 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
From: Jan Kara <[email protected]>
The transition of wb->dirty_exceeded from 0 to 1 happens before we go to
sleep in balance_dirty_pages(), while the transition from 1 to 0 happens
when exiting from balance_dirty_pages(), possibly based on old values.
This does not make a lot of sense since wb->dirty_exceeded should simply
reflect whether wb is over the dirty limit, and hence whether entry into
balance_dirty_pages() should be ratelimited less aggressively. Move the
two updates together.
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
---
mm/page-writeback.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 8e5e003f0093..89dcc7d8395a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1720,8 +1720,8 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
sdtc = mdtc;
}
- if (dirty_exceeded && !wb->dirty_exceeded)
- wb->dirty_exceeded = 1;
+ if (dirty_exceeded != wb->dirty_exceeded)
+ wb->dirty_exceeded = dirty_exceeded;
if (time_is_before_jiffies(READ_ONCE(wb->bw_time_stamp) +
BANDWIDTH_INTERVAL))
@@ -1825,9 +1825,6 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
if (fatal_signal_pending(current))
break;
}
-
- if (!dirty_exceeded && wb->dirty_exceeded)
- wb->dirty_exceeded = 0;
}
static DEFINE_PER_CPU(int, bdp_ratelimits);
--
2.30.2
* [PATCH v6 03/16] mm: Add balance_dirty_pages_ratelimited_flags() function
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 01/16] mm: Move starting of background writeback into the main balancing loop Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 02/16] mm: Move updates of dirty_exceeded into one place Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 6:52 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create() Stefan Roesch
` (13 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
From: Jan Kara <[email protected]>
This adds the helper function balance_dirty_pages_ratelimited_flags().
It is a variant of balance_dirty_pages_ratelimited() that takes an
additional flags parameter, which is passed on to balance_dirty_pages().
For async buffered writes the flag value will be BDP_ASYNC.
If balance_dirty_pages() gets called for an async buffered write, we
don't want to wait. Instead we need to indicate to the caller that
throttling is needed so that it can stop writing and offload the rest
of the write to a context that can block.
balance_dirty_pages_ratelimited() itself is reimplemented on top of the
new helper.
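An illustrative sketch of the intended usage (hypothetical caller shown
only for clarity; the real iomap caller is wired up later in this
series):

	/*
	 * Hypothetical caller: probe the dirty throttle before copying
	 * data.  For an async (nowait) write the helper returns -EAGAIN
	 * instead of sleeping, so the caller can stop writing and punt
	 * the rest of the write to a context that may block.
	 */
	static int throttle_before_copy(struct address_space *mapping,
					bool nowait)
	{
		unsigned int bdp_flags = nowait ? BDP_ASYNC : 0;

		return balance_dirty_pages_ratelimited_flags(mapping, bdp_flags);
	}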
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
---
include/linux/writeback.h | 7 ++++++
mm/page-writeback.c | 48 +++++++++++++++++++++++++--------------
2 files changed, 38 insertions(+), 17 deletions(-)
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index fec248ab1fec..1bddad86a4f6 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -372,7 +372,14 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh);
void wb_update_bandwidth(struct bdi_writeback *wb);
+
+/* Invoke balance dirty pages in async mode. */
+#define BDP_ASYNC 0x0001
+
void balance_dirty_pages_ratelimited(struct address_space *mapping);
+int balance_dirty_pages_ratelimited_flags(struct address_space *mapping,
+ unsigned int flags);
+
bool wb_over_bg_thresh(struct bdi_writeback *wb);
typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc,
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 89dcc7d8395a..3701e813d05f 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1545,8 +1545,8 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
* If we're over `background_thresh' then the writeback threads are woken to
* perform some writeout.
*/
-static void balance_dirty_pages(struct bdi_writeback *wb,
- unsigned long pages_dirtied)
+static int balance_dirty_pages(struct bdi_writeback *wb,
+ unsigned long pages_dirtied, unsigned int flags)
{
struct dirty_throttle_control gdtc_stor = { GDTC_INIT(wb) };
struct dirty_throttle_control mdtc_stor = { MDTC_INIT(wb, &gdtc_stor) };
@@ -1566,6 +1566,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
struct backing_dev_info *bdi = wb->bdi;
bool strictlimit = bdi->capabilities & BDI_CAP_STRICTLIMIT;
unsigned long start_time = jiffies;
+ int ret = 0;
for (;;) {
unsigned long now = jiffies;
@@ -1794,6 +1795,10 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
period,
pause,
start_time);
+ if (flags & BDP_ASYNC) {
+ ret = -EAGAIN;
+ break;
+ }
__set_current_state(TASK_KILLABLE);
wb->dirty_sleep = now;
io_schedule_timeout(pause);
@@ -1825,6 +1830,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
if (fatal_signal_pending(current))
break;
}
+ return ret;
}
static DEFINE_PER_CPU(int, bdp_ratelimits);
@@ -1845,28 +1851,18 @@ static DEFINE_PER_CPU(int, bdp_ratelimits);
*/
DEFINE_PER_CPU(int, dirty_throttle_leaks) = 0;
-/**
- * balance_dirty_pages_ratelimited - balance dirty memory state
- * @mapping: address_space which was dirtied
- *
- * Processes which are dirtying memory should call in here once for each page
- * which was newly dirtied. The function will periodically check the system's
- * dirty state and will initiate writeback if needed.
- *
- * Once we're over the dirty memory limit we decrease the ratelimiting
- * by a lot, to prevent individual processes from overshooting the limit
- * by (ratelimit_pages) each.
- */
-void balance_dirty_pages_ratelimited(struct address_space *mapping)
+int balance_dirty_pages_ratelimited_flags(struct address_space *mapping,
+ unsigned int flags)
{
struct inode *inode = mapping->host;
struct backing_dev_info *bdi = inode_to_bdi(inode);
struct bdi_writeback *wb = NULL;
int ratelimit;
+ int ret = 0;
int *p;
if (!(bdi->capabilities & BDI_CAP_WRITEBACK))
- return;
+ return ret;
if (inode_cgwb_enabled(inode))
wb = wb_get_create_current(bdi, GFP_KERNEL);
@@ -1906,9 +1902,27 @@ void balance_dirty_pages_ratelimited(struct address_space *mapping)
preempt_enable();
if (unlikely(current->nr_dirtied >= ratelimit))
- balance_dirty_pages(wb, current->nr_dirtied);
+ balance_dirty_pages(wb, current->nr_dirtied, flags);
wb_put(wb);
+ return ret;
+}
+
+/**
+ * balance_dirty_pages_ratelimited - balance dirty memory state
+ * @mapping: address_space which was dirtied
+ *
+ * Processes which are dirtying memory should call in here once for each page
+ * which was newly dirtied. The function will periodically check the system's
+ * dirty state and will initiate writeback if needed.
+ *
+ * Once we're over the dirty memory limit we decrease the ratelimiting
+ * by a lot, to prevent individual processes from overshooting the limit
+ * by (ratelimit_pages) each.
+ */
+void balance_dirty_pages_ratelimited(struct address_space *mapping)
+{
+ balance_dirty_pages_ratelimited_flags(mapping, 0);
}
EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
--
2.30.2
* [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (2 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 03/16] mm: Add balance_dirty_pages_ratelimited_flags() function Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 18:25 ` Darrick J. Wong
2022-05-31 6:54 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 05/16] iomap: Add async buffered write support Stefan Roesch
` (12 subsequent siblings)
16 siblings, 2 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
Add the kiocb flags parameter to the function iomap_page_create().
Depending on the value of the flags parameter it enables different gfp
flags.
No intended functional changes in this patch.
Signed-off-by: Stefan Roesch <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/iomap/buffered-io.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8ce8720093b9..d6ddc54e190e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
static struct bio_set iomap_ioend_bioset;
static struct iomap_page *
-iomap_page_create(struct inode *inode, struct folio *folio)
+iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
{
struct iomap_page *iop = to_iomap_page(folio);
unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
+ gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
if (iop || nr_blocks <= 1)
return iop;
+ if (flags & IOMAP_NOWAIT)
+ gfp = GFP_NOWAIT;
+
iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
- GFP_NOFS | __GFP_NOFAIL);
+ gfp);
+
spin_lock_init(&iop->uptodate_lock);
if (folio_test_uptodate(folio))
bitmap_fill(iop->uptodate, nr_blocks);
@@ -226,7 +231,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
if (WARN_ON_ONCE(size > iomap->length))
return -EIO;
if (offset > 0)
- iop = iomap_page_create(iter->inode, folio);
+ iop = iomap_page_create(iter->inode, folio, iter->flags);
else
iop = to_iomap_page(folio);
@@ -264,7 +269,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
return iomap_read_inline_data(iter, folio);
/* zero post-eof blocks as the page may be mapped */
- iop = iomap_page_create(iter->inode, folio);
+ iop = iomap_page_create(iter->inode, folio, iter->flags);
iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen);
if (plen == 0)
goto done;
@@ -550,7 +555,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
size_t len, struct folio *folio)
{
const struct iomap *srcmap = iomap_iter_srcmap(iter);
- struct iomap_page *iop = iomap_page_create(iter->inode, folio);
+ struct iomap_page *iop;
loff_t block_size = i_blocksize(iter->inode);
loff_t block_start = round_down(pos, block_size);
loff_t block_end = round_up(pos + len, block_size);
@@ -561,6 +566,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
return 0;
folio_clear_error(folio);
+ iop = iomap_page_create(iter->inode, folio, iter->flags);
+
do {
iomap_adjust_read_range(iter->inode, folio, &block_start,
block_end - block_start, &poff, &plen);
@@ -1332,7 +1339,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
struct writeback_control *wbc, struct inode *inode,
struct folio *folio, u64 end_pos)
{
- struct iomap_page *iop = iomap_page_create(inode, folio);
+ struct iomap_page *iop = iomap_page_create(inode, folio, 0);
struct iomap_ioend *ioend, *next;
unsigned len = i_blocksize(inode);
unsigned nblocks = i_blocks_per_folio(inode, folio);
--
2.30.2
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-26 17:38 ` [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create() Stefan Roesch
@ 2022-05-26 18:25 ` Darrick J. Wong
2022-05-26 18:43 ` Stefan Roesch
2022-06-01 0:34 ` Olivier Langlois
2022-05-31 6:54 ` Christoph Hellwig
1 sibling, 2 replies; 46+ messages in thread
From: Darrick J. Wong @ 2022-05-26 18:25 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
> Add the kiocb flags parameter to the function iomap_page_create().
> Depending on the value of the flags parameter it enables different gfp
> flags.
>
> No intended functional changes in this patch.
>
> Signed-off-by: Stefan Roesch <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>
> ---
> fs/iomap/buffered-io.c | 19 +++++++++++++------
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 8ce8720093b9..d6ddc54e190e 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
> static struct bio_set iomap_ioend_bioset;
>
> static struct iomap_page *
> -iomap_page_create(struct inode *inode, struct folio *folio)
> +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
> {
> struct iomap_page *iop = to_iomap_page(folio);
> unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
>
> if (iop || nr_blocks <= 1)
> return iop;
>
> + if (flags & IOMAP_NOWAIT)
> + gfp = GFP_NOWAIT;
Hmm. GFP_NOWAIT means we don't wait for reclaim or IO or filesystem
callbacks, and NOFAIL means we retry indefinitely. What happens in the
NOWAIT|NOFAIL case? Does that imply that the kzalloc loops without
triggering direct reclaim until someone else frees enough memory?
--D
> +
> iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
> - GFP_NOFS | __GFP_NOFAIL);
> + gfp);
> +
> spin_lock_init(&iop->uptodate_lock);
> if (folio_test_uptodate(folio))
> bitmap_fill(iop->uptodate, nr_blocks);
> @@ -226,7 +231,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
> if (WARN_ON_ONCE(size > iomap->length))
> return -EIO;
> if (offset > 0)
> - iop = iomap_page_create(iter->inode, folio);
> + iop = iomap_page_create(iter->inode, folio, iter->flags);
> else
> iop = to_iomap_page(folio);
>
> @@ -264,7 +269,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
> return iomap_read_inline_data(iter, folio);
>
> /* zero post-eof blocks as the page may be mapped */
> - iop = iomap_page_create(iter->inode, folio);
> + iop = iomap_page_create(iter->inode, folio, iter->flags);
> iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen);
> if (plen == 0)
> goto done;
> @@ -550,7 +555,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> size_t len, struct folio *folio)
> {
> const struct iomap *srcmap = iomap_iter_srcmap(iter);
> - struct iomap_page *iop = iomap_page_create(iter->inode, folio);
> + struct iomap_page *iop;
> loff_t block_size = i_blocksize(iter->inode);
> loff_t block_start = round_down(pos, block_size);
> loff_t block_end = round_up(pos + len, block_size);
> @@ -561,6 +566,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> return 0;
> folio_clear_error(folio);
>
> + iop = iomap_page_create(iter->inode, folio, iter->flags);
> +
> do {
> iomap_adjust_read_range(iter->inode, folio, &block_start,
> block_end - block_start, &poff, &plen);
> @@ -1332,7 +1339,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
> struct writeback_control *wbc, struct inode *inode,
> struct folio *folio, u64 end_pos)
> {
> - struct iomap_page *iop = iomap_page_create(inode, folio);
> + struct iomap_page *iop = iomap_page_create(inode, folio, 0);
> struct iomap_ioend *ioend, *next;
> unsigned len = i_blocksize(inode);
> unsigned nblocks = i_blocks_per_folio(inode, folio);
> --
> 2.30.2
>
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-26 18:25 ` Darrick J. Wong
@ 2022-05-26 18:43 ` Stefan Roesch
2022-06-01 0:34 ` Olivier Langlois
1 sibling, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 18:43 UTC (permalink / raw)
To: Darrick J. Wong
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On 5/26/22 11:25 AM, Darrick J. Wong wrote:
> On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
>> Add the kiocb flags parameter to the function iomap_page_create().
>> Depending on the value of the flags parameter it enables different gfp
>> flags.
>>
>> No intended functional changes in this patch.
>>
>> Signed-off-by: Stefan Roesch <[email protected]>
>> Reviewed-by: Jan Kara <[email protected]>
>> ---
>> fs/iomap/buffered-io.c | 19 +++++++++++++------
>> 1 file changed, 13 insertions(+), 6 deletions(-)
>>
>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>> index 8ce8720093b9..d6ddc54e190e 100644
>> --- a/fs/iomap/buffered-io.c
>> +++ b/fs/iomap/buffered-io.c
>> @@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
>> static struct bio_set iomap_ioend_bioset;
>>
>> static struct iomap_page *
>> -iomap_page_create(struct inode *inode, struct folio *folio)
>> +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
>> {
>> struct iomap_page *iop = to_iomap_page(folio);
>> unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
>> + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
>>
>> if (iop || nr_blocks <= 1)
>> return iop;
>>
>> + if (flags & IOMAP_NOWAIT)
>> + gfp = GFP_NOWAIT;
>
> Hmm. GFP_NOWAIT means we don't wait for reclaim or IO or filesystem
> callbacks, and NOFAIL means we retry indefinitely. What happens in the
> NOWAIT|NOFAIL case? Does that imply that the kzalloc loops without
> triggering direct reclaim until someone else frees enough memory?
>
Before this patch all requests allocate memory with the GFP_NOFS | __GFP_NOFAIL
flags. With this patch, nowait requests are allocated with GFP_NOWAIT instead,
so I don't see how an allocation with GFP_NOWAIT | __GFP_NOFAIL could happen.
If an allocation with GFP_NOWAIT fails, the write code path returns -EAGAIN.
In io-uring, the write request will then be punted to the io-worker. The
io-worker will process the write request, but nowait will not be specified.
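For reference, this is the check __iomap_write_begin() grows in patch 05
of this series (abridged into a standalone sketch here; the wrapper
function is only for illustration):

	/*
	 * Abridged from __iomap_write_begin() as changed in patch 05: when
	 * the GFP_NOWAIT allocation of the iomap_page fails (iop stays NULL)
	 * for a multi-block folio, the nowait write bails out with -EAGAIN
	 * instead of blocking, and io-uring punts it to the io-worker.
	 */
	static int nowait_write_begin_sketch(const struct iomap_iter *iter,
					     struct folio *folio)
	{
		unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio);
		struct iomap_page *iop;

		iop = iomap_page_create(iter->inode, folio, iter->flags);
		if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
			return -EAGAIN;

		return 0;
	}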
> --D
>
>> +
>> iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
>> - GFP_NOFS | __GFP_NOFAIL);
>> + gfp);
>> +
>> spin_lock_init(&iop->uptodate_lock);
>> if (folio_test_uptodate(folio))
>> bitmap_fill(iop->uptodate, nr_blocks);
>> @@ -226,7 +231,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
>> if (WARN_ON_ONCE(size > iomap->length))
>> return -EIO;
>> if (offset > 0)
>> - iop = iomap_page_create(iter->inode, folio);
>> + iop = iomap_page_create(iter->inode, folio, iter->flags);
>> else
>> iop = to_iomap_page(folio);
>>
>> @@ -264,7 +269,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
>> return iomap_read_inline_data(iter, folio);
>>
>> /* zero post-eof blocks as the page may be mapped */
>> - iop = iomap_page_create(iter->inode, folio);
>> + iop = iomap_page_create(iter->inode, folio, iter->flags);
>> iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen);
>> if (plen == 0)
>> goto done;
>> @@ -550,7 +555,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>> size_t len, struct folio *folio)
>> {
>> const struct iomap *srcmap = iomap_iter_srcmap(iter);
>> - struct iomap_page *iop = iomap_page_create(iter->inode, folio);
>> + struct iomap_page *iop;
>> loff_t block_size = i_blocksize(iter->inode);
>> loff_t block_start = round_down(pos, block_size);
>> loff_t block_end = round_up(pos + len, block_size);
>> @@ -561,6 +566,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>> return 0;
>> folio_clear_error(folio);
>>
>> + iop = iomap_page_create(iter->inode, folio, iter->flags);
>> +
>> do {
>> iomap_adjust_read_range(iter->inode, folio, &block_start,
>> block_end - block_start, &poff, &plen);
>> @@ -1332,7 +1339,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>> struct writeback_control *wbc, struct inode *inode,
>> struct folio *folio, u64 end_pos)
>> {
>> - struct iomap_page *iop = iomap_page_create(inode, folio);
>> + struct iomap_page *iop = iomap_page_create(inode, folio, 0);
>> struct iomap_ioend *ioend, *next;
>> unsigned len = i_blocksize(inode);
>> unsigned nblocks = i_blocks_per_folio(inode, folio);
>> --
>> 2.30.2
>>
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-26 18:25 ` Darrick J. Wong
2022-05-26 18:43 ` Stefan Roesch
@ 2022-06-01 0:34 ` Olivier Langlois
2022-06-01 8:21 ` Jan Kara
1 sibling, 1 reply; 46+ messages in thread
From: Olivier Langlois @ 2022-06-01 0:34 UTC (permalink / raw)
To: Darrick J. Wong, Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, 2022-05-26 at 11:25 -0700, Darrick J. Wong wrote:
> On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
> >
> > static struct iomap_page *
> > -iomap_page_create(struct inode *inode, struct folio *folio)
> > +iomap_page_create(struct inode *inode, struct folio *folio,
> > unsigned int flags)
> > {
> > struct iomap_page *iop = to_iomap_page(folio);
> > unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> > + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
> >
> > if (iop || nr_blocks <= 1)
> > return iop;
> >
> > + if (flags & IOMAP_NOWAIT)
> > + gfp = GFP_NOWAIT;
>
> Hmm. GFP_NOWAIT means we don't wait for reclaim or IO or filesystem
> callbacks, and NOFAIL means we retry indefinitely. What happens in
> the
> NOWAIT|NOFAIL case? Does that imply that the kzalloc loops without
> triggering direct reclaim until someone else frees enough memory?
>
> --D
I have a question that is a bit off-topic, but since it concerns GFP
flags and that is what is discussed here, maybe a participant will
kindly give me some hints about this mystery that has burned me for so
long...
Why does out_of_memory() require GFP_FS to kill a process? AFAIK, no
filesystem-dependent operations are needed to kill a process...
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-06-01 0:34 ` Olivier Langlois
@ 2022-06-01 8:21 ` Jan Kara
2022-06-01 17:29 ` Olivier Langlois
0 siblings, 1 reply; 46+ messages in thread
From: Jan Kara @ 2022-06-01 8:21 UTC (permalink / raw)
To: Olivier Langlois
Cc: Darrick J. Wong, Stefan Roesch, io-uring, kernel-team, linux-mm,
linux-xfs, linux-fsdevel, david, jack, hch
On Tue 31-05-22 20:34:20, Olivier Langlois wrote:
> On Thu, 2022-05-26 at 11:25 -0700, Darrick J. Wong wrote:
> > On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
> > >
> > > static struct iomap_page *
> > > -iomap_page_create(struct inode *inode, struct folio *folio)
> > > +iomap_page_create(struct inode *inode, struct folio *folio,
> > > unsigned int flags)
> > > {
> > > struct iomap_page *iop = to_iomap_page(folio);
> > > unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> > > + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
> > >
> > > if (iop || nr_blocks <= 1)
> > > return iop;
> > >
> > > + if (flags & IOMAP_NOWAIT)
> > > + gfp = GFP_NOWAIT;
> >
> > Hmm. GFP_NOWAIT means we don't wait for reclaim or IO or filesystem
> > callbacks, and NOFAIL means we retry indefinitely. What happens in
> > the
> > NOWAIT|NOFAIL case? Does that imply that the kzalloc loops without
> > triggering direct reclaim until someone else frees enough memory?
> >
> > --D
>
> I have a question that is a bit offtopic but since it is concerning GFP
> flags and this is what is discussed here maybe a participant will
> kindly give me some hints about this mystery that has burned me for so
> long...
>
> Why does out_of_memory() requires GFP_FS to kill a process? AFAIK, no
> filesystem-dependent operations are needed to kill a process...
AFAIK it is because without GFP_FS, the chances for direct reclaim are
fairly limited so we are not sure whether the machine is indeed out of
memory or whether it is just that we need to reclaim from fs pools to free
up memory.
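From memory, the check in out_of_memory() is roughly the following
(paraphrased, not verbatim):

	/*
	 * Paraphrased from mm/oom_kill.c (from memory, not verbatim): an
	 * allocation that was not allowed to enter fs reclaim is not allowed
	 * to declare OOM either, because reclaim never had a real chance to
	 * free fs-pinned memory.  Returning true makes the caller treat it
	 * as progress and simply retry the allocation.
	 */
	if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS) && !is_memcg_oom(oc))
		return true;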
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-06-01 8:21 ` Jan Kara
@ 2022-06-01 17:29 ` Olivier Langlois
0 siblings, 0 replies; 46+ messages in thread
From: Olivier Langlois @ 2022-06-01 17:29 UTC (permalink / raw)
To: Jan Kara
Cc: Darrick J. Wong, Stefan Roesch, io-uring, kernel-team, linux-mm,
linux-xfs, linux-fsdevel, david, hch
On Wed, 2022-06-01 at 10:21 +0200, Jan Kara wrote:
> > I have a question that is a bit offtopic but since it is concerning
> > GFP
> > flags and this is what is discussed here maybe a participant will
> > kindly give me some hints about this mystery that has burned me for
> > so
> > long...
> >
> > Why does out_of_memory() requires GFP_FS to kill a process? AFAIK,
> > no
> > filesystem-dependent operations are needed to kill a process...
>
> AFAIK it is because without GFP_FS, the chances for direct reclaim
> are
> fairly limited so we are not sure whether the machine is indeed out
> of
> memory or whether it is just that we need to reclaim from fs pools to
> free
> up memory.
>
> Honza
Jan,
thx a lot for this lead. I will study it further. Your answer made me
realize that the meaning of direct reclaim was not crystal clear to me.
I'll return to my Bovet & Cesati book to clear that up (that is
probably the book in my bookshelf that I have read the most).
After sending out my question, I came up with another possible
explanation...
Maybe it is not so much sending the killing signal to the oom victim
that requires GFP_FS; maybe the trouble the condition avoids is the oom
victim process trying to return memory to the VFS as it exits, while
the VFS is busy waiting for its allocation request to succeed...
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-26 17:38 ` [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create() Stefan Roesch
2022-05-26 18:25 ` Darrick J. Wong
@ 2022-05-31 6:54 ` Christoph Hellwig
2022-05-31 18:12 ` Stefan Roesch
1 sibling, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2022-05-31 6:54 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
> Add the kiocb flags parameter to the function iomap_page_create().
> Depending on the value of the flags parameter it enables different gfp
> flags.
>
> No intended functional changes in this patch.
>
> Signed-off-by: Stefan Roesch <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>
> ---
> fs/iomap/buffered-io.c | 19 +++++++++++++------
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 8ce8720093b9..d6ddc54e190e 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
> static struct bio_set iomap_ioend_bioset;
>
> static struct iomap_page *
> -iomap_page_create(struct inode *inode, struct folio *folio)
> +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
> {
> struct iomap_page *iop = to_iomap_page(folio);
> unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
>
> if (iop || nr_blocks <= 1)
> return iop;
>
> + if (flags & IOMAP_NOWAIT)
> + gfp = GFP_NOWAIT;
> +
Maybe this would confuse people less if it was:
if (flags & IOMAP_NOWAIT)
gfp = GFP_NOWAIT;
else
gfp = GFP_NOFS | __GFP_NOFAIL;
but even as is it is perfectly fine (and I tend to write these kinds of
shortcuts as well).
Looks good either way:
Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-31 6:54 ` Christoph Hellwig
@ 2022-05-31 18:12 ` Stefan Roesch
2022-06-01 17:56 ` Darrick J. Wong
0 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-31 18:12 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack
On 5/30/22 11:54 PM, Christoph Hellwig wrote:
> On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
>> Add the kiocb flags parameter to the function iomap_page_create().
>> Depending on the value of the flags parameter it enables different gfp
>> flags.
>>
>> No intended functional changes in this patch.
>>
>> Signed-off-by: Stefan Roesch <[email protected]>
>> Reviewed-by: Jan Kara <[email protected]>
>> ---
>> fs/iomap/buffered-io.c | 19 +++++++++++++------
>> 1 file changed, 13 insertions(+), 6 deletions(-)
>>
>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>> index 8ce8720093b9..d6ddc54e190e 100644
>> --- a/fs/iomap/buffered-io.c
>> +++ b/fs/iomap/buffered-io.c
>> @@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
>> static struct bio_set iomap_ioend_bioset;
>>
>> static struct iomap_page *
>> -iomap_page_create(struct inode *inode, struct folio *folio)
>> +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
>> {
>> struct iomap_page *iop = to_iomap_page(folio);
>> unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
>> + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
>>
>> if (iop || nr_blocks <= 1)
>> return iop;
>>
>> + if (flags & IOMAP_NOWAIT)
>> + gfp = GFP_NOWAIT;
>> +
>
> Maybe this would confuse people less if it was:
>
> if (flags & IOMAP_NOWAIT)
> gfp = GFP_NOWAIT;
> else
> gfp = GFP_NOFS | __GFP_NOFAIL;
>
I made the above change.
> but even as is it is perfectly fine (and I tend to write these kinds of
> shortcuts as well).
>
> Looks good either way:
>
> Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create()
2022-05-31 18:12 ` Stefan Roesch
@ 2022-06-01 17:56 ` Darrick J. Wong
0 siblings, 0 replies; 46+ messages in thread
From: Darrick J. Wong @ 2022-06-01 17:56 UTC (permalink / raw)
To: Stefan Roesch
Cc: Christoph Hellwig, io-uring, kernel-team, linux-mm, linux-xfs,
linux-fsdevel, david, jack
On Tue, May 31, 2022 at 11:12:38AM -0700, Stefan Roesch wrote:
>
>
> On 5/30/22 11:54 PM, Christoph Hellwig wrote:
> > On Thu, May 26, 2022 at 10:38:28AM -0700, Stefan Roesch wrote:
> >> Add the kiocb flags parameter to the function iomap_page_create().
> >> Depending on the value of the flags parameter it enables different gfp
> >> flags.
> >>
> >> No intended functional changes in this patch.
> >>
> >> Signed-off-by: Stefan Roesch <[email protected]>
> >> Reviewed-by: Jan Kara <[email protected]>
> >> ---
> >> fs/iomap/buffered-io.c | 19 +++++++++++++------
> >> 1 file changed, 13 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> >> index 8ce8720093b9..d6ddc54e190e 100644
> >> --- a/fs/iomap/buffered-io.c
> >> +++ b/fs/iomap/buffered-io.c
> >> @@ -44,16 +44,21 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
> >> static struct bio_set iomap_ioend_bioset;
> >>
> >> static struct iomap_page *
> >> -iomap_page_create(struct inode *inode, struct folio *folio)
> >> +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
> >> {
> >> struct iomap_page *iop = to_iomap_page(folio);
> >> unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> >> + gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
> >>
> >> if (iop || nr_blocks <= 1)
> >> return iop;
> >>
> >> + if (flags & IOMAP_NOWAIT)
> >> + gfp = GFP_NOWAIT;
> >> +
> >
> > Maybe this would confuse people less if it was:
> >
> > if (flags & IOMAP_NOWAIT)
> > gfp = GFP_NOWAIT;
> > else
> > gfp = GFP_NOFS | __GFP_NOFAIL;
> >
>
> I made the above change.
Thanks. I misread all the gfp handling as:
gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
if (flags & IOMAP_NOWAIT)
gfp |= GFP_NOWAIT;
Which was why my question did not make sense. Sorry about that. :(
--D
>
> > but even as is it is perfectly fine (and I tend to write these kinds of
> > shortcuts as well).
> >
> > Looks good either way:
> >
> > Reviewed-by: Christoph Hellwig <[email protected]>
* [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (3 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 04/16] iomap: Add flags parameter to iomap_page_create() Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 18:42 ` Darrick J. Wong
` (2 more replies)
2022-05-26 17:38 ` [PATCH v6 06/16] fs: Add check for async buffered writes to generic_write_checks Stefan Roesch
` (11 subsequent siblings)
16 siblings, 3 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This adds async buffered write support to iomap.
This replaces the call to balance_dirty_pages_ratelimited() with a call
to balance_dirty_pages_ratelimited_flags(), which allows specifying
whether the write request is async or not.
In addition this also moves the call to the beginning of the copy loop
in iomap_write_iter(). If the call is at the end of the loop and the
decision is made to throttle writes, then there is no request that
io-uring can wait on. By moving it to the beginning of the loop, the
write request is not issued, but returns -EAGAIN instead. io-uring will
punt the request and process it in the io-worker.
By moving the call to the beginning of the loop, the write throttling
will happen one page later.
Signed-off-by: Stefan Roesch <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/iomap/buffered-io.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index d6ddc54e190e..2281667646d2 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -559,6 +559,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
loff_t block_size = i_blocksize(iter->inode);
loff_t block_start = round_down(pos, block_size);
loff_t block_end = round_up(pos + len, block_size);
+ unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio);
size_t from = offset_in_folio(folio, pos), to = from + len;
size_t poff, plen;
@@ -567,6 +568,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
folio_clear_error(folio);
iop = iomap_page_create(iter->inode, folio, iter->flags);
+ if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
+ return -EAGAIN;
do {
iomap_adjust_read_range(iter->inode, folio, &block_start,
@@ -584,7 +587,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
return -EIO;
folio_zero_segments(folio, poff, from, to, poff + plen);
} else {
- int status = iomap_read_folio_sync(block_start, folio,
+ int status;
+
+ if (iter->flags & IOMAP_NOWAIT)
+ return -EAGAIN;
+
+ status = iomap_read_folio_sync(block_start, folio,
poff, plen, srcmap);
if (status)
return status;
@@ -613,6 +621,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
int status = 0;
+ if (iter->flags & IOMAP_NOWAIT)
+ fgp |= FGP_NOWAIT;
+
BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
if (srcmap != &iter->iomap)
BUG_ON(pos + len > srcmap->offset + srcmap->length);
@@ -750,6 +761,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
loff_t pos = iter->pos;
ssize_t written = 0;
long status = 0;
+ struct address_space *mapping = iter->inode->i_mapping;
+ unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;
do {
struct folio *folio;
@@ -762,6 +775,11 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
bytes = min_t(unsigned long, PAGE_SIZE - offset,
iov_iter_count(i));
again:
+ status = balance_dirty_pages_ratelimited_flags(mapping,
+ bdp_flags);
+ if (unlikely(status))
+ break;
+
if (bytes > length)
bytes = length;
@@ -770,6 +788,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
* Otherwise there's a nasty deadlock on copying from the
* same page as we're writing to, without it being marked
* up-to-date.
+ *
+ * For async buffered writes the assumption is that the user
+ * page has already been faulted in. This can be optimized by
+ * faulting the user page in the prepare phase of io-uring.
*/
if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
status = -EFAULT;
@@ -781,7 +803,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
break;
page = folio_file_page(folio, pos >> PAGE_SHIFT);
- if (mapping_writably_mapped(iter->inode->i_mapping))
+ if (mapping_writably_mapped(mapping))
flush_dcache_page(page);
copied = copy_page_from_iter_atomic(page, offset, bytes, i);
@@ -806,8 +828,6 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
pos += status;
written += status;
length -= status;
-
- balance_dirty_pages_ratelimited(iter->inode->i_mapping);
} while (iov_iter_count(i) && length);
return written ? written : status;
@@ -825,6 +845,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
};
int ret;
+ if (iocb->ki_flags & IOCB_NOWAIT)
+ iter.flags |= IOMAP_NOWAIT;
+
while ((ret = iomap_iter(&iter, ops)) > 0)
iter.processed = iomap_write_iter(&iter, i);
if (iter.pos == iocb->ki_pos)
--
2.30.2
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-26 17:38 ` [PATCH v6 05/16] iomap: Add async buffered write support Stefan Roesch
@ 2022-05-26 18:42 ` Darrick J. Wong
2022-05-26 22:37 ` Dave Chinner
2022-05-31 6:58 ` Christoph Hellwig
2 siblings, 0 replies; 46+ messages in thread
From: Darrick J. Wong @ 2022-05-26 18:42 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:29AM -0700, Stefan Roesch wrote:
> This adds async buffered write support to iomap.
>
> This replaces the call to balance_dirty_pages_ratelimited() with the
> call to balance_dirty_pages_ratelimited_flags. This allows to specify if
> the write request is async or not.
>
> In addition this also moves the above function call to the beginning of
> the function. If the function call is at the end of the function and the
> decision is made to throttle writes, then there is no request that
> io-uring can wait on. By moving it to the beginning of the function, the
> write request is not issued, but returns -EAGAIN instead. io-uring will
> punt the request and process it in the io-worker.
>
> By moving the function call to the beginning of the function, the write
> throttling will happen one page later.
It does? I would have thought that moving it before the
iomap_write_begin() call would make the throttling happen one page
sooner? Sorry if I'm being dense here...
> Signed-off-by: Stefan Roesch <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>
> ---
> fs/iomap/buffered-io.c | 31 +++++++++++++++++++++++++++----
> 1 file changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index d6ddc54e190e..2281667646d2 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -559,6 +559,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> loff_t block_size = i_blocksize(iter->inode);
> loff_t block_start = round_down(pos, block_size);
> loff_t block_end = round_up(pos + len, block_size);
> + unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio);
> size_t from = offset_in_folio(folio, pos), to = from + len;
> size_t poff, plen;
>
> @@ -567,6 +568,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> folio_clear_error(folio);
>
> iop = iomap_page_create(iter->inode, folio, iter->flags);
> + if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
> + return -EAGAIN;
>
> do {
> iomap_adjust_read_range(iter->inode, folio, &block_start,
> @@ -584,7 +587,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> return -EIO;
> folio_zero_segments(folio, poff, from, to, poff + plen);
> } else {
> - int status = iomap_read_folio_sync(block_start, folio,
> + int status;
> +
> + if (iter->flags & IOMAP_NOWAIT)
> + return -EAGAIN;
> +
> + status = iomap_read_folio_sync(block_start, folio,
> poff, plen, srcmap);
> if (status)
> return status;
> @@ -613,6 +621,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
> int status = 0;
>
> + if (iter->flags & IOMAP_NOWAIT)
> + fgp |= FGP_NOWAIT;
FGP_NOWAIT can cause __filemap_get_folio to return a NULL folio, which
makes iomap_write_begin return -ENOMEM. If nothing has been written
yet, won't that cause the ENOMEM to escape to userspace? Why do we want
that instead of EAGAIN?
> +
> BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
> if (srcmap != &iter->iomap)
> BUG_ON(pos + len > srcmap->offset + srcmap->length);
> @@ -750,6 +761,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> loff_t pos = iter->pos;
> ssize_t written = 0;
> long status = 0;
> + struct address_space *mapping = iter->inode->i_mapping;
> + unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0;
>
> do {
> struct folio *folio;
> @@ -762,6 +775,11 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> bytes = min_t(unsigned long, PAGE_SIZE - offset,
> iov_iter_count(i));
> again:
> + status = balance_dirty_pages_ratelimited_flags(mapping,
> + bdp_flags);
> + if (unlikely(status))
> + break;
> +
> if (bytes > length)
> bytes = length;
>
> @@ -770,6 +788,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> * Otherwise there's a nasty deadlock on copying from the
> * same page as we're writing to, without it being marked
> * up-to-date.
> + *
> + * For async buffered writes the assumption is that the user
> + * page has already been faulted in. This can be optimized by
> + * faulting the user page in the prepare phase of io-uring.
I don't think this pattern is unique to async writes with io_uring --
gfs2 also wanted this "try as many pages as you can until you hit a page
fault and then return a short write to the caller so it can fault in the
rest" behavior.
--D
> */
> if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
> status = -EFAULT;
> @@ -781,7 +803,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> break;
>
> page = folio_file_page(folio, pos >> PAGE_SHIFT);
> - if (mapping_writably_mapped(iter->inode->i_mapping))
> + if (mapping_writably_mapped(mapping))
> flush_dcache_page(page);
>
> copied = copy_page_from_iter_atomic(page, offset, bytes, i);
> @@ -806,8 +828,6 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> pos += status;
> written += status;
> length -= status;
> -
> - balance_dirty_pages_ratelimited(iter->inode->i_mapping);
> } while (iov_iter_count(i) && length);
>
> return written ? written : status;
> @@ -825,6 +845,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
> };
> int ret;
>
> + if (iocb->ki_flags & IOCB_NOWAIT)
> + iter.flags |= IOMAP_NOWAIT;
> +
> while ((ret = iomap_iter(&iter, ops)) > 0)
> iter.processed = iomap_write_iter(&iter, i);
> if (iter.pos == iocb->ki_pos)
> --
> 2.30.2
>
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-26 17:38 ` [PATCH v6 05/16] iomap: Add async buffered write support Stefan Roesch
2022-05-26 18:42 ` Darrick J. Wong
@ 2022-05-26 22:37 ` Dave Chinner
2022-05-27 8:42 ` Jan Kara
2022-05-31 6:58 ` Christoph Hellwig
2 siblings, 1 reply; 46+ messages in thread
From: Dave Chinner @ 2022-05-26 22:37 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, jack,
hch
On Thu, May 26, 2022 at 10:38:29AM -0700, Stefan Roesch wrote:
> This adds async buffered write support to iomap.
>
> This replaces the call to balance_dirty_pages_ratelimited() with the
> call to balance_dirty_pages_ratelimited_flags. This allows to specify if
> the write request is async or not.
>
> In addition this also moves the above function call to the beginning of
> the function. If the function call is at the end of the function and the
> decision is made to throttle writes, then there is no request that
> io-uring can wait on. By moving it to the beginning of the function, the
> write request is not issued, but returns -EAGAIN instead. io-uring will
> punt the request and process it in the io-worker.
>
> By moving the function call to the beginning of the function, the write
> throttling will happen one page later.
Won't it happen one page sooner? I.e. on single page writes we'll
end up throttling *before* we dirty the page, not *after* we dirty
the page. IOWs, we can't wait for the page that we just dirtied to
be cleaned to make progress and so this now makes the loop dependent
on pages dirtied by other writers being cleaned to guarantee
forwards progress?
That seems like a subtle but quite significant change of
algorithm...
> Signed-off-by: Stefan Roesch <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index d6ddc54e190e..2281667646d2 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -559,6 +559,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> loff_t block_size = i_blocksize(iter->inode);
> loff_t block_start = round_down(pos, block_size);
> loff_t block_end = round_up(pos + len, block_size);
> + unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio);
> size_t from = offset_in_folio(folio, pos), to = from + len;
> size_t poff, plen;
>
> @@ -567,6 +568,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> folio_clear_error(folio);
>
> iop = iomap_page_create(iter->inode, folio, iter->flags);
> + if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
> + return -EAGAIN;
>
Hmmm. I see what looks to be an undesirable pattern here...
1. Memory allocation failure here on the second page of a write.
> @@ -806,8 +828,6 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> pos += status;
> written += status;
> length -= status;
> -
> - balance_dirty_pages_ratelimited(iter->inode->i_mapping);
> } while (iov_iter_count(i) && length);
>
> return written ? written : status;
2. we break and return 4kB from the first page copied.
> @@ -825,6 +845,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
> };
> int ret;
>
> + if (iocb->ki_flags & IOCB_NOWAIT)
> + iter.flags |= IOMAP_NOWAIT;
> +
> while ((ret = iomap_iter(&iter, ops)) > 0)
> iter.processed = iomap_write_iter(&iter, i);
3. This sets iter.processed = 4kB, and we call iomap_iter() again.
This sees iter.processed > 0 and there's still more to write, so
it returns 1, and go around the loop again.
Hence spurious memory allocation failures in the IOMAP_NOWAIT case will
not cause this buffered write loop to exit. Worst case, we fail the
allocation on every second __iomap_write_begin() call, so the write
takes much longer and consumes lots more CPU hammering memory
allocation, because no single failed allocation will cause the write
to return a short write to the caller.
This seems undesirable to me. If we are failing memory allocations,
we need to back off, not hammer memory allocation harder without
allowing reclaim to make progress...
Cheers,
Dave.
--
Dave Chinner
[email protected]
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-26 22:37 ` Dave Chinner
@ 2022-05-27 8:42 ` Jan Kara
2022-05-27 22:52 ` Dave Chinner
0 siblings, 1 reply; 46+ messages in thread
From: Jan Kara @ 2022-05-27 8:42 UTC (permalink / raw)
To: Dave Chinner
Cc: Stefan Roesch, io-uring, kernel-team, linux-mm, linux-xfs,
linux-fsdevel, jack, hch
On Fri 27-05-22 08:37:05, Dave Chinner wrote:
> On Thu, May 26, 2022 at 10:38:29AM -0700, Stefan Roesch wrote:
> > This adds async buffered write support to iomap.
> >
> > This replaces the call to balance_dirty_pages_ratelimited() with the
> > call to balance_dirty_pages_ratelimited_flags. This allows to specify if
> > the write request is async or not.
> >
> > In addition this also moves the above function call to the beginning of
> > the function. If the function call is at the end of the function and the
> > decision is made to throttle writes, then there is no request that
> > io-uring can wait on. By moving it to the beginning of the function, the
> > write request is not issued, but returns -EAGAIN instead. io-uring will
> > punt the request and process it in the io-worker.
> >
> > By moving the function call to the beginning of the function, the write
> > throttling will happen one page later.
>
> Won't it happen one page sooner? I.e. on single page writes we'll
> end up throttling *before* we dirty the page, not *after* we dirty
> the page. IOWs, we can't wait for the page that we just dirtied to
> be cleaned to make progress and so this now makes the loop dependent
> on pages dirtied by other writers being cleaned to guarantee
> forwards progress?
>
> That seems like a subtle but quite significant change of
> algorithm...
So I'm convinced the difference will be pretty much in the noise because
of how many dirty pages there have to be to even start throttling
processes, but some more arguments are:
* we ratelimit calls to balance_dirty_pages() based on the number of
pages dirtied by the current process in balance_dirty_pages_ratelimited()
* balance_dirty_pages() uses the number of pages dirtied by the current
process to decide about the delay.
So the only situation where I could see this making a difference would be
if the dirty limit is a handful of pages, and even there I have a hard
time seeing how exactly. So I'm OK with the change and in case we see it
cause problems somewhere, we'll think about how to fix it based on the
exact scenario.
I guess the above two points are the reason why Stefan writes about
throttling one page later: we only count the number of pages dirtied up
to this moment, so the page dirtied by this iteration of the loop in
iomap_write_iter() will get reflected only by the call to
balance_dirty_pages_ratelimited() in the next iteration (or the next
call to iomap_write_iter()).
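Roughly, the gating looks like this after patch 03 (simplified sketch;
the per-cpu bdp_ratelimits / dirty_throttle_leaks handling and the exact
ratelimit computation are left out):

	/*
	 * Simplified sketch of balance_dirty_pages_ratelimited_flags():
	 * the expensive balance_dirty_pages() call only happens once the
	 * current task has dirtied 'ratelimit' pages since it was last
	 * throttled, which is why moving the call by one page inside the
	 * copy loop makes so little practical difference.
	 */
	static void ratelimit_sketch(struct bdi_writeback *wb, unsigned int flags)
	{
		int ratelimit = current->nr_dirtied_pause;	/* per-task budget */

		if (unlikely(current->nr_dirtied >= ratelimit))
			balance_dirty_pages(wb, current->nr_dirtied, flags);
	}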
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-27 8:42 ` Jan Kara
@ 2022-05-27 22:52 ` Dave Chinner
2022-05-31 7:55 ` Jan Kara
0 siblings, 1 reply; 46+ messages in thread
From: Dave Chinner @ 2022-05-27 22:52 UTC (permalink / raw)
To: Jan Kara
Cc: Stefan Roesch, io-uring, kernel-team, linux-mm, linux-xfs,
linux-fsdevel, hch
On Fri, May 27, 2022 at 10:42:03AM +0200, Jan Kara wrote:
> On Fri 27-05-22 08:37:05, Dave Chinner wrote:
> > On Thu, May 26, 2022 at 10:38:29AM -0700, Stefan Roesch wrote:
> > > This adds async buffered write support to iomap.
> > >
> > > This replaces the call to balance_dirty_pages_ratelimited() with the
> > > call to balance_dirty_pages_ratelimited_flags. This allows to specify if
> > > the write request is async or not.
> > >
> > > In addition this also moves the above function call to the beginning of
> > > the function. If the function call is at the end of the function and the
> > > decision is made to throttle writes, then there is no request that
> > > io-uring can wait on. By moving it to the beginning of the function, the
> > > write request is not issued, but returns -EAGAIN instead. io-uring will
> > > punt the request and process it in the io-worker.
> > >
> > > By moving the function call to the beginning of the function, the write
> > > throttling will happen one page later.
> >
> > Won't it happen one page sooner? I.e. on single page writes we'll
> > end up throttling *before* we dirty the page, not *after* we dirty
> > the page. IOWs, we can't wait for the page that we just dirtied to
> > be cleaned to make progress and so this now makes the loop dependent
> > on pages dirtied by other writers being cleaned to guarantee
> > forwards progress?
> >
> > That seems like a subtle but quite significant change of
> > algorithm...
>
> So I'm convinced the difference will be pretty much in the noise because of
> how many dirty pages there have to be to even start throttling processes
> but some more arguments are:
>
> * we ratelimit calls to balance_dirty_pages() based on number of pages
> dirtied by the current process in balance_dirty_pages_ratelimited()
>
> * balance_dirty_pages() uses number of pages dirtied by the current process
> to decide about the delay.
>
> So the only situation where I could see this making a difference would be
> if dirty limit is a handful of pages and even there I have hard time to see
> how exactly.
That's kinda what worries me - we do see people winding the dirty
thresholds way down to work around various niche problems with
dirty page buildup.
We also have small extra accounting overhead for cases where we've
stacked layers so that the lower layers don't dirty throttle before
the higher layer. If the lower layer throttles first, then the
higher layer can't clean pages and we can deadlock.
Those are the sorts of subtle, niche situations where I worry that
the subtle "throttle first, write second" change could manifest...
Cheers,
Dave.
--
Dave Chinner
[email protected]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-27 22:52 ` Dave Chinner
@ 2022-05-31 7:55 ` Jan Kara
0 siblings, 0 replies; 46+ messages in thread
From: Jan Kara @ 2022-05-31 7:55 UTC (permalink / raw)
To: Dave Chinner
Cc: Jan Kara, Stefan Roesch, io-uring, kernel-team, linux-mm,
linux-xfs, linux-fsdevel, hch
On Sat 28-05-22 08:52:40, Dave Chinner wrote:
> On Fri, May 27, 2022 at 10:42:03AM +0200, Jan Kara wrote:
> > On Fri 27-05-22 08:37:05, Dave Chinner wrote:
> > > On Thu, May 26, 2022 at 10:38:29AM -0700, Stefan Roesch wrote:
> > > > This adds async buffered write support to iomap.
> > > >
> > > > This replaces the call to balance_dirty_pages_ratelimited() with the
> > > > call to balance_dirty_pages_ratelimited_flags. This allows to specify if
> > > > the write request is async or not.
> > > >
> > > > In addition this also moves the above function call to the beginning of
> > > > the function. If the function call is at the end of the function and the
> > > > decision is made to throttle writes, then there is no request that
> > > > io-uring can wait on. By moving it to the beginning of the function, the
> > > > write request is not issued, but returns -EAGAIN instead. io-uring will
> > > > punt the request and process it in the io-worker.
> > > >
> > > > By moving the function call to the beginning of the function, the write
> > > > throttling will happen one page later.
> > >
> > > Won't it happen one page sooner? I.e. on single page writes we'll
> > > end up throttling *before* we dirty the page, not *after* we dirty
> > > the page. IOWs, we can't wait for the page that we just dirtied to
> > > be cleaned to make progress and so this now makes the loop dependent
> > > on pages dirtied by other writers being cleaned to guarantee
> > > forwards progress?
> > >
> > > That seems like a subtle but quite significant change of
> > > algorithm...
> >
> > So I'm convinced the difference will be pretty much in the noise because of
> > how many dirty pages there have to be to even start throttling processes
> > but some more arguments are:
> >
> > * we ratelimit calls to balance_dirty_pages() based on number of pages
> > dirtied by the current process in balance_dirty_pages_ratelimited()
> >
> > * balance_dirty_pages() uses number of pages dirtied by the current process
> > to decide about the delay.
> >
> > So the only situation where I could see this making a difference would be
> > if dirty limit is a handful of pages and even there I have hard time to see
> > how exactly.
>
> That's kinda what worries me - we do see people winding the dirty
> thresholds way down to work around various niche problems with
> dirty page buildup.
>
> We also have small extra accounting overhead for cases where we've
> stacked layers to so the lower layers don't dirty throttle before
> the higher layer. If the lower layer throttles first, then the
> higher layer can't clean pages and we can deadlock.
>
> Those are the sorts of subtle, niche situations where I worry that
> the subtle "throttle first, write second" change could manifest...
Well, I'd think about the change more as "write first, throttle on next
write" because balance_dirty_pages_ratelimited() throttles based on the
number of pages dirtied up to the moment it is called. So the first invocation
of balance_dirty_pages_ratelimited() will not do anything because
current->nr_dirtied will be zero. So effectively we always let the process
run longer than before the change before we throttle it. But the number of
pages dirtied before we throttle should be the same in both cases.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 46+ messages in thread
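A very reduced illustration of the ratelimiting Jan describes (a hypothetical
simplification, not the real balance_dirty_pages_ratelimited(); the
throttle_this_task() helper is a made-up stand-in for balance_dirty_pages()):

	void sketch_ratelimited(void)
	{
		/*
		 * current->nr_dirtied counts the pages this task has dirtied
		 * since it was last throttled; it is 0 on the first call, so
		 * that call never throttles regardless of where it is placed
		 * in the write loop.
		 */
		if (current->nr_dirtied < current->nr_dirtied_pause)
			return;

		/*
		 * Only after enough pages have been dirtied does the task
		 * enter the balancing path and possibly sleep (or see
		 * -EAGAIN in the async case).
		 */
		throttle_this_task();
	}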
* Re: [PATCH v6 05/16] iomap: Add async buffered write support
2022-05-26 17:38 ` [PATCH v6 05/16] iomap: Add async buffered write support Stefan Roesch
2022-05-26 18:42 ` Darrick J. Wong
2022-05-26 22:37 ` Dave Chinner
@ 2022-05-31 6:58 ` Christoph Hellwig
2 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2022-05-31 6:58 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
I'll leave the throttling algorithm discussion to the experts, but
from the pure code POV this looks good to me:
Reviewed-by: Christoph Hellwig <[email protected]>
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v6 06/16] fs: Add check for async buffered writes to generic_write_checks
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (4 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 05/16] iomap: Add async buffered write support Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 6:59 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 07/16] fs: add __remove_file_privs() with flags parameter Stefan Roesch
` (10 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This introduces the flag FMODE_BUF_WASYNC. A filesystem that supports async
buffered writes can set this flag. It also modifies the check in
generic_write_checks() to take async buffered writes into consideration.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/read_write.c | 4 +++-
include/linux/fs.h | 3 +++
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/read_write.c b/fs/read_write.c
index e643aec2b0ef..175d98713b9a 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -1633,7 +1633,9 @@ int generic_write_checks_count(struct kiocb *iocb, loff_t *count)
if (iocb->ki_flags & IOCB_APPEND)
iocb->ki_pos = i_size_read(inode);
- if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT))
+ if ((iocb->ki_flags & IOCB_NOWAIT) &&
+ !((iocb->ki_flags & IOCB_DIRECT) ||
+ (file->f_mode & FMODE_BUF_WASYNC)))
return -EINVAL;
return generic_write_check_limits(iocb->ki_filp, iocb->ki_pos, count);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 87b5af1d9fbe..407ad5004f54 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -177,6 +177,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
/* File supports async buffered reads */
#define FMODE_BUF_RASYNC ((__force fmode_t)0x40000000)
+/* File supports async nowait buffered writes */
+#define FMODE_BUF_WASYNC ((__force fmode_t)0x80000000)
+
/*
* Attribute flags. These should be or-ed together to figure out what
* has been changed!
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
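From user space, the effect of the relaxed check can be illustrated with a
nowait buffered write. This is an illustrative sketch only; it assumes a
filesystem that sets FMODE_BUF_WASYNC (such as XFS once the rest of this
series is applied), and the file name and buffer size are arbitrary:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/uio.h>

	int main(void)
	{
		char buf[4096];
		struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
		/* buffered write: no O_DIRECT */
		int fd = open("testfile", O_WRONLY | O_CREAT, 0644);

		if (fd < 0)
			return 1;
		memset(buf, 'a', sizeof(buf));

		/* Previously a nowait buffered write was rejected (e.g. -EINVAL
		 * from generic_write_checks() or -EOPNOTSUPP from the
		 * filesystem); with FMODE_BUF_WASYNC it can now succeed or
		 * return -EAGAIN depending on the state of the page cache. */
		ssize_t ret = pwritev2(fd, &iov, 1, 0, RWF_NOWAIT);
		printf("pwritev2(RWF_NOWAIT) = %zd\n", ret);
		return 0;
	}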
* [PATCH v6 07/16] fs: add __remove_file_privs() with flags parameter
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (5 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 06/16] fs: Add check for async buffered writes to generic_write_checks Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:00 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time Stefan Roesch
` (9 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This adds the function __file_remove_privs(), which allows the caller to
pass in the kiocb flags parameter.
No intended functional changes in this patch.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/inode.c | 57 +++++++++++++++++++++++++++++++++++-------------------
1 file changed, 37 insertions(+), 20 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index 9d9b422504d1..ac1cf5aa78c8 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2010,36 +2010,43 @@ static int __remove_privs(struct user_namespace *mnt_userns,
return notify_change(mnt_userns, dentry, &newattrs, NULL);
}
-/*
- * Remove special file priviledges (suid, capabilities) when file is written
- * to or truncated.
- */
-int file_remove_privs(struct file *file)
+static int __file_remove_privs(struct file *file, unsigned int flags)
{
struct dentry *dentry = file_dentry(file);
struct inode *inode = file_inode(file);
+ int error;
int kill;
- int error = 0;
- /*
- * Fast path for nothing security related.
- * As well for non-regular files, e.g. blkdev inodes.
- * For example, blkdev_write_iter() might get here
- * trying to remove privs which it is not allowed to.
- */
if (IS_NOSEC(inode) || !S_ISREG(inode->i_mode))
return 0;
kill = dentry_needs_remove_privs(dentry);
- if (kill < 0)
+ if (kill <= 0)
return kill;
- if (kill)
- error = __remove_privs(file_mnt_user_ns(file), dentry, kill);
+
+ if (flags & IOCB_NOWAIT)
+ return -EAGAIN;
+
+ error = __remove_privs(file_mnt_user_ns(file), dentry, kill);
if (!error)
inode_has_no_xattr(inode);
return error;
}
+
+/**
+ * file_remove_privs - remove special file privileges (suid, capabilities)
+ * @file: file to remove privileges from
+ *
+ * When file is modified by a write or truncation ensure that special
+ * file privileges are removed.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int file_remove_privs(struct file *file)
+{
+ return __file_remove_privs(file, 0);
+}
EXPORT_SYMBOL(file_remove_privs);
/**
@@ -2090,18 +2097,28 @@ int file_update_time(struct file *file)
}
EXPORT_SYMBOL(file_update_time);
-/* Caller must hold the file's inode lock */
+/**
+ * file_modified - handle mandated vfs changes when modifying a file
+ * @file: file that was modified
+ *
+ * When file has been modified ensure that special
+ * file privileges are removed and time settings are updated.
+ *
+ * Context: Caller must hold the file's inode lock.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
int file_modified(struct file *file)
{
- int err;
+ int ret;
/*
* Clear the security bits if the process is not being run by root.
* This keeps people from modifying setuid and setgid binaries.
*/
- err = file_remove_privs(file);
- if (err)
- return err;
+ ret = __file_remove_privs(file, 0);
+ if (ret)
+ return ret;
if (unlikely(file->f_mode & FMODE_NOCMTIME))
return 0;
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (6 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 07/16] fs: add __remove_file_privs() with flags parameter Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:01 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 09/16] fs: Add async write file modification handling Stefan Roesch
` (8 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This splits off the functions inode_needs_update_time() and
__file_update_time() from the function file_update_time().
This is required to support async buffered writes.
No intended functional changes in this patch.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/inode.c | 76 +++++++++++++++++++++++++++++++++++-------------------
1 file changed, 50 insertions(+), 26 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index ac1cf5aa78c8..c44573a32c6a 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2049,35 +2049,18 @@ int file_remove_privs(struct file *file)
}
EXPORT_SYMBOL(file_remove_privs);
-/**
- * file_update_time - update mtime and ctime time
- * @file: file accessed
- *
- * Update the mtime and ctime members of an inode and mark the inode
- * for writeback. Note that this function is meant exclusively for
- * usage in the file write path of filesystems, and filesystems may
- * choose to explicitly ignore update via this function with the
- * S_NOCMTIME inode flag, e.g. for network filesystem where these
- * timestamps are handled by the server. This can return an error for
- * file systems who need to allocate space in order to update an inode.
- */
-
-int file_update_time(struct file *file)
+static int inode_needs_update_time(struct inode *inode, struct timespec64 *now)
{
- struct inode *inode = file_inode(file);
- struct timespec64 now;
int sync_it = 0;
- int ret;
/* First try to exhaust all avenues to not sync */
if (IS_NOCMTIME(inode))
return 0;
- now = current_time(inode);
- if (!timespec64_equal(&inode->i_mtime, &now))
+ if (!timespec64_equal(&inode->i_mtime, now))
sync_it = S_MTIME;
- if (!timespec64_equal(&inode->i_ctime, &now))
+ if (!timespec64_equal(&inode->i_ctime, now))
sync_it |= S_CTIME;
if (IS_I_VERSION(inode) && inode_iversion_need_inc(inode))
@@ -2086,15 +2069,50 @@ int file_update_time(struct file *file)
if (!sync_it)
return 0;
- /* Finally allowed to write? Takes lock. */
- if (__mnt_want_write_file(file))
- return 0;
+ return sync_it;
+}
+
+static int __file_update_time(struct file *file, struct timespec64 *now,
+ int sync_mode)
+{
+ int ret = 0;
+ struct inode *inode = file_inode(file);
- ret = inode_update_time(inode, &now, sync_it);
- __mnt_drop_write_file(file);
+ /* try to update time settings */
+ if (!__mnt_want_write_file(file)) {
+ ret = inode_update_time(inode, now, sync_mode);
+ __mnt_drop_write_file(file);
+ }
return ret;
}
+
+ /**
+ * file_update_time - update mtime and ctime time
+ * @file: file accessed
+ *
+ * Update the mtime and ctime members of an inode and mark the inode for
+ * writeback. Note that this function is meant exclusively for usage in
+ * the file write path of filesystems, and filesystems may choose to
+ * explicitly ignore updates via this function with the _NOCMTIME inode
+ * flag, e.g. for network filesystem where these imestamps are handled
+ * by the server. This can return an error for file systems who need to
+ * allocate space in order to update an inode.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int file_update_time(struct file *file)
+{
+ int ret;
+ struct inode *inode = file_inode(file);
+ struct timespec64 now = current_time(inode);
+
+ ret = inode_needs_update_time(inode, &now);
+ if (ret <= 0)
+ return ret;
+
+ return __file_update_time(file, &now, ret);
+}
EXPORT_SYMBOL(file_update_time);
/**
@@ -2111,6 +2129,8 @@ EXPORT_SYMBOL(file_update_time);
int file_modified(struct file *file)
{
int ret;
+ struct inode *inode = file_inode(file);
+ struct timespec64 now = current_time(inode);
/*
* Clear the security bits if the process is not being run by root.
@@ -2123,7 +2143,11 @@ int file_modified(struct file *file)
if (unlikely(file->f_mode & FMODE_NOCMTIME))
return 0;
- return file_update_time(file);
+ ret = inode_needs_update_time(inode, &now);
+ if (ret <= 0)
+ return ret;
+
+ return __file_update_time(file, &now, ret);
}
EXPORT_SYMBOL(file_modified);
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time
2022-05-26 17:38 ` [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time Stefan Roesch
@ 2022-05-31 7:01 ` Christoph Hellwig
2022-05-31 19:02 ` Stefan Roesch
0 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2022-05-31 7:01 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
This patch itself looks fine, but I think with how the next patch goes
we don't even need it anymore, do we?
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time
2022-05-31 7:01 ` Christoph Hellwig
@ 2022-05-31 19:02 ` Stefan Roesch
0 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-31 19:02 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack
On 5/31/22 12:01 AM, Christoph Hellwig wrote:
> This patch itself looks fine, but I think with how the next patch goes
> we don't even need it anymore, do we?
>
We still need the patch:
- I don't want to set the pending time flag in all calls to update_time
(for "fs: Optimization for concurrent file time updates")
I only want to set the flag in the file_modified() case, and only if the time
needs to be updated.
Also, setting and clearing the pending time flag in the same procedure
makes it easier to understand.
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v6 09/16] fs: Add async write file modification handling.
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (7 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 08/16] fs: Split off inode_needs_update_time and __file_update_time Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:01 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 10/16] fs: Optimization for concurrent file time updates Stefan Roesch
` (7 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This adds a kiocb_modified() function, backed by a new file_modified_flags()
helper, that returns -EAGAIN if the request either requires privileges to be
removed or needs the file modification time to be updated. This is required
for async buffered writes, so the request gets handled in the io worker of
io-uring.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/inode.c | 43 +++++++++++++++++++++++++++++++++++++++++--
include/linux/fs.h | 1 +
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index c44573a32c6a..4503bed063e7 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2116,17 +2116,21 @@ int file_update_time(struct file *file)
EXPORT_SYMBOL(file_update_time);
/**
- * file_modified - handle mandated vfs changes when modifying a file
+ * file_modified_flags - handle mandated vfs changes when modifying a file
* @file: file that was modified
+ * @flags: kiocb flags
*
* When file has been modified ensure that special
* file privileges are removed and time settings are updated.
*
+ * If IOCB_NOWAIT is set, special file privileges will not be removed and
+ * time settings will not be updated. It will return -EAGAIN.
+ *
* Context: Caller must hold the file's inode lock.
*
* Return: 0 on success, negative errno on failure.
*/
-int file_modified(struct file *file)
+static int file_modified_flags(struct file *file, int flags)
{
int ret;
struct inode *inode = file_inode(file);
@@ -2146,11 +2150,46 @@ int file_modified(struct file *file)
ret = inode_needs_update_time(inode, &now);
if (ret <= 0)
return ret;
+ if (flags & IOCB_NOWAIT)
+ return -EAGAIN;
return __file_update_time(file, &now, ret);
}
+
+/**
+ * file_modified - handle mandated vfs changes when modifying a file
+ * @file: file that was modified
+ *
+ * When file has been modified ensure that special
+ * file privileges are removed and time settings are updated.
+ *
+ * Context: Caller must hold the file's inode lock.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int file_modified(struct file *file)
+{
+ return file_modified_flags(file, 0);
+}
EXPORT_SYMBOL(file_modified);
+/**
+ * kiocb_modified - handle mandated vfs changes when modifying a file
+ * @iocb: iocb that was modified
+ *
+ * When file has been modified ensure that special
+ * file privileges are removed and time settings are updated.
+ *
+ * Context: Caller must hold the file's inode lock.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int kiocb_modified(struct kiocb *iocb)
+{
+ return file_modified_flags(iocb->ki_filp, iocb->ki_flags);
+}
+EXPORT_SYMBOL_GPL(kiocb_modified);
+
int inode_needs_sync(struct inode *inode)
{
if (IS_SYNC(inode))
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 407ad5004f54..2d9b3afcb4a5 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2384,6 +2384,7 @@ static inline void file_accessed(struct file *file)
}
extern int file_modified(struct file *file);
+int kiocb_modified(struct kiocb *iocb);
int sync_inode_metadata(struct inode *inode, int wait);
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
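A minimal sketch of how a filesystem's write-checks path might use the new
helper (illustrative only, with a hypothetical function name; patch 15 wires
this up for XFS):

	static int example_write_checks(struct kiocb *iocb)
	{
		int error;

		/*
		 * With IOCB_NOWAIT set, kiocb_modified() returns -EAGAIN
		 * instead of removing privileges or updating timestamps, so
		 * the caller can bail out and let io_uring retry the request
		 * from an io-worker where blocking is allowed.
		 */
		error = kiocb_modified(iocb);
		if (error)
			return error;

		/* ... remaining filesystem-specific write checks ... */
		return 0;
	}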
* [PATCH v6 10/16] fs: Optimization for concurrent file time updates.
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (8 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 09/16] fs: Add async write file modification handling Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 11/16] io_uring: Add support for async buffered writes Stefan Roesch
` (6 subsequent siblings)
16 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This introduces the S_PENDING_TIME flag. If an async buffered write
needs to update the time, it cannot be processed in the fast path of
io-uring. When a time update is pending, this flag is set for async
buffered writes. Other concurrent async buffered writes to the same
file then do not need to wait while the time update is pending.
This reduces the number of async buffered writes that need to get punted
to the io-workers in io-uring.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/inode.c | 11 +++++++++--
include/linux/fs.h | 3 +++
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index 4503bed063e7..7185d860d423 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2150,10 +2150,17 @@ static int file_modified_flags(struct file *file, int flags)
ret = inode_needs_update_time(inode, &now);
if (ret <= 0)
return ret;
- if (flags & IOCB_NOWAIT)
+ if (flags & IOCB_NOWAIT) {
+ if (IS_PENDING_TIME(inode))
+ return 0;
+
+ inode_set_flags(inode, S_PENDING_TIME, S_PENDING_TIME);
return -EAGAIN;
+ }
- return __file_update_time(file, &now, ret);
+ ret = __file_update_time(file, &now, ret);
+ inode_set_flags(inode, 0, S_PENDING_TIME);
+ return ret;
}
/**
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 2d9b3afcb4a5..5924c90eab1d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2143,6 +2143,8 @@ struct super_operations {
#define S_CASEFOLD (1 << 15) /* Casefolded file */
#define S_VERITY (1 << 16) /* Verity file (using fs/verity/) */
#define S_KERNEL_FILE (1 << 17) /* File is in use by the kernel (eg. fs/cachefiles) */
+#define S_PENDING_TIME (1 << 18) /* File update time is pending */
+
/*
* Note that nosuid etc flags are inode-specific: setting some file-system
@@ -2185,6 +2187,7 @@ static inline bool sb_rdonly(const struct super_block *sb) { return sb->s_flags
#define IS_ENCRYPTED(inode) ((inode)->i_flags & S_ENCRYPTED)
#define IS_CASEFOLDED(inode) ((inode)->i_flags & S_CASEFOLD)
#define IS_VERITY(inode) ((inode)->i_flags & S_VERITY)
+#define IS_PENDING_TIME(inode) ((inode)->i_flags & S_PENDING_TIME)
#define IS_WHITEOUT(inode) (S_ISCHR(inode->i_mode) && \
(inode)->i_rdev == WHITEOUT_DEV)
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
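The fast path added above can be summarized in a small sketch (hypothetical
function name, mirroring the IOCB_NOWAIT hunk in this patch):

	static int sketch_nowait_time_update(struct inode *inode)
	{
		/*
		 * Another async write already punted a request to perform the
		 * time update, so this write can stay on the fast path.
		 */
		if (IS_PENDING_TIME(inode))
			return 0;

		/*
		 * First async write that notices the stale timestamps: mark
		 * the update as pending and punt to the io-worker, which will
		 * call __file_update_time() and then clear S_PENDING_TIME.
		 */
		inode_set_flags(inode, S_PENDING_TIME, S_PENDING_TIME);
		return -EAGAIN;
	}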
* [PATCH v6 11/16] io_uring: Add support for async buffered writes
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (9 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 10/16] fs: Optimization for concurrent file time updates Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 12/16] io_uring: Add tracepoint for short writes Stefan Roesch
` (5 subsequent siblings)
16 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This enables async buffered writes in io_uring for the filesystems that
support them. Buffered writes are enabled for blocks that are already in
the page cache or that can be acquired with noio.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/io_uring.c | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9f1c682d7caf..c0771e215669 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4257,7 +4257,7 @@ static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
return -EINVAL;
}
-static bool need_read_all(struct io_kiocb *req)
+static bool need_complete_io(struct io_kiocb *req)
{
return req->flags & REQ_F_ISREG ||
S_ISBLK(file_inode(req->file)->i_mode);
@@ -4386,7 +4386,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
} else if (ret == -EIOCBQUEUED) {
goto out_free;
} else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
- (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
+ (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) {
/* read all, failed, already did sync or don't want to retry */
goto done;
}
@@ -4482,9 +4482,10 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(!io_file_supports_nowait(req)))
goto copy_iov;
- /* file path doesn't support NOWAIT for non-direct_IO */
- if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
- (req->flags & REQ_F_ISREG))
+ /* File path supports NOWAIT for non-direct_IO only for block devices. */
+ if (!(kiocb->ki_flags & IOCB_DIRECT) &&
+ !(kiocb->ki_filp->f_mode & FMODE_BUF_WASYNC) &&
+ (req->flags & REQ_F_ISREG))
goto copy_iov;
kiocb->ki_flags |= IOCB_NOWAIT;
@@ -4538,6 +4539,24 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
/* IOPOLL retry should happen for io-wq threads */
if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
goto copy_iov;
+
+ if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) {
+ struct io_async_rw *rw;
+
+ /* This is a partial write. The file pos has already been
+ * updated, setup the async struct to complete the request
+ * in the worker. Also update bytes_done to account for
+ * the bytes already written.
+ */
+ iov_iter_save_state(&s->iter, &s->iter_state);
+ ret = io_setup_async_rw(req, iovec, s, true);
+
+ rw = req->async_data;
+ if (rw)
+ rw->bytes_done += ret2;
+
+ return ret ? ret : -EAGAIN;
+ }
done:
kiocb_done(req, ret2, issue_flags);
} else {
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v6 12/16] io_uring: Add tracepoint for short writes
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (10 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 11/16] io_uring: Add support for async buffered writes Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-26 17:38 ` [PATCH v6 13/16] xfs: Specify lockmode when calling xfs_ilock_for_iomap() Stefan Roesch
` (4 subsequent siblings)
16 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This adds the io_uring_short_write tracepoint to io_uring. A short write
occurs when not all pages required for a write are in the page cache and
the async buffered write has to return -EAGAIN.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/io_uring.c | 3 +++
include/trace/events/io_uring.h | 25 +++++++++++++++++++++++++
2 files changed, 28 insertions(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c0771e215669..9ab68138f442 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4543,6 +4543,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) {
struct io_async_rw *rw;
+ trace_io_uring_short_write(req->ctx, kiocb->ki_pos - ret2,
+ req->cqe.res, ret2);
+
/* This is a partial write. The file pos has already been
* updated, setup the async struct to complete the request
* in the worker. Also update bytes_done to account for
diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
index 66fcc5a1a5b1..25df513660cc 100644
--- a/include/trace/events/io_uring.h
+++ b/include/trace/events/io_uring.h
@@ -600,6 +600,31 @@ TRACE_EVENT(io_uring_cqe_overflow,
__entry->cflags, __entry->ocqe)
);
+TRACE_EVENT(io_uring_short_write,
+
+ TP_PROTO(void *ctx, u64 fpos, u64 wanted, u64 got),
+
+ TP_ARGS(ctx, fpos, wanted, got),
+
+ TP_STRUCT__entry(
+ __field(void *, ctx)
+ __field(u64, fpos)
+ __field(u64, wanted)
+ __field(u64, got)
+ ),
+
+ TP_fast_assign(
+ __entry->ctx = ctx;
+ __entry->fpos = fpos;
+ __entry->wanted = wanted;
+ __entry->got = got;
+ ),
+
+ TP_printk("ring %p, fpos %lld, wanted %lld, got %lld",
+ __entry->ctx, __entry->fpos,
+ __entry->wanted, __entry->got)
+);
+
#endif /* _TRACE_IO_URING_H */
/* This part must be outside protection */
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v6 13/16] xfs: Specify lockmode when calling xfs_ilock_for_iomap()
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (11 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 12/16] io_uring: Add tracepoint for short writes Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:03 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb() Stefan Roesch
` (3 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This patch changes the helper function xfs_ilock_for_iomap such that the
lock mode must be passed in.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/xfs/xfs_iomap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index e552ce541ec2..3aa60e53a181 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -659,7 +659,7 @@ xfs_ilock_for_iomap(
unsigned flags,
unsigned *lockmode)
{
- unsigned mode = XFS_ILOCK_SHARED;
+ unsigned int mode = *lockmode;
bool is_write = flags & (IOMAP_WRITE | IOMAP_ZERO);
/*
@@ -737,7 +737,7 @@ xfs_direct_write_iomap_begin(
int nimaps = 1, error = 0;
bool shared = false;
u16 iomap_flags = 0;
- unsigned lockmode;
+ unsigned int lockmode = XFS_ILOCK_SHARED;
ASSERT(flags & (IOMAP_WRITE | IOMAP_ZERO));
@@ -1167,7 +1167,7 @@ xfs_read_iomap_begin(
xfs_fileoff_t end_fsb = xfs_iomap_end_fsb(mp, offset, length);
int nimaps = 1, error = 0;
bool shared = false;
- unsigned lockmode;
+ unsigned int lockmode = XFS_ILOCK_SHARED;
ASSERT(!(flags & (IOMAP_WRITE | IOMAP_ZERO)));
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb()
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (12 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 13/16] xfs: Specify lockmode when calling xfs_ilock_for_iomap() Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:04 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 15/16] xfs: Add async buffered write support Stefan Roesch
` (2 subsequent siblings)
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This changes the function signature of the function xfs_ilock_iocb():
- the parameter iocb is replaced with ip, passing in an xfs_inode
- the parameter iocb_flags is added to be able to pass in the iocb flags
This allows calling the function from xfs_file_buffered_writes.
All the callers are changed accordingly.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/xfs/xfs_file.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 5bddb1e9e0b3..50dea07f5e56 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -190,14 +190,13 @@ xfs_file_fsync(
return error;
}
-static int
+static inline int
xfs_ilock_iocb(
- struct kiocb *iocb,
+ struct xfs_inode *ip,
+ int iocb_flags,
unsigned int lock_mode)
{
- struct xfs_inode *ip = XFS_I(file_inode(iocb->ki_filp));
-
- if (iocb->ki_flags & IOCB_NOWAIT) {
+ if (iocb_flags & IOCB_NOWAIT) {
if (!xfs_ilock_nowait(ip, lock_mode))
return -EAGAIN;
} else {
@@ -222,7 +221,7 @@ xfs_file_dio_read(
file_accessed(iocb->ki_filp);
- ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, XFS_IOLOCK_SHARED);
if (ret)
return ret;
ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL, 0, 0);
@@ -244,7 +243,7 @@ xfs_file_dax_read(
if (!iov_iter_count(to))
return 0; /* skip atime */
- ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, XFS_IOLOCK_SHARED);
if (ret)
return ret;
ret = dax_iomap_rw(iocb, to, &xfs_read_iomap_ops);
@@ -264,7 +263,7 @@ xfs_file_buffered_read(
trace_xfs_file_buffered_read(iocb, to);
- ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, XFS_IOLOCK_SHARED);
if (ret)
return ret;
ret = generic_file_read_iter(iocb, to);
@@ -343,7 +342,7 @@ xfs_file_write_checks(
if (*iolock == XFS_IOLOCK_SHARED && !IS_NOSEC(inode)) {
xfs_iunlock(ip, *iolock);
*iolock = XFS_IOLOCK_EXCL;
- error = xfs_ilock_iocb(iocb, *iolock);
+ error = xfs_ilock_iocb(ip, iocb->ki_flags, *iolock);
if (error) {
*iolock = 0;
return error;
@@ -516,7 +515,7 @@ xfs_file_dio_write_aligned(
int iolock = XFS_IOLOCK_SHARED;
ssize_t ret;
- ret = xfs_ilock_iocb(iocb, iolock);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, iolock);
if (ret)
return ret;
ret = xfs_file_write_checks(iocb, from, &iolock);
@@ -583,7 +582,7 @@ xfs_file_dio_write_unaligned(
flags = IOMAP_DIO_FORCE_WAIT;
}
- ret = xfs_ilock_iocb(iocb, iolock);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, iolock);
if (ret)
return ret;
@@ -659,7 +658,7 @@ xfs_file_dax_write(
ssize_t ret, error = 0;
loff_t pos;
- ret = xfs_ilock_iocb(iocb, iolock);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, iolock);
if (ret)
return ret;
ret = xfs_file_write_checks(iocb, from, &iolock);
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb()
2022-05-26 17:38 ` [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb() Stefan Roesch
@ 2022-05-31 7:04 ` Christoph Hellwig
2022-05-31 19:15 ` Stefan Roesch
0 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2022-05-31 7:04 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:38AM -0700, Stefan Roesch wrote:
> This changes the function signature of the function xfs_ilock_iocb():
> - the parameter iocb is replaced with ip, passing in an xfs_inode
> - the parameter iocb_flags is added to be able to pass in the iocb flags
>
> This allows to call the function from xfs_file_buffered_writes.
xfs_file_buffered_write? But even that already has the iocb, so I'm
not sure why we need that change to start with.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb()
2022-05-31 7:04 ` Christoph Hellwig
@ 2022-05-31 19:15 ` Stefan Roesch
2022-06-01 5:26 ` Christoph Hellwig
0 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-31 19:15 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack
On 5/31/22 12:04 AM, Christoph Hellwig wrote:
> On Thu, May 26, 2022 at 10:38:38AM -0700, Stefan Roesch wrote:
>> This changes the function signature of the function xfs_ilock_iocb():
>> - the parameter iocb is replaced with ip, passing in an xfs_inode
>> - the parameter iocb_flags is added to be able to pass in the iocb flags
>>
>> This allows to call the function from xfs_file_buffered_writes.
>
> xfs_file_buffered_write? But even that already has the iocb, so I'm
> not sure why we need that change to start with.
Yes xfs_file_buffered_write().
The problem is that xfs_ilock_iocb() uses iocb->ki_filp->f_inode,
but xfs_file_buffered_write() uses iocb->ki_filp->f_mapping->host.
This requires passing in the xfs_inode *.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb()
2022-05-31 19:15 ` Stefan Roesch
@ 2022-06-01 5:26 ` Christoph Hellwig
2022-06-01 17:15 ` Stefan Roesch
0 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2022-06-01 5:26 UTC (permalink / raw)
To: Stefan Roesch
Cc: Christoph Hellwig, io-uring, kernel-team, linux-mm, linux-xfs,
linux-fsdevel, david, jack
On Tue, May 31, 2022 at 12:15:19PM -0700, Stefan Roesch wrote:
> The problem is that xfs_iolock_iocb uses: iocb->ki_filp->f_inode,
> but xfs_file_buffered_write: iocb->ki_ki_filp->f_mapping->host
>
> This requires to pass in the xfs_inode *.
Both must be the same. The indirection only matters for device files
(and coda).
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb()
2022-06-01 5:26 ` Christoph Hellwig
@ 2022-06-01 17:15 ` Stefan Roesch
0 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-06-01 17:15 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack
On 5/31/22 10:26 PM, Christoph Hellwig wrote:
> On Tue, May 31, 2022 at 12:15:19PM -0700, Stefan Roesch wrote:
>> The problem is that xfs_iolock_iocb uses: iocb->ki_filp->f_inode,
>> but xfs_file_buffered_write: iocb->ki_ki_filp->f_mapping->host
>>
>> This requires to pass in the xfs_inode *.
>
> Both must be the same. The indirection only matters for device files
> (and coda).
I verified it. The patch is no longer needed. I will remove the patch.
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v6 15/16] xfs: Add async buffered write support
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (13 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 14/16] xfs: Change function signature of xfs_ilock_iocb() Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:05 ` Christoph Hellwig
2022-05-26 17:38 ` [PATCH v6 16/16] xfs: Enable " Stefan Roesch
2022-05-26 18:12 ` [PATCH v6 00/16] io-uring/xfs: support async buffered writes Matthew Wilcox
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This adds the async buffered write support to XFS. For async buffered
write requests, the request will return -EAGAIN if the ilock cannot be
obtained immediately.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/xfs/xfs_file.c | 9 ++++-----
fs/xfs/xfs_iomap.c | 5 ++++-
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 50dea07f5e56..e9d615f4c209 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -409,7 +409,7 @@ xfs_file_write_checks(
spin_unlock(&ip->i_flags_lock);
out:
- return file_modified(file);
+ return kiocb_modified(iocb);
}
static int
@@ -701,12 +701,11 @@ xfs_file_buffered_write(
bool cleared_space = false;
int iolock;
- if (iocb->ki_flags & IOCB_NOWAIT)
- return -EOPNOTSUPP;
-
write_retry:
iolock = XFS_IOLOCK_EXCL;
- xfs_ilock(ip, iolock);
+ ret = xfs_ilock_iocb(ip, iocb->ki_flags, iolock);
+ if (ret)
+ return ret;
ret = xfs_file_write_checks(iocb, from, &iolock);
if (ret)
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 3aa60e53a181..af8c50140f0f 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -881,6 +881,7 @@ xfs_buffered_write_iomap_begin(
bool eof = false, cow_eof = false, shared = false;
int allocfork = XFS_DATA_FORK;
int error = 0;
+ unsigned int lockmode = XFS_ILOCK_EXCL;
if (xfs_is_shutdown(mp))
return -EIO;
@@ -892,7 +893,9 @@ xfs_buffered_write_iomap_begin(
ASSERT(!XFS_IS_REALTIME_INODE(ip));
- xfs_ilock(ip, XFS_ILOCK_EXCL);
+ error = xfs_ilock_for_iomap(ip, flags, &lockmode);
+ if (error)
+ return error;
if (XFS_IS_CORRUPT(mp, !xfs_ifork_has_extents(&ip->i_df)) ||
XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BMAPIFORMAT)) {
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v6 16/16] xfs: Enable async buffered write support
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (14 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 15/16] xfs: Add async buffered write support Stefan Roesch
@ 2022-05-26 17:38 ` Stefan Roesch
2022-05-31 7:05 ` Christoph Hellwig
2022-05-26 18:12 ` [PATCH v6 00/16] io-uring/xfs: support async buffered writes Matthew Wilcox
16 siblings, 1 reply; 46+ messages in thread
From: Stefan Roesch @ 2022-05-26 17:38 UTC (permalink / raw)
To: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel
Cc: shr, david, jack, hch
This turns on the async buffered write support for XFS.
Signed-off-by: Stefan Roesch <[email protected]>
---
fs/xfs/xfs_file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index e9d615f4c209..2297770364b0 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1169,7 +1169,7 @@ xfs_file_open(
return -EFBIG;
if (xfs_is_shutdown(XFS_M(inode->i_sb)))
return -EIO;
- file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC;
+ file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC | FMODE_BUF_WASYNC;
return 0;
}
--
2.30.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
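With the full series applied, a plain buffered write submitted through
io_uring on XFS can complete inline instead of always being punted to an
io-wq worker. A minimal user-space illustration (assumes liburing is
installed; file name and buffer size are arbitrary):

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <liburing.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;
		char buf[4096];
		int fd;

		memset(buf, 'a', sizeof(buf));
		/* buffered write path: no O_DIRECT */
		fd = open("testfile", O_WRONLY | O_CREAT, 0644);
		if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
			return 1;

		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);

		io_uring_submit(&ring);
		if (io_uring_wait_cqe(&ring, &cqe) == 0) {
			printf("write returned %d\n", cqe->res);
			io_uring_cqe_seen(&ring, cqe);
		}
		io_uring_queue_exit(&ring);
		return 0;
	}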
* Re: [PATCH v6 16/16] xfs: Enable async buffered write support
2022-05-26 17:38 ` [PATCH v6 16/16] xfs: Enable " Stefan Roesch
@ 2022-05-31 7:05 ` Christoph Hellwig
2022-05-31 19:18 ` Stefan Roesch
0 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2022-05-31 7:05 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:40AM -0700, Stefan Roesch wrote:
> This turns on the async buffered write support for XFS.
I think this belongs into the previous patch, but otherwise this looks
obviously fine.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 16/16] xfs: Enable async buffered write support
2022-05-31 7:05 ` Christoph Hellwig
@ 2022-05-31 19:18 ` Stefan Roesch
0 siblings, 0 replies; 46+ messages in thread
From: Stefan Roesch @ 2022-05-31 19:18 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack
On 5/31/22 12:05 AM, Christoph Hellwig wrote:
> On Thu, May 26, 2022 at 10:38:40AM -0700, Stefan Roesch wrote:
>> This turns on the async buffered write support for XFS.
>
> I think this belongs into the previous patch, but otherwise this looks
> obviously fine.
Merged it with the previous patch.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v6 00/16] io-uring/xfs: support async buffered writes
2022-05-26 17:38 [PATCH v6 00/16] io-uring/xfs: support async buffered writes Stefan Roesch
` (15 preceding siblings ...)
2022-05-26 17:38 ` [PATCH v6 16/16] xfs: Enable " Stefan Roesch
@ 2022-05-26 18:12 ` Matthew Wilcox
16 siblings, 0 replies; 46+ messages in thread
From: Matthew Wilcox @ 2022-05-26 18:12 UTC (permalink / raw)
To: Stefan Roesch
Cc: io-uring, kernel-team, linux-mm, linux-xfs, linux-fsdevel, david,
jack, hch
On Thu, May 26, 2022 at 10:38:24AM -0700, Stefan Roesch wrote:
> Changes:
> V6:
> - pass in iter->flags to calls in iomap_page_create()
Dude, calm down. It's been less than 24 hours since v5. It's the
middle of the merge window and you have to give people more time to
review your patches. Don't post another version for a week.
^ permalink raw reply [flat|nested] 46+ messages in thread