* [PATCH v4 1/8] io_uring/cmd: expose iowq to cmds
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 2/8] io_uring/cmd: give inline space in request " Pavel Begunkov
` (9 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
When an io_uring request needs a blocking context, we offload it to
io_uring's thread pool, io-wq. A command can get there from ->uring_cmd
by returning -EAGAIN at issue time, but there is no straightforward way
of doing that from an asynchronous callback. Add a helper that
transfers a command to a blocking context.
Note, we do an extra hop via task_work before io_queue_iowq(); that's a
limitation of the current io_uring infrastructure and can likely be
lifted later if it ever becomes a problem.
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/io_uring/cmd.h | 6 ++++++
io_uring/io_uring.c | 11 +++++++++++
io_uring/io_uring.h | 1 +
io_uring/uring_cmd.c | 7 +++++++
4 files changed, 25 insertions(+)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 447fbfd32215..86ceb3383e49 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -48,6 +48,9 @@ void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags);
+/* Execute the request from a blocking context */
+void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
+
#else
static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
@@ -67,6 +70,9 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
}
+static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
+{
+}
#endif
/*
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1aca501efaf6..86cf31902841 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -533,6 +533,17 @@ static void io_queue_iowq(struct io_kiocb *req)
io_queue_linked_timeout(link);
}
+static void io_req_queue_iowq_tw(struct io_kiocb *req, struct io_tw_state *ts)
+{
+ io_queue_iowq(req);
+}
+
+void io_req_queue_iowq(struct io_kiocb *req)
+{
+ req->io_task_work.func = io_req_queue_iowq_tw;
+ io_req_task_work_add(req);
+}
+
static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
{
while (!list_empty(&ctx->defer_list)) {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 65078e641390..9d70b2cf7b1e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -94,6 +94,7 @@ int io_uring_alloc_task_context(struct task_struct *task,
int io_ring_add_registered_file(struct io_uring_task *tctx, struct file *file,
int start, int end);
+void io_req_queue_iowq(struct io_kiocb *req);
int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts);
int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 8391c7c7c1ec..39c3c816ec78 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -277,6 +277,13 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
}
EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
+void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
+{
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+ io_req_queue_iowq(req);
+}
+
static inline int io_uring_cmd_getsockopt(struct socket *sock,
struct io_uring_cmd *cmd,
unsigned int issue_flags)
--
2.45.2
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 2/8] io_uring/cmd: give inline space in request to cmds
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 1/8] io_uring/cmd: expose iowq to cmds Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 3/8] filemap: introduce filemap_invalidate_pages Pavel Begunkov
` (8 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
Some io_uring commands can make use of inline space in the io_kiocb.
There are 32 bytes available in struct io_uring_cmd; expose them to
commands.
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/io_uring/cmd.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 86ceb3383e49..c189d36ad55e 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -23,6 +23,15 @@ static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
return sqe->cmd;
}
+static inline void io_uring_cmd_private_sz_check(size_t cmd_sz)
+{
+ BUILD_BUG_ON(cmd_sz > sizeof_field(struct io_uring_cmd, pdu));
+}
+#define io_uring_cmd_to_pdu(cmd, pdu_type) ( \
+ io_uring_cmd_private_sz_check(sizeof(pdu_type)), \
+ ((pdu_type *)&(cmd)->pdu) \
+)
+
#if defined(CONFIG_IO_URING)
int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd);
--
2.45.2
* [PATCH v4 3/8] filemap: introduce filemap_invalidate_pages
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 1/8] io_uring/cmd: expose iowq to cmds Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 2/8] io_uring/cmd: give inline space in request " Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 4/8] block: introduce blk_validate_byte_range() Pavel Begunkov
` (7 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
kiocb_invalidate_pages() is useful for the write path; however, not
everything is backed by a kiocb, and we want to reuse the function for
the bio-based discard implementation. Extract and reuse a new helper
called filemap_invalidate_pages(), which takes an argument indicating
whether it should be non-blocking, in which case it might return
-EAGAIN.
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/pagemap.h | 2 ++
mm/filemap.c | 17 ++++++++++++-----
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d9c7edb6422b..e39c3a7ce33c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -32,6 +32,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
pgoff_t start, pgoff_t end);
int kiocb_invalidate_pages(struct kiocb *iocb, size_t count);
void kiocb_invalidate_post_direct_write(struct kiocb *iocb, size_t count);
+int filemap_invalidate_pages(struct address_space *mapping,
+ loff_t pos, loff_t end, bool nowait);
int write_inode_now(struct inode *, int sync);
int filemap_fdatawrite(struct address_space *);
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..6843ed4847d4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2712,14 +2712,12 @@ int kiocb_write_and_wait(struct kiocb *iocb, size_t count)
}
EXPORT_SYMBOL_GPL(kiocb_write_and_wait);
-int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
+int filemap_invalidate_pages(struct address_space *mapping,
+ loff_t pos, loff_t end, bool nowait)
{
- struct address_space *mapping = iocb->ki_filp->f_mapping;
- loff_t pos = iocb->ki_pos;
- loff_t end = pos + count - 1;
int ret;
- if (iocb->ki_flags & IOCB_NOWAIT) {
+ if (nowait) {
/* we could block if there are any pages in the range */
if (filemap_range_has_page(mapping, pos, end))
return -EAGAIN;
@@ -2738,6 +2736,15 @@ int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
return invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
end >> PAGE_SHIFT);
}
+
+int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
+{
+ struct address_space *mapping = iocb->ki_filp->f_mapping;
+
+ return filemap_invalidate_pages(mapping, iocb->ki_pos,
+ iocb->ki_pos + count - 1,
+ iocb->ki_flags & IOCB_NOWAIT);
+}
EXPORT_SYMBOL_GPL(kiocb_invalidate_pages);
/**
--
2.45.2
* [PATCH v4 4/8] block: introduce blk_validate_byte_range()
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (2 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 3/8] filemap: introduce filemap_invalidate_pages Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-10 7:55 ` Christoph Hellwig
2024-09-06 22:57 ` [PATCH v4 5/8] block: implement async discard as io_uring cmd Pavel Begunkov
` (6 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
In preparation for further changes, extract a helper function out of
blk_ioctl_discard() that validates whether we can do IO against the
given range of disk byte addresses.
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)
diff --git a/block/ioctl.c b/block/ioctl.c
index e8e4a4190f18..a820f692dd1c 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -92,38 +92,46 @@ static int compat_blkpg_ioctl(struct block_device *bdev,
}
#endif
+static int blk_validate_byte_range(struct block_device *bdev,
+ uint64_t start, uint64_t len)
+{
+ unsigned int bs_mask = bdev_logical_block_size(bdev) - 1;
+ uint64_t end;
+
+ if ((start | len) & bs_mask)
+ return -EINVAL;
+ if (!len)
+ return -EINVAL;
+ if (check_add_overflow(start, len, &end) || end > bdev_nr_bytes(bdev))
+ return -EINVAL;
+
+ return 0;
+}
+
static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
unsigned long arg)
{
- unsigned int bs_mask = bdev_logical_block_size(bdev) - 1;
- uint64_t range[2], start, len, end;
+ uint64_t range[2], start, len;
struct bio *prev = NULL, *bio;
sector_t sector, nr_sects;
struct blk_plug plug;
int err;
- if (!(mode & BLK_OPEN_WRITE))
- return -EBADF;
-
- if (!bdev_max_discard_sectors(bdev))
- return -EOPNOTSUPP;
- if (bdev_read_only(bdev))
- return -EPERM;
-
if (copy_from_user(range, (void __user *)arg, sizeof(range)))
return -EFAULT;
-
start = range[0];
len = range[1];
- if (!len)
- return -EINVAL;
- if ((start | len) & bs_mask)
- return -EINVAL;
+ if (!bdev_max_discard_sectors(bdev))
+ return -EOPNOTSUPP;
- if (check_add_overflow(start, len, &end) ||
- end > bdev_nr_bytes(bdev))
- return -EINVAL;
+ if (!(mode & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
filemap_invalidate_lock(bdev->bd_mapping);
err = truncate_bdev_range(bdev, mode, start, start + len - 1);
--
2.45.2
* Re: [PATCH v4 4/8] block: introduce blk_validate_byte_range()
2024-09-06 22:57 ` [PATCH v4 4/8] block: introduce blk_validate_byte_range() Pavel Begunkov
@ 2024-09-10 7:55 ` Christoph Hellwig
0 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-10 7:55 UTC (permalink / raw)
To: Pavel Begunkov
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
On Fri, Sep 06, 2024 at 11:57:21PM +0100, Pavel Begunkov wrote:
> +static int blk_validate_byte_range(struct block_device *bdev,
> + uint64_t start, uint64_t len)
A little comment explaining what it validates would be useful here.
* [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (3 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 4/8] block: introduce blk_validate_byte_range() Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-10 8:01 ` Christoph Hellwig
2024-09-06 22:57 ` [PATCH v4 6/8] block: implement async write zeroes command Pavel Begunkov
` (5 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
io_uring allows implementing custom file-specific operations via the
fops->uring_cmd callback. Use it to wire up asynchronous discard
commands. Normally, we first attempt a non-blocking issue, and if that
fails we retry from a blocking context by returning -EAGAIN to core
io_uring.
Note, unlike ioctl(BLKDISCARD), which has stronger guarantees against
races, we only make a best-effort attempt to invalidate the page cache,
and it can race with any reads or writes and leave the page cache
stale. These are the same kinds of races we allow for direct writes.
Suggested-by: Conrad Meyer <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/blk-lib.c | 3 +-
block/blk.h | 1 +
block/fops.c | 2 +
block/ioctl.c | 102 ++++++++++++++++++++++++++++++++++++++++
include/linux/bio.h | 2 +
include/uapi/linux/fs.h | 2 +
6 files changed, 111 insertions(+), 1 deletion(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 83eb7761c2bf..c94c67a75f7e 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -10,7 +10,8 @@
#include "blk.h"
-static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
+/* The maximum size of a discard that can be issued from a given sector. */
+sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
{
unsigned int discard_granularity = bdev_discard_granularity(bdev);
sector_t granularity_aligned_sector;
diff --git a/block/blk.h b/block/blk.h
index 32f4e9f630a3..1a1a18d118f7 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -605,6 +605,7 @@ blk_mode_t file_to_blk_mode(struct file *file);
int truncate_bdev_range(struct block_device *bdev, blk_mode_t mode,
loff_t lstart, loff_t lend);
long blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
+int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags);
long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
extern const struct address_space_operations def_blk_aops;
diff --git a/block/fops.c b/block/fops.c
index 9825c1713a49..8154b10b5abf 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -17,6 +17,7 @@
#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/module.h>
+#include <linux/io_uring/cmd.h>
#include "blk.h"
static inline struct inode *bdev_file_inode(struct file *file)
@@ -873,6 +874,7 @@ const struct file_operations def_blk_fops = {
.splice_read = filemap_splice_read,
.splice_write = iter_file_splice_write,
.fallocate = blkdev_fallocate,
+ .uring_cmd = blkdev_uring_cmd,
.fop_flags = FOP_BUFFER_RASYNC,
};
diff --git a/block/ioctl.c b/block/ioctl.c
index a820f692dd1c..19fba8332eee 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -11,6 +11,8 @@
#include <linux/blktrace_api.h>
#include <linux/pr.h>
#include <linux/uaccess.h>
+#include <linux/pagemap.h>
+#include <linux/io_uring/cmd.h>
#include "blk.h"
static int blkpg_do_ioctl(struct block_device *bdev,
@@ -742,3 +744,103 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
return ret;
}
#endif
+
+struct blk_iou_cmd {
+ int res;
+ bool nowait;
+};
+
+static void blk_cmd_complete(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+
+ if (bic->res == -EAGAIN && bic->nowait)
+ io_uring_cmd_issue_blocking(cmd);
+ else
+ io_uring_cmd_done(cmd, bic->res, 0, issue_flags);
+}
+
+static void bio_cmd_bio_end_io(struct bio *bio)
+{
+ struct io_uring_cmd *cmd = bio->bi_private;
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+
+ if (unlikely(bio->bi_status) && !bic->res)
+ bic->res = blk_status_to_errno(bio->bi_status);
+
+ io_uring_cmd_do_in_task_lazy(cmd, blk_cmd_complete);
+ bio_put(bio);
+}
+
+static int blkdev_cmd_discard(struct io_uring_cmd *cmd,
+ struct block_device *bdev,
+ uint64_t start, uint64_t len, bool nowait)
+{
+ gfp_t gfp = nowait ? GFP_NOWAIT : GFP_KERNEL;
+ sector_t sector = start >> SECTOR_SHIFT;
+ sector_t nr_sects = len >> SECTOR_SHIFT;
+ struct bio *prev = NULL, *bio;
+ int err;
+
+ if (!bdev_max_discard_sectors(bdev))
+ return -EOPNOTSUPP;
+
+ if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
+
+ /*
+ * Don't allow multi-bio non-blocking submissions as subsequent bios
+ * may fail but we won't get a direct indication of that. Normally,
+ * the caller should retry from a blocking context.
+ */
+ if (nowait && nr_sects > bio_discard_limit(bdev, sector))
+ return -EAGAIN;
+
+ err = filemap_invalidate_pages(bdev->bd_mapping, start,
+ start + len - 1, nowait);
+ if (err)
+ return err;
+
+ while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
+ if (nowait)
+ bio->bi_opf |= REQ_NOWAIT;
+ prev = bio_chain_and_submit(prev, bio);
+ }
+ if (!prev)
+ return -EAGAIN;
+
+ prev->bi_private = cmd;
+ prev->bi_end_io = bio_cmd_bio_end_io;
+ submit_bio(prev);
+ return -EIOCBQUEUED;
+}
+
+int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+ struct block_device *bdev = I_BDEV(cmd->file->f_mapping->host);
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+ const struct io_uring_sqe *sqe = cmd->sqe;
+ u32 cmd_op = cmd->cmd_op;
+ uint64_t start, len;
+
+ if (unlikely(sqe->ioprio || sqe->__pad1 || sqe->len ||
+ sqe->rw_flags || sqe->file_index))
+ return -EINVAL;
+
+ bic->res = 0;
+ bic->nowait = issue_flags & IO_URING_F_NONBLOCK;
+
+ start = READ_ONCE(sqe->addr);
+ len = READ_ONCE(sqe->addr3);
+
+ switch (cmd_op) {
+ case BLOCK_URING_CMD_DISCARD:
+ return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
+ }
+ return -EINVAL;
+}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index faceadb040f9..78ead424484c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -684,4 +684,6 @@ struct bio *bio_chain_and_submit(struct bio *prev, struct bio *new);
struct bio *blk_alloc_discard_bio(struct block_device *bdev,
sector_t *sector, sector_t *nr_sects, gfp_t gfp_mask);
+sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
+
#endif /* __LINUX_BIO_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 753971770733..7ea41ca97158 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -208,6 +208,8 @@ struct fsxattr {
* (see uapi/linux/blkzoned.h)
*/
+#define BLOCK_URING_CMD_DISCARD _IO(0x12,137)
+
#define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
#define FIBMAP _IO(0x00,1) /* bmap access */
#define FIGETBSZ _IO(0x00,2) /* get the block size used for bmap */
--
2.45.2
* Re: [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-06 22:57 ` [PATCH v4 5/8] block: implement async discard as io_uring cmd Pavel Begunkov
@ 2024-09-10 8:01 ` Christoph Hellwig
2024-09-10 10:58 ` Pavel Begunkov
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-10 8:01 UTC (permalink / raw)
To: Pavel Begunkov
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
> + sector_t sector = start >> SECTOR_SHIFT;
> + sector_t nr_sects = len >> SECTOR_SHIFT;
> + struct bio *prev = NULL, *bio;
> + int err;
> +
> + if (!bdev_max_discard_sectors(bdev))
> + return -EOPNOTSUPP;
> +
> + if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
> + return -EBADF;
> + if (bdev_read_only(bdev))
> + return -EPERM;
> + err = blk_validate_byte_range(bdev, start, len);
> + if (err)
> + return err;
Based on the above this function is misnamed, as it validates sector_t
range and not a byte range.
> + if (nowait && nr_sects > bio_discard_limit(bdev, sector))
> + return -EAGAIN;
> +
> + err = filemap_invalidate_pages(bdev->bd_mapping, start,
> + start + len - 1, nowait);
> + if (err)
> + return err;
> +
> + while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
> + if (nowait)
> + bio->bi_opf |= REQ_NOWAIT;
> + prev = bio_chain_and_submit(prev, bio);
> + }
> + if (!prev)
> + return -EAGAIN;
If a user changes the max_discard value between the check above and
the loop here this is racy.
> +sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
And to be honest, I'd really prefer to not have bio_discard_limit
exposed. Certainly not outside a header private to block/.
> +
> #endif /* __LINUX_BIO_H */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 753971770733..7ea41ca97158 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -208,6 +208,8 @@ struct fsxattr {
> * (see uapi/linux/blkzoned.h)
> */
>
> +#define BLOCK_URING_CMD_DISCARD _IO(0x12,137)
Whitespace after the comma please. Also why start at 137? A comment
would generally be pretty useful as well.
Also can we have a include/uapi/linux/blkdev.h for this instead of
bloating fs.h that gets included just about everywhere?
* Re: [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-10 8:01 ` Christoph Hellwig
@ 2024-09-10 10:58 ` Pavel Begunkov
2024-09-10 14:17 ` Christoph Hellwig
0 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-10 10:58 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm
On 9/10/24 09:01, Christoph Hellwig wrote:
>> + sector_t sector = start >> SECTOR_SHIFT;
>> + sector_t nr_sects = len >> SECTOR_SHIFT;
>> + struct bio *prev = NULL, *bio;
>> + int err;
>> +
>> + if (!bdev_max_discard_sectors(bdev))
>> + return -EOPNOTSUPP;
>> +
>> + if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
>> + return -EBADF;
>> + if (bdev_read_only(bdev))
>> + return -EPERM;
>> + err = blk_validate_byte_range(bdev, start, len);
>> + if (err)
>> + return err;
>
> Based on the above this function is misnamed, as it validates sector_t
> range and not a byte range.
Start and len here are in bytes. What do you mean?
>> + if (nowait && nr_sects > bio_discard_limit(bdev, sector))
>> + return -EAGAIN;
>> +
>> + err = filemap_invalidate_pages(bdev->bd_mapping, start,
>> + start + len - 1, nowait);
>> + if (err)
>> + return err;
>> +
>> + while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
>> + if (nowait)
>> + bio->bi_opf |= REQ_NOWAIT;
>> + prev = bio_chain_and_submit(prev, bio);
>> + }
>> + if (!prev)
>> + return -EAGAIN;
>
> If a user changes the max_discard value between the check above and
> the loop here this is racy.
If the driver randomly changes it, it's racy either way. What do
you want to protect against?
>> +sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
>
> And to be honest, I'd really prefer to not have bio_discard_limit
> exposed. Certainly not outside a header private to block/.
Which is the other reason why first versions were putting down
a bio seeing that there is more to be done for nowait, which
you didn't like. I can go back to it or narrow the scope.
>> +
>> #endif /* __LINUX_BIO_H */
>> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
>> index 753971770733..7ea41ca97158 100644
>> --- a/include/uapi/linux/fs.h
>> +++ b/include/uapi/linux/fs.h
>> @@ -208,6 +208,8 @@ struct fsxattr {
>> * (see uapi/linux/blkzoned.h)
>> */
>>
>> +#define BLOCK_URING_CMD_DISCARD _IO(0x12,137)
>
> Whitespace after the comma please.
That appears to be the "code style" of all BLK ioctls.
> Also why start at 137? A comment
> would generally be pretty useful as well.
There is a comment, 2 lines above the new define.
/*
* A jump here: 130-136 are reserved for zoned block devices
* (see uapi/linux/blkzoned.h)
*/
Is that your concern?
> Also can we have a include/uapi/linux/blkdev.h for this instead of
> bloating fs.h that gets included just about everywhere?
I don't think it belongs in this series. Regardless, how do you
see it? The new file could hold just those several new definitions;
in fs.h we'd have another comment explaining why there is another
empty range, but I don't think it's worth it at all.
Another option is to move everything block-related there and make
fs.h include blkdev.h, which can always be done on top.
--
Pavel Begunkov
* Re: [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-10 10:58 ` Pavel Begunkov
@ 2024-09-10 14:17 ` Christoph Hellwig
2024-09-10 20:22 ` Pavel Begunkov
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-10 14:17 UTC (permalink / raw)
To: Pavel Begunkov
Cc: Christoph Hellwig, io-uring, Jens Axboe, Conrad Meyer,
linux-block, linux-mm
On Tue, Sep 10, 2024 at 11:58:23AM +0100, Pavel Begunkov wrote:
> > Based on the above this function is misnamed, as it validates sector_t
> > range and not a byte range.
>
> Start and len here are in bytes. What do you mean?
You are right, sorry.
> > > +
> > > + err = filemap_invalidate_pages(bdev->bd_mapping, start,
> > > + start + len - 1, nowait);
> > > + if (err)
> > > + return err;
> > > +
> > > + while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
> > > + if (nowait)
> > > + bio->bi_opf |= REQ_NOWAIT;
> > > + prev = bio_chain_and_submit(prev, bio);
> > > + }
> > > + if (!prev)
> > > + return -EAGAIN;
> >
> > If a user changes the max_discard value between the check above and
> > the loop here this is racy.
>
> If the driver randomly changes it, it's racy either way. What do
> you want to protect against?
The discard limit shrinking and now this successfully returning while
not actually discarding the range. The fix is pretty simple in that
the nowait case should simply break out of the loop after the first bio.
> > > +sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
> >
> > And to be honest, I'd really prefer to not have bio_discard_limit
> > exposed. Certainly not outside a header private to block/.
>
> Which is the other reason why first versions were putting down
> a bio seeing that there is more to be done for nowait, which
> you didn't like. I can go back to it or narrow the scope.
The above should also take care of that.
>
> > Also why start at 137? A comment
> > would generally be pretty useful as well.
>
> There is a comment, 2 lines above the new define.
>
> /*
> * A jump here: 130-136 are reserved for zoned block devices
> * (see uapi/linux/blkzoned.h)
> */
>
> Is that your concern?
But those are ioctls; this is not an ioctl and uses a different
number space. Take a look at e.g. the nvme uring cmds, which also
don't try to use the same number space as the ioctls.
> > Also can we have a include/uapi/linux/blkdev.h for this instead of
> > bloating fs.h that gets included just about everywhere?
> I don't think it belongs to this series.
How would adding a proper header instead of bloating fs.h not be
part of the series adding the first ever block layer uring_cmds?
Just in case I wasn't clear - this isn't asking you to move anything
existing as we can't do that without breaking existing applications.
It is about adding the new command to the proper place.
* Re: [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-10 14:17 ` Christoph Hellwig
@ 2024-09-10 20:22 ` Pavel Begunkov
2024-09-12 9:28 ` Christoph Hellwig
0 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-10 20:22 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm
On 9/10/24 15:17, Christoph Hellwig wrote:
> On Tue, Sep 10, 2024 at 11:58:23AM +0100, Pavel Begunkov wrote:
>>>> +
>>>> + err = filemap_invalidate_pages(bdev->bd_mapping, start,
>>>> + start + len - 1, nowait);
>>>> + if (err)
>>>> + return err;
>>>> +
>>>> + while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
>>>> + if (nowait)
>>>> + bio->bi_opf |= REQ_NOWAIT;
>>>> + prev = bio_chain_and_submit(prev, bio);
>>>> + }
>>>> + if (!prev)
>>>> + return -EAGAIN;
>>>
>>> If a user changes the max_discard value between the check above and
>>> the loop here this is racy.
>>
>> If the driver randomly changes it, it's racy either way. What do
>> you want to protect against?
>
> The discard limit shrinking and now this successfully returning while
> not actually discarding the range. The fix is pretty simple in that
If it's shrinking, then bios initialised and submitted with the
initial larger limit should fail, e.g. in the disk or driver, and that
would be caught by bio_cmd_bio_end_io(). If nobody fails the bios,
then nothing will ever help here; you can always queue up bios first
and change the limit afterwards while they're still in flight.
> the nowait case should simply break out of the loop after the first bio.
while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
if (nowait)
bio->bi_opf |= REQ_NOWAIT;
prev = bio_chain_and_submit(prev, bio);
if (nowait)
break;
}
Like this? I need to add post-loop nr_sects == 0 checking either way,
but I don't see how this break would be any better than the bio_put
before submit from v2.
>>>> +sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
>>>
>>> And to be honest, I'd really prefer to not have bio_discard_limit
>>> exposed. Certainly not outside a header private to block/.
>>
>> Which is the other reason why first versions were putting down
>> a bio seeing that there is more to be done for nowait, which
>> you didn't like. I can go back to it or narrow the scope.
>
> The above should also take care of that.
>
>>
>>> Also why start at 137? A comment
>>> would generally be pretty useful as well.
>>
>> There is a comment, 2 lines above the new define.
>>
>> /*
>> * A jump here: 130-136 are reserved for zoned block devices
>> * (see uapi/linux/blkzoned.h)
>> */
>>
>> Is that your concern?
>
> But those are ioctls, this is not an ioctl and uses a different
> number space. Take a look at e.g. nvme uring cmds which also
> don't try to use the same number space as the ioctl.
As far as I can see, nvme cmds are just dropped onto the 0x80- range.
Not continuing the ioctl numbers, right, but nevertheless random and
undocumented. And if we're arguing that IOC helps prevent people from
issuing ioctls against the wrong file type, we can easily extend that
to "what if someone passes a BLK* ioctl number to io_uring, or vice
versa?" Not to mention that most of the IOC selling points, like
struct size and struct copy direction, make zero sense for io_uring.
>>> Also can we have a include/uapi/linux/blkdev.h for this instead of
>>> bloating fs.h that gets included just about everywhere?
>> I don't think it belongs to this series.
>
> How would adding a proper header instead of bloating fs.h not be
> part of the series adding the first ever block layer uring_cmds?
Because, apparently, no one has ever given a damn about it.
I'll add it for you, but with header probing instead of a simple
ifdef, I'd call it a usability downgrade.
> Just in case I wasn't clear - this isn't asking you to move anything
> existing as we can't do that without breaking existing applications.
We can, by including blkdev.h into fs.h, but that's a different
kind of a structure.
> It is about adding the new command to the proper place.
--
Pavel Begunkov
* Re: [PATCH v4 5/8] block: implement async discard as io_uring cmd
2024-09-10 20:22 ` Pavel Begunkov
@ 2024-09-12 9:28 ` Christoph Hellwig
0 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-12 9:28 UTC (permalink / raw)
To: Pavel Begunkov
Cc: Christoph Hellwig, io-uring, Jens Axboe, Conrad Meyer,
linux-block, linux-mm
On Tue, Sep 10, 2024 at 09:22:37PM +0100, Pavel Begunkov wrote:
>
> while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
> if (nowait)
> bio->bi_opf |= REQ_NOWAIT;
> prev = bio_chain_and_submit(prev, bio);
> if (nowait)
> break;
> }
>
> Like this? I need to add nr_sects==0 post-loop checking either way,
> but I don't see how this break would be any better than
> the bio_put before the submit from v2.
You don't need the bio_chain_and_submit as bio is guaranteed to be NULL
here.
> > How would adding a proper header instead of bloating fs.h not be
> > part of the series adding the first ever block layer uring_cmds?
>
> Because, apparently, no one has ever given a damn about it.
> I'll add it for you, but with header probing instead of a simple
> ifdef I'd call it a usability downgrade.
blk ioctls have historically been in fs.h, and keeping it that way
instead of moving some in the same range makes perfect sense.
Adding new commands to this mess absolutely does not.
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v4 6/8] block: implement async write zeroes command
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (4 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 5/8] block: implement async discard as io_uring cmd Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 7/8] block: add nowait flag for __blkdev_issue_zero_pages Pavel Begunkov
` (4 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
Add another io_uring cmd for the block layer implementing asynchronous
write zeroes. It reuses the helpers we've added for async discards, and
inherits the code structure as well as all considerations with regard to
page cache races.
Suggested-by: Conrad Meyer <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 64 +++++++++++++++++++++++++++++++++++++++++
include/uapi/linux/fs.h | 1 +
2 files changed, 65 insertions(+)
diff --git a/block/ioctl.c b/block/ioctl.c
index 19fba8332eee..ef4b2a90ad79 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -772,6 +772,67 @@ static void bio_cmd_bio_end_io(struct bio *bio)
bio_put(bio);
}
+static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
+ struct block_device *bdev,
+ uint64_t start, uint64_t len, bool nowait)
+{
+
+ sector_t bs_mask = (bdev_logical_block_size(bdev) >> SECTOR_SHIFT) - 1;
+ sector_t limit = bdev_write_zeroes_sectors(bdev);
+ sector_t sector = start >> SECTOR_SHIFT;
+ sector_t nr_sects = len >> SECTOR_SHIFT;
+ struct bio *prev = NULL, *bio;
+ gfp_t gfp = nowait ? GFP_NOWAIT : GFP_KERNEL;
+ int err;
+
+ if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
+
+ if (!limit)
+ return -EOPNOTSUPP;
+ /*
+ * Don't allow multi-bio non-blocking submissions as subsequent bios
+ * may fail but we won't get a direct indication of that. Normally,
+ * the caller should retry from a blocking context.
+ */
+ if (nowait && nr_sects > limit)
+ return -EAGAIN;
+
+ err = filemap_invalidate_pages(bdev->bd_mapping, start,
+ start + len - 1, nowait);
+ if (err)
+ return err;
+
+ limit = min(limit, (UINT_MAX >> SECTOR_SHIFT) & ~bs_mask);
+ while (nr_sects) {
+ sector_t bio_sects = min(nr_sects, limit);
+
+ bio = bio_alloc(bdev, 0, REQ_OP_WRITE_ZEROES|REQ_NOUNMAP, gfp);
+ if (!bio)
+ break;
+ if (nowait)
+ bio->bi_opf |= REQ_NOWAIT;
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_iter.bi_size = bio_sects << SECTOR_SHIFT;
+ sector += bio_sects;
+ nr_sects -= bio_sects;
+
+ prev = bio_chain_and_submit(prev, bio);
+ }
+ if (!prev)
+ return -EAGAIN;
+
+ prev->bi_private = cmd;
+ prev->bi_end_io = bio_cmd_bio_end_io;
+ submit_bio(prev);
+ return -EIOCBQUEUED;
+}
+
static int blkdev_cmd_discard(struct io_uring_cmd *cmd,
struct block_device *bdev,
uint64_t start, uint64_t len, bool nowait)
@@ -841,6 +902,9 @@ int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
switch (cmd_op) {
case BLOCK_URING_CMD_DISCARD:
return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
+ case BLOCK_URING_CMD_WRITE_ZEROES:
+ return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
+ bic->nowait);
}
return -EINVAL;
}
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 7ea41ca97158..68b0fccebf92 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -209,6 +209,7 @@ struct fsxattr {
*/
#define BLOCK_URING_CMD_DISCARD _IO(0x12,137)
+#define BLOCK_URING_CMD_WRITE_ZEROES _IO(0x12,138)
#define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
#define FIBMAP _IO(0x00,1) /* bmap access */
--
2.45.2
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 7/8] block: add nowait flag for __blkdev_issue_zero_pages
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (5 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 6/8] block: implement async write zeroes command Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-06 22:57 ` [PATCH v4 8/8] block: implement async write zero pages command Pavel Begunkov
` (3 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
To reuse __blkdev_issue_zero_pages() in the following patch, we need to
make it work with non-blocking requests. Add a new nowait flag we can
pass inside. Return errors if something goes wrong, and check
bio_alloc() for failures, which wasn't supposed to happen before because
of the gfp flags the callers were passing. Note that there might be a
bio passed back even when the function returns an error. To limit the
scope of the patch, don't add return code handling to callers; that can
be deferred to a follow-up.
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/blk-lib.c | 24 +++++++++++++++++++-----
include/linux/bio.h | 4 ++++
include/linux/blkdev.h | 1 +
3 files changed, 24 insertions(+), 5 deletions(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index c94c67a75f7e..0d8f1b93b4c3 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -193,20 +193,32 @@ static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
return min(pages, (sector_t)BIO_MAX_VECS);
}
-static void __blkdev_issue_zero_pages(struct block_device *bdev,
+int blkdev_issue_zero_pages_bio(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop, unsigned int flags)
{
+ blk_opf_t opf = REQ_OP_WRITE;
+
+ if (flags & BLKDEV_ZERO_PAGES_NOWAIT) {
+ sector_t max_bio_sectors = BIO_MAX_VECS << PAGE_SECTORS_SHIFT;
+
+ if (nr_sects > max_bio_sectors)
+ return -EAGAIN;
+ opf |= REQ_NOWAIT;
+ }
+
while (nr_sects) {
unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
struct bio *bio;
- bio = bio_alloc(bdev, nr_vecs, REQ_OP_WRITE, gfp_mask);
+ bio = bio_alloc(bdev, nr_vecs, opf, gfp_mask);
+ if (!bio)
+ return -ENOMEM;
bio->bi_iter.bi_sector = sector;
if ((flags & BLKDEV_ZERO_KILLABLE) &&
fatal_signal_pending(current))
- break;
+ return -EINTR;
do {
unsigned int len, added;
@@ -223,6 +235,8 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
*biop = bio_chain_and_submit(*biop, bio);
cond_resched();
}
+
+ return 0;
}
static int blkdev_issue_zero_pages(struct block_device *bdev, sector_t sector,
@@ -236,7 +250,7 @@ static int blkdev_issue_zero_pages(struct block_device *bdev, sector_t sector,
return -EOPNOTSUPP;
blk_start_plug(&plug);
- __blkdev_issue_zero_pages(bdev, sector, nr_sects, gfp, &bio, flags);
+ blkdev_issue_zero_pages_bio(bdev, sector, nr_sects, gfp, &bio, flags);
if (bio) {
if ((flags & BLKDEV_ZERO_KILLABLE) &&
fatal_signal_pending(current)) {
@@ -286,7 +300,7 @@ int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
} else {
if (flags & BLKDEV_ZERO_NOFALLBACK)
return -EOPNOTSUPP;
- __blkdev_issue_zero_pages(bdev, sector, nr_sects, gfp_mask,
+ blkdev_issue_zero_pages_bio(bdev, sector, nr_sects, gfp_mask,
biop, flags);
}
return 0;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 78ead424484c..87d85b326e1e 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -686,4 +686,8 @@ struct bio *blk_alloc_discard_bio(struct block_device *bdev,
sector_t bio_discard_limit(struct block_device *bdev, sector_t sector);
+int blkdev_issue_zero_pages_bio(struct block_device *bdev,
+ sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+ struct bio **biop, unsigned int flags);
+
#endif /* __LINUX_BIO_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 643c9020a35a..bf1aa951fda2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1098,6 +1098,7 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
#define BLKDEV_ZERO_NOUNMAP (1 << 0) /* do not free blocks */
#define BLKDEV_ZERO_NOFALLBACK (1 << 1) /* don't write explicit zeroes */
#define BLKDEV_ZERO_KILLABLE (1 << 2) /* interruptible by fatal signals */
+#define BLKDEV_ZERO_PAGES_NOWAIT (1 << 3) /* non-blocking submission */
extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
--
2.45.2
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 8/8] block: implement async write zero pages command
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (6 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 7/8] block: add nowait flag for __blkdev_issue_zero_pages Pavel Begunkov
@ 2024-09-06 22:57 ` Pavel Begunkov
2024-09-10 8:02 ` Christoph Hellwig
2024-09-08 22:25 ` [PATCH v4 0/8] implement async block discards and other ops via io_uring Jens Axboe
` (2 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-06 22:57 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
Add a command that writes the zero page to the drive. Apart from passing
the zero page instead of actual data, it uses the normal write path and
doesn't do any further acceleration, nor does it require any special
hardware support. The intended use is as a fallback when
BLOCK_URING_CMD_WRITE_ZEROES is not supported.
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 24 +++++++++++++++++++++---
include/uapi/linux/fs.h | 1 +
2 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/block/ioctl.c b/block/ioctl.c
index ef4b2a90ad79..3cb479192023 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -774,7 +774,8 @@ static void bio_cmd_bio_end_io(struct bio *bio)
static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
struct block_device *bdev,
- uint64_t start, uint64_t len, bool nowait)
+ uint64_t start, uint64_t len,
+ bool nowait, bool zero_pages)
{
sector_t bs_mask = (bdev_logical_block_size(bdev) >> SECTOR_SHIFT) - 1;
@@ -793,6 +794,20 @@ static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
if (err)
return err;
+ if (zero_pages) {
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd,
+ struct blk_iou_cmd);
+
+ err = blkdev_issue_zero_pages_bio(bdev, sector, nr_sects,
+ gfp, &prev,
+ BLKDEV_ZERO_PAGES_NOWAIT);
+ if (!prev)
+ return -EAGAIN;
+ if (err)
+ bic->res = err;
+ goto out_submit;
+ }
+
if (!limit)
return -EOPNOTSUPP;
/*
@@ -826,7 +841,7 @@ static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
}
if (!prev)
return -EAGAIN;
-
+out_submit:
prev->bi_private = cmd;
prev->bi_end_io = bio_cmd_bio_end_io;
submit_bio(prev);
@@ -904,7 +919,10 @@ int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
case BLOCK_URING_CMD_WRITE_ZEROES:
return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
- bic->nowait);
+ bic->nowait, false);
+ case BLOCK_URING_CMD_WRITE_ZERO_PAGE:
+ return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
+ bic->nowait, true);
}
return -EINVAL;
}
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 68b0fccebf92..f4337b87d846 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -210,6 +210,7 @@ struct fsxattr {
#define BLOCK_URING_CMD_DISCARD _IO(0x12,137)
#define BLOCK_URING_CMD_WRITE_ZEROES _IO(0x12,138)
+#define BLOCK_URING_CMD_WRITE_ZERO_PAGE _IO(0x12,139)
#define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
#define FIBMAP _IO(0x00,1) /* bmap access */
--
2.45.2
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-06 22:57 ` [PATCH v4 8/8] block: implement async write zero pages command Pavel Begunkov
@ 2024-09-10 8:02 ` Christoph Hellwig
2024-09-10 12:17 ` Pavel Begunkov
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-10 8:02 UTC (permalink / raw)
To: Pavel Begunkov
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm,
Christoph Hellwig
On Fri, Sep 06, 2024 at 11:57:25PM +0100, Pavel Begunkov wrote:
> Add a command that writes the zero page to the drive. Apart from passing
> the zero page instead of actual data, it uses the normal write path and
> doesn't do any further acceleration, nor does it require any special
> hardware support. The intended use is as a fallback when
> BLOCK_URING_CMD_WRITE_ZEROES is not supported.
That's just a horrible API. The user should not have to care if the
kernel is using different kinds of implementations.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-10 8:02 ` Christoph Hellwig
@ 2024-09-10 12:17 ` Pavel Begunkov
2024-09-10 14:20 ` Christoph Hellwig
0 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-10 12:17 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm
On 9/10/24 09:02, Christoph Hellwig wrote:
> On Fri, Sep 06, 2024 at 11:57:25PM +0100, Pavel Begunkov wrote:
>> Add a command that writes the zero page to the drive. Apart from passing
>> the zero page instead of actual data, it uses the normal write path and
>> doesn't do any further acceleration, nor does it require any special
>> hardware support. The intended use is as a fallback when
>> BLOCK_URING_CMD_WRITE_ZEROES is not supported.
>
> That's just a horrible API. The user should not have to care if the
> kernel is using different kinds of implementations.
It's rather not a good API when, instead of issuing a presumably
low-overhead fast command, the user unexpectedly ends up sending a
bunch of actual writes with different performance characteristics.
In my experience, such fallbacks cause more pain than a more explicit
approach where one is possible. And let me note that it's already
exposed via fallocate, even though in a bit different way.
--
Pavel Begunkov
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-10 12:17 ` Pavel Begunkov
@ 2024-09-10 14:20 ` Christoph Hellwig
2024-09-10 20:10 ` Pavel Begunkov
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-10 14:20 UTC (permalink / raw)
To: Pavel Begunkov
Cc: Christoph Hellwig, io-uring, Jens Axboe, Conrad Meyer,
linux-block, linux-mm
On Tue, Sep 10, 2024 at 01:17:48PM +0100, Pavel Begunkov wrote:
> > > Add a command that writes the zero page to the drive. Apart from passing
> > > the zero page instead of actual data, it uses the normal write path and
> > > doesn't do any further acceleration, nor does it require any special
> > > hardware support. The intended use is as a fallback when
> > > BLOCK_URING_CMD_WRITE_ZEROES is not supported.
> >
> > That's just a horrible API. The user should not have to care if the
> > kernel is using different kinds of implementations.
>
> It's rather not a good API when, instead of issuing a presumably
> low-overhead fast command, the user unexpectedly ends up sending a
> bunch of actual writes with different performance characteristics.
The normal use cases (at least the ones I've been involved with) are
simply "zero these blocks or the entire device, and please do it as
well as you can". Needing asynchronous error handling in userspace
for that is extremely counterproductive.
> In my experience,
> such fallbacks cause more pain when a more explicit approach is
> possible. And let me note that it's already exposed via fallocate, even
> though in a bit different way.
Do you mean the FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE case in
blkdev_fallocate? As far as I can tell this is actually a really bad
example, as even a hardware offloaded write zeroes can and often does
write physical zeroes to the media, and does so from a firmware path
that is often slower than the kernel loop.
But if you have an actual use case where you want to send a write zeroes
command but never a loop of writes, it would be good to document that
and add a flag for it. And if we don't have that case, it would still
be good to have a reserved flags field to add it later if needed.
Btw, do you have API documentation (e.g. in the form of a man page)
for these new calls somewhere?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-10 14:20 ` Christoph Hellwig
@ 2024-09-10 20:10 ` Pavel Begunkov
2024-09-12 9:26 ` Christoph Hellwig
0 siblings, 1 reply; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-10 20:10 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm
On 9/10/24 15:20, Christoph Hellwig wrote:
> On Tue, Sep 10, 2024 at 01:17:48PM +0100, Pavel Begunkov wrote:
>>>> Add a command that writes the zero page to the drive. Apart from passing
>>>> the zero page instead of actual data, it uses the normal write path and
>>>> doesn't do any further acceleration, nor does it require any special
>>>> hardware support. The intended use is as a fallback when
>>>> BLOCK_URING_CMD_WRITE_ZEROES is not supported.
>>>
>>> That's just a horrible API. The user should not have to care if the
>>> kernel is using different kinds of implementations.
>>
>> It's rather not a good API when, instead of issuing a presumably
>> low-overhead fast command, the user unexpectedly ends up sending a
>> bunch of actual writes with different performance characteristics.
>
> The normal use cases (at least the ones I've been involved with) are
> simply "zero these blocks or the entire device, and please do it as
> well as you can". Needing asynchronous error handling in userspace
> for that is extremely counterproductive.
If we expect any error handling from userspace at all (we do),
it'll have to be asynchronous; these are async commands in io_uring.
Asking the user to reissue a command in some form is normal.
>> In my experience,
>> such fallbacks cause more pain when a more explicit approach is
>> possible. And let me note that it's already exposed via fallocate, even
>> though in a bit different way.
>
> Do you mean the FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE case in
> blkdev_fallocate? As far as I can tell this is actually a really bad
> example, as even a hardware offloaded write zeroes can and often does
> write physical zeroes to the media, and does so from a firmware path
> that is often slower than the kernel loop.
That's a shame, I agree, which is why I called it "presumably" faster,
but that actually gives more reasons why you might want this cmd
separate from write zeroes, considering the user might know
its hardware while the kernel doesn't try to choose which approach is
faster.
> But you have an actual use case where you want to send a write zeroes
> command but never a loop of writes, it would be good to document that
> and add a flag for it. And if we don't have that case it would still
Users who know more about their hw and e.g. prefer writes with the zero
page as per the above. Users with lots of devices who care about PCIe /
memory bandwidth - there are enough of those - might want to do
something different, like adjusting algorithms and throttling.
Better/easier testing, though of lesser importance.
Those I made up just now on the spot, but the reporter did
specifically ask about some way to differentiate the fallbacks.
> be good to have a reserved flags field to add it later if needed.
if (unlikely(sqe->ioprio || sqe->__pad1 || sqe->len ||
sqe->rw_flags || sqe->file_index))
return -EINVAL;
There is a good bunch of sqe fields that can be used for that later.
> Btw, do you have API documentation (e.g. in the form of a man page)
> for these new calls somewhere?
Mentioned in the cover:
tests and docs:
https://github.com/isilence/liburing.git discard-cmd
man page specifically:
https://github.com/isilence/liburing/commit/a6fa2bc2400bf7fcb80496e322b5db4c8b3191f0
I'll send them once the kernel is set in place.
--
Pavel Begunkov
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-10 20:10 ` Pavel Begunkov
@ 2024-09-12 9:26 ` Christoph Hellwig
2024-09-12 16:38 ` Pavel Begunkov
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-09-12 9:26 UTC (permalink / raw)
To: Pavel Begunkov
Cc: Christoph Hellwig, io-uring, Jens Axboe, Conrad Meyer,
linux-block, linux-mm
On Tue, Sep 10, 2024 at 09:10:34PM +0100, Pavel Begunkov wrote:
> If we expect any error handling from the user space at all (we do),
> it'll and have to be asynchronous, it's async commands and io_uring.
> Asking the user to reissue a command in some form is normal.
The point is that pretty much all other errors are fatal, while this
is a "not supported" error for which we have a guaranteed-to-work kernel
fallback. Kicking it off requires a bit of work, but I'd rather have
that in one place than in applications that work on some hardware
and not on others.
> That's a shame, I agree, which is why I called it "presumably" faster,
> but that actually gives more reasons why you might want this cmd
> separate from write zeroes, considering the user might know
> its hardware while the kernel doesn't try to choose which approach is
> faster.
But the kernel is the right place to make that decision, even if we
aren't very smart about it right now. Fanning that out to every
single application is a bad idea.
> Users who know more about their hw and e.g. prefer writes with the zero
> page as per the above. Users with lots of devices who care about PCIe /
> memory bandwidth - there are enough of those - might want to do
> something different, like adjusting algorithms and throttling.
> Better/easier testing, though of lesser importance.
>
> Those I made up just now on the spot, but the reporter did
> specifically ask about some way to differentiate the fallbacks.
Well, an optional nofallback flag would be in line with how we do
that. Do you have the original report to share somewhere?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 8/8] block: implement async write zero pages command
2024-09-12 9:26 ` Christoph Hellwig
@ 2024-09-12 16:38 ` Pavel Begunkov
0 siblings, 0 replies; 25+ messages in thread
From: Pavel Begunkov @ 2024-09-12 16:38 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, Conrad Meyer, linux-block, linux-mm
On 9/12/24 10:26, Christoph Hellwig wrote:
> On Tue, Sep 10, 2024 at 09:10:34PM +0100, Pavel Begunkov wrote:
>> If we expect any error handling from userspace at all (we do),
>> it'll have to be asynchronous; these are async commands in io_uring.
>> Asking the user to reissue a command in some form is normal.
>
> The point is that pretty much all other errors are fatal, while this
> is a "not supported" error for which we have a guaranteed-to-work kernel
Yes, and there will be an error indicating that it's not
supported, just like it'll return an error when these io_uring
commands are not supported by a given kernel.
> fallback. Kicking it off requires a bit of work, but I'd rather have
> that in one place than in applications that work on some hardware
> and not on others.
There is nothing new about features that might be unsupported,
because of hardware or otherwise; it's about giving control to
userspace.
>> That's a shame, I agree, which is why I called it "presumably" faster,
>> but that actually gives more reasons why you might want this cmd
>> separate from write zeroes, considering the user might know
>> its hardware while the kernel doesn't try to choose which approach is
>> faster.
>
> But the kernel is the right place to make that decision, even if we
> aren't very smart about it right now. Fanning that out to every
> single application is a bad idea.
Apart from the fact that it will never happen.
>> Users who know more about their hw and e.g. prefer writes with the zero
>> page as per the above. Users with lots of devices who care about PCIe /
>> memory bandwidth - there are enough of those - might want to do
>> something different, like adjusting algorithms and throttling.
>> Better/easier testing, though of lesser importance.
>>
>> Those I made up just now on the spot, but the reporter did
>> specifically ask about some way to differentiate the fallbacks.
>
> Well, an optional nofallback flag would be in line with how we do
> that. Do you have the original report to share somewhere?
Followed by another flag, "please do fall back", at which
point it doesn't make any sense when that can be done in
userspace.
--
Pavel Begunkov
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 0/8] implement async block discards and other ops via io_uring
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (7 preceding siblings ...)
2024-09-06 22:57 ` [PATCH v4 8/8] block: implement async write zero pages command Pavel Begunkov
@ 2024-09-08 22:25 ` Jens Axboe
2024-09-09 14:51 ` Jens Axboe
2024-09-09 15:09 ` Jens Axboe
10 siblings, 0 replies; 25+ messages in thread
From: Jens Axboe @ 2024-09-08 22:25 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
Cc: Conrad Meyer, linux-block, linux-mm, Christoph Hellwig
On 9/6/24 4:57 PM, Pavel Begunkov wrote:
> There is an interest in having asynchronous block operations like
> discard and write zeroes. The series implements those as io_uring commands,
> which is an io_uring request type that allows implementing custom
> file-specific operations.
>
> First 4 are preparation patches. Patch 5 introduces the main chunk of
> cmd infrastructure and discard commands. Patches 6-8 implement
> write zeroes variants.
>
> Branch with tests and docs:
> https://github.com/isilence/liburing.git discard-cmd
>
> The man page specifically (need to shuffle it to some cmd section):
> https://github.com/isilence/liburing/commit/a6fa2bc2400bf7fcb80496e322b5db4c8b3191f0
This looks good to me now. Only minor nit is that I generally don't
like:
while ((bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp))) {
where assignment and test are on one line, as they are harder to read;
prefer doing:
do {
bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp);
if (!bio)
break;
[...]
} while (1);
instead. But nothing that should need a respin or anything.
I'll run some testing on this tomorrow!
Thanks,
--
Jens Axboe
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 0/8] implement async block discards and other ops via io_uring
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (8 preceding siblings ...)
2024-09-08 22:25 ` [PATCH v4 0/8] implement async block discards and other ops via io_uring Jens Axboe
@ 2024-09-09 14:51 ` Jens Axboe
2024-09-09 15:33 ` Jens Axboe
2024-09-09 15:09 ` Jens Axboe
10 siblings, 1 reply; 25+ messages in thread
From: Jens Axboe @ 2024-09-09 14:51 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
Cc: Conrad Meyer, linux-block, linux-mm, Christoph Hellwig
On 9/6/24 4:57 PM, Pavel Begunkov wrote:
> There is an interest in having asynchronous block operations like
> discard and write zeroes. The series implements those as io_uring commands,
> which is an io_uring request type that allows implementing custom
> file-specific operations.
>
> First 4 are preparation patches. Patch 5 introduces the main chunk of
> cmd infrastructure and discard commands. Patches 6-8 implement
> write zeroes variants.
Sitting in for-6.12/io_uring-discard for now, as there's a hidden
dependency with the end/len patch in for-6.12/block.
Ran a quick test - have 64 4k discards inflight. Here's the current
performance, with 64 threads with sync discard:
qd64 sync discard: 21K IOPS, lat avg 3 msec (max 21 msec)
and using io_uring with async discard, otherwise same test case:
qd64 async discard: 76K IOPS, lat avg 845 usec (max 2.2 msec)
If we switch to doing 1M discards, then we get:
qd64 sync discard: 14K IOPS, lat avg 5 msec (max 25 msec)
and using io_uring with async discard, otherwise same test case:
qd64 async discard: 56K IOPS, lat avg 1153 usec (max 3.6 msec)
This is on a:
Samsung Electronics Co Ltd NVMe SSD Controller PM174X
nvme device. It doesn't have the fastest discard, but still nicely shows
the improvement over a purely sync discard.
--
Jens Axboe
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 0/8] implement async block discards and other ops via io_uring
2024-09-09 14:51 ` Jens Axboe
@ 2024-09-09 15:33 ` Jens Axboe
0 siblings, 0 replies; 25+ messages in thread
From: Jens Axboe @ 2024-09-09 15:33 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
Cc: Conrad Meyer, linux-block, linux-mm, Christoph Hellwig
On 9/9/24 8:51 AM, Jens Axboe wrote:
> On 9/6/24 4:57 PM, Pavel Begunkov wrote:
>> There is an interest in having asynchronous block operations like
>> discard and write zeroes. The series implements those as io_uring commands,
>> which is an io_uring request type that allows implementing custom
>> file-specific operations.
>>
>> First 4 are preparation patches. Patch 5 introduces the main chunk of
>> cmd infrastructure and discard commands. Patches 6-8 implement
>> write zeroes variants.
>
> Sitting in for-6.12/io_uring-discard for now, as there's a hidden
> dependency with the end/len patch in for-6.12/block.
>
> Ran a quick test - have 64 4k discards inflight. Here's the current
> performance, with 64 threads with sync discard:
>
> qd64 sync discard: 21K IOPS, lat avg 3 msec (max 21 msec)
>
> and using io_uring with async discard, otherwise same test case:
>
> qd64 async discard: 76K IOPS, lat avg 845 usec (max 2.2 msec)
>
> If we switch to doing 1M discards, then we get:
>
> qd64 sync discard: 14K IOPS, lat avg 5 msec (max 25 msec)
>
> and using io_uring with async discard, otherwise same test case:
>
> qd64 async discard: 56K IOPS, lat avg 1153 usec (max 3.6 msec)
>
> This is on a:
>
> Samsung Electronics Co Ltd NVMe SSD Controller PM174X
>
> nvme device. It doesn't have the fastest discard, but still nicely shows
> the improvement over a purely sync discard.
Did some basic testing with null_blk just to get a better idea of what
it'd look like on a faster devices. Same test cases as above (qd=64, 4k
and 1M random trims):
Type Trim size IOPS Lat avg (usec) Lat Max (usec)
==============================================================
sync 4k 144K 444 20314
async 4k 1353K 47 595
sync 1M 56K 1136 21031
async 1M 94K 680 760
--
Jens Axboe
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 0/8] implement async block discards and other ops via io_uring
2024-09-06 22:57 [PATCH v4 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (9 preceding siblings ...)
2024-09-09 14:51 ` Jens Axboe
@ 2024-09-09 15:09 ` Jens Axboe
10 siblings, 0 replies; 25+ messages in thread
From: Jens Axboe @ 2024-09-09 15:09 UTC (permalink / raw)
To: io-uring, Pavel Begunkov
Cc: Conrad Meyer, linux-block, linux-mm, Christoph Hellwig
On Fri, 06 Sep 2024 23:57:17 +0100, Pavel Begunkov wrote:
> There is an interest in having asynchronous block operations like
> discard and write zeroes. The series implements those as io_uring commands,
> which is an io_uring request type that allows implementing custom
> file-specific operations.
>
> First 4 are preparation patches. Patch 5 introduces the main chunk of
> cmd infrastructure and discard commands. Patches 6-8 implement
> write zeroes variants.
>
> [...]
Applied, thanks!
[1/8] io_uring/cmd: expose iowq to cmds
commit: c6472f5f9a0806b0598ba513344b5a30cfa53b97
[2/8] io_uring/cmd: give inline space in request to cmds
commit: 1a7628d034f8328813163d07ce112e1198289aeb
[3/8] filemap: introduce filemap_invalidate_pages
commit: 1f027ae3136dfb4bfe40d83f3e0f5019e63db883
[4/8] block: introduce blk_validate_byte_range()
commit: da22f537db72c2520c48445840b7e371c58762a7
[5/8] block: implement async discard as io_uring cmd
commit: 0d266c981982f0f54165f05dbcdf449bb87f5184
[6/8] block: implement async write zeroes command
commit: b56d5132a78db21ca3b386056af38802aea0a274
[7/8] block: add nowait flag for __blkdev_issue_zero_pages
commit: 4f8e422a0744f1294c784109cfbedafd97263c2f
[8/8] block: implement async write zero pages command
commit: 4811c90cbf179b4c58fdbad54c5b05efc0d59159
Best regards,
--
Jens Axboe
^ permalink raw reply [flat|nested] 25+ messages in thread