* [PATCH v5 1/8] io_uring/cmd: expose iowq to cmds
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 2/8] io_uring/cmd: give inline space in request " Pavel Begunkov
` (7 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
When an io_uring request needs a blocking context, we offload it to
io_uring's thread pool called io-wq. We can get there off ->uring_cmd
by returning -EAGAIN, but there is no straightforward way of doing that
from an asynchronous callback. Add a helper that transfers a command to
a blocking context.
Note that we take an extra hop via task_work before io_queue_iowq();
that's a limitation of the current io_uring infrastructure, which can
likely be lifted later if it ever becomes a problem.
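As a sketch of the intended use (mirroring the discard command added later in
this series; the my_cmd_pdu state and how it is stored are illustrative, with
the pdu accessor coming from the next patch):

/*
 * Hypothetical completion callback run via io_uring_cmd_do_in_task_lazy():
 * if the non-blocking attempt came back with -EAGAIN, hand the command to
 * io-wq with the new helper so it is re-issued from a context that may block.
 */
static void my_cmd_tw_complete(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
	struct my_cmd_pdu *pdu = io_uring_cmd_to_pdu(cmd, struct my_cmd_pdu);

	if (pdu->res == -EAGAIN && pdu->nowait)
		io_uring_cmd_issue_blocking(cmd);
	else
		io_uring_cmd_done(cmd, pdu->res, 0, issue_flags);
}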
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/io_uring/cmd.h | 6 ++++++
io_uring/io_uring.c | 11 +++++++++++
io_uring/io_uring.h | 1 +
io_uring/uring_cmd.c | 7 +++++++
4 files changed, 25 insertions(+)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 447fbfd32215..86ceb3383e49 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -48,6 +48,9 @@ void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags);
+/* Execute the request from a blocking context */
+void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
+
#else
static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd)
@@ -67,6 +70,9 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
}
+static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
+{
+}
#endif
/*
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1aca501efaf6..86cf31902841 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -533,6 +533,17 @@ static void io_queue_iowq(struct io_kiocb *req)
io_queue_linked_timeout(link);
}
+static void io_req_queue_iowq_tw(struct io_kiocb *req, struct io_tw_state *ts)
+{
+ io_queue_iowq(req);
+}
+
+void io_req_queue_iowq(struct io_kiocb *req)
+{
+ req->io_task_work.func = io_req_queue_iowq_tw;
+ io_req_task_work_add(req);
+}
+
static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
{
while (!list_empty(&ctx->defer_list)) {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 65078e641390..9d70b2cf7b1e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -94,6 +94,7 @@ int io_uring_alloc_task_context(struct task_struct *task,
int io_ring_add_registered_file(struct io_uring_task *tctx, struct file *file,
int start, int end);
+void io_req_queue_iowq(struct io_kiocb *req);
int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts);
int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr);
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 8391c7c7c1ec..39c3c816ec78 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -277,6 +277,13 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
}
EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
+void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
+{
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+ io_req_queue_iowq(req);
+}
+
static inline int io_uring_cmd_getsockopt(struct socket *sock,
struct io_uring_cmd *cmd,
unsigned int issue_flags)
--
2.45.2
* [PATCH v5 2/8] io_uring/cmd: give inline space in request to cmds
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 1/8] io_uring/cmd: expose iowq to cmds Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 3/8] filemap: introduce filemap_invalidate_pages Pavel Begunkov
` (6 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
Some io_uring commands can use some inline space in io_kiocb. We have 32
bytes in struct io_uring_cmd; expose them to commands.
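A minimal sketch of how a command handler might use the accessor (the struct
and handler names are hypothetical; the state only has to fit into the 32-byte
pdu, which io_uring_cmd_private_sz_check() enforces at build time):

struct my_cmd_state {
	int	res;
	bool	nowait;
};

static int my_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
	/* per-command scratch space living inside struct io_uring_cmd */
	struct my_cmd_state *st = io_uring_cmd_to_pdu(cmd, struct my_cmd_state);

	st->res = 0;
	st->nowait = issue_flags & IO_URING_F_NONBLOCK;
	/* kick off async work, complete later via io_uring_cmd_done() */
	return -EIOCBQUEUED;
}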
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/io_uring/cmd.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 86ceb3383e49..c189d36ad55e 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -23,6 +23,15 @@ static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
return sqe->cmd;
}
+static inline void io_uring_cmd_private_sz_check(size_t cmd_sz)
+{
+ BUILD_BUG_ON(cmd_sz > sizeof_field(struct io_uring_cmd, pdu));
+}
+#define io_uring_cmd_to_pdu(cmd, pdu_type) ( \
+ io_uring_cmd_private_sz_check(sizeof(pdu_type)), \
+ ((pdu_type *)&(cmd)->pdu) \
+)
+
#if defined(CONFIG_IO_URING)
int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
struct iov_iter *iter, void *ioucmd);
--
2.45.2
* [PATCH v5 3/8] filemap: introduce filemap_invalidate_pages
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 1/8] io_uring/cmd: expose iowq to cmds Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 2/8] io_uring/cmd: give inline space in request " Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 4/8] block: introduce blk_validate_byte_range() Pavel Begunkov
` (5 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
kiocb_invalidate_pages() is useful for the write path; however, not
everything is backed by a kiocb, and we want to reuse the function for the
bio-based discard implementation. Extract a new helper called
filemap_invalidate_pages() and reuse it; the helper takes an argument
indicating whether it should be non-blocking, in which case it might
return -EAGAIN.
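A minimal sketch of a bio-based caller (block_device instead of a kiocb; the
wrapper name is illustrative, the call mirrors the discard command later in
this series):

/*
 * Best-effort invalidation of the byte range about to be operated on. With
 * nowait == true the helper returns -EAGAIN instead of waiting for writeback,
 * so the caller can retry from a blocking context.
 */
static int my_invalidate_range(struct block_device *bdev, loff_t start,
			       loff_t len, bool nowait)
{
	return filemap_invalidate_pages(bdev->bd_mapping, start,
					start + len - 1, nowait);
}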
Signed-off-by: Pavel Begunkov <[email protected]>
---
include/linux/pagemap.h | 2 ++
mm/filemap.c | 17 ++++++++++++-----
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d9c7edb6422b..e39c3a7ce33c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -32,6 +32,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
pgoff_t start, pgoff_t end);
int kiocb_invalidate_pages(struct kiocb *iocb, size_t count);
void kiocb_invalidate_post_direct_write(struct kiocb *iocb, size_t count);
+int filemap_invalidate_pages(struct address_space *mapping,
+ loff_t pos, loff_t end, bool nowait);
int write_inode_now(struct inode *, int sync);
int filemap_fdatawrite(struct address_space *);
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..6843ed4847d4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2712,14 +2712,12 @@ int kiocb_write_and_wait(struct kiocb *iocb, size_t count)
}
EXPORT_SYMBOL_GPL(kiocb_write_and_wait);
-int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
+int filemap_invalidate_pages(struct address_space *mapping,
+ loff_t pos, loff_t end, bool nowait)
{
- struct address_space *mapping = iocb->ki_filp->f_mapping;
- loff_t pos = iocb->ki_pos;
- loff_t end = pos + count - 1;
int ret;
- if (iocb->ki_flags & IOCB_NOWAIT) {
+ if (nowait) {
/* we could block if there are any pages in the range */
if (filemap_range_has_page(mapping, pos, end))
return -EAGAIN;
@@ -2738,6 +2736,15 @@ int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
return invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
end >> PAGE_SHIFT);
}
+
+int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
+{
+ struct address_space *mapping = iocb->ki_filp->f_mapping;
+
+ return filemap_invalidate_pages(mapping, iocb->ki_pos,
+ iocb->ki_pos + count - 1,
+ iocb->ki_flags & IOCB_NOWAIT);
+}
EXPORT_SYMBOL_GPL(kiocb_invalidate_pages);
/**
--
2.45.2
* [PATCH v5 4/8] block: introduce blk_validate_byte_range()
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (2 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 3/8] filemap: introduce filemap_invalidate_pages Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-12 9:29 ` Christoph Hellwig
2024-09-11 16:34 ` [PATCH v5 5/8] block: implement async io_uring discard cmd Pavel Begunkov
` (4 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
In preparation for further changes, extract a helper function out of
blk_ioctl_discard() that validates whether we can do IO against the given
range of disk byte addresses.
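For illustration, assuming a 4096-byte logical block size, the helper behaves
as follows (values are examples only):

/*
 * blk_validate_byte_range(bdev, 4096, 8192)   ->  0        aligned and in range
 * blk_validate_byte_range(bdev, 512,  4096)   -> -EINVAL   start not LBA-aligned
 * blk_validate_byte_range(bdev, 4096, 0)      -> -EINVAL   empty range
 * blk_validate_byte_range(bdev, ~0ULL, 4096)  -> -EINVAL   start + len overflows
 */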
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 49 +++++++++++++++++++++++++++++++------------------
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/block/ioctl.c b/block/ioctl.c
index e8e4a4190f18..6d663d6ae036 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -92,38 +92,51 @@ static int compat_blkpg_ioctl(struct block_device *bdev,
}
#endif
+/*
+ * Check that [start, start + len) is a valid range from the block device's
+ * perspective, including verifying that it can be correctly translated into
+ * logical block addresses.
+ */
+static int blk_validate_byte_range(struct block_device *bdev,
+ uint64_t start, uint64_t len)
+{
+ unsigned int bs_mask = bdev_logical_block_size(bdev) - 1;
+ uint64_t end;
+
+ if ((start | len) & bs_mask)
+ return -EINVAL;
+ if (!len)
+ return -EINVAL;
+ if (check_add_overflow(start, len, &end) || end > bdev_nr_bytes(bdev))
+ return -EINVAL;
+
+ return 0;
+}
+
static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
unsigned long arg)
{
- unsigned int bs_mask = bdev_logical_block_size(bdev) - 1;
- uint64_t range[2], start, len, end;
+ uint64_t range[2], start, len;
struct bio *prev = NULL, *bio;
sector_t sector, nr_sects;
struct blk_plug plug;
int err;
- if (!(mode & BLK_OPEN_WRITE))
- return -EBADF;
-
- if (!bdev_max_discard_sectors(bdev))
- return -EOPNOTSUPP;
- if (bdev_read_only(bdev))
- return -EPERM;
-
if (copy_from_user(range, (void __user *)arg, sizeof(range)))
return -EFAULT;
-
start = range[0];
len = range[1];
- if (!len)
- return -EINVAL;
- if ((start | len) & bs_mask)
- return -EINVAL;
+ if (!bdev_max_discard_sectors(bdev))
+ return -EOPNOTSUPP;
- if (check_add_overflow(start, len, &end) ||
- end > bdev_nr_bytes(bdev))
- return -EINVAL;
+ if (!(mode & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
filemap_invalidate_lock(bdev->bd_mapping);
err = truncate_bdev_range(bdev, mode, start, start + len - 1);
--
2.45.2
* [PATCH v5 5/8] block: implement async io_uring discard cmd
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (3 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 4/8] block: introduce blk_validate_byte_range() Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-12 9:31 ` Christoph Hellwig
2024-09-11 16:34 ` [PATCH v5 6/8] block: implement write zeroes io_uring cmd Pavel Begunkov
` (3 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig, Conrad Meyer
io_uring allows implementing custom file-specific asynchronous
operations via the fops->uring_cmd callback, a.k.a. IORING_OP_URING_CMD
requests or just io_uring commands. Use it to add support for async
discards.
Normally, the command first tries to queue up bios in a non-blocking
context; if that fails, we retry from a blocking context by returning
-EAGAIN to the io_uring core. We always get the result from the bios
asynchronously by setting a custom bi_end_io callback, at which point
we drag the request into task context to either reissue it or complete
it and post a completion to the user.
Unlike ioctl(BLKDISCARD), which has stronger guarantees against races,
we only make a best-effort attempt to invalidate the page cache, and it
can race with any reads and writes and leave the page cache stale. These
are the same kinds of races we allow for direct writes.
Also, apart from cases where discarding is not allowed at all, e.g.
discards are not supported or the file/device is read-only, the user
should assume that the sector range on disk is no longer valid, even
when an error was returned.
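For reference, a minimal userspace sketch of driving the command (assumes
liburing and the new uapi header; the bdev_uring_cmd() helper and its error
handling are illustrative, only the sqe field usage (cmd_op, addr for the byte
offset, addr3 for the byte length) comes from the kernel side above):

#include <liburing.h>
#include <linux/blkdev.h>	/* new uapi header added by this patch */

/* Hypothetical helper: issue one block uring_cmd and wait for its CQE. */
static int bdev_uring_cmd(struct io_uring *ring, int bdev_fd, __u32 cmd_op,
			  __u64 off, __u64 len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;
	io_uring_prep_rw(IORING_OP_URING_CMD, sqe, bdev_fd, NULL, 0, 0);
	sqe->cmd_op = cmd_op;
	sqe->addr = off;	/* start offset in bytes, LBA aligned */
	sqe->addr3 = len;	/* length in bytes, LBA aligned */

	ret = io_uring_submit(ring);
	if (ret < 0)
		return ret;
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	ret = cqe->res;		/* 0 on success, negative errno on failure */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}

/* e.g. discard 1 MiB at offset 0:
 *	bdev_uring_cmd(ring, fd, BLOCK_URING_CMD_DISCARD, 0, 1 << 20);
 */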
Suggested-by: Conrad Meyer <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/blk.h | 1 +
block/fops.c | 2 +
block/ioctl.c | 112 ++++++++++++++++++++++++++++++++++++
include/uapi/linux/blkdev.h | 14 +++++
4 files changed, 129 insertions(+)
create mode 100644 include/uapi/linux/blkdev.h
diff --git a/block/blk.h b/block/blk.h
index 32f4e9f630a3..1a1a18d118f7 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -605,6 +605,7 @@ blk_mode_t file_to_blk_mode(struct file *file);
int truncate_bdev_range(struct block_device *bdev, blk_mode_t mode,
loff_t lstart, loff_t lend);
long blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
+int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags);
long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
extern const struct address_space_operations def_blk_aops;
diff --git a/block/fops.c b/block/fops.c
index 9825c1713a49..8154b10b5abf 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -17,6 +17,7 @@
#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/module.h>
+#include <linux/io_uring/cmd.h>
#include "blk.h"
static inline struct inode *bdev_file_inode(struct file *file)
@@ -873,6 +874,7 @@ const struct file_operations def_blk_fops = {
.splice_read = filemap_splice_read,
.splice_write = iter_file_splice_write,
.fallocate = blkdev_fallocate,
+ .uring_cmd = blkdev_uring_cmd,
.fop_flags = FOP_BUFFER_RASYNC,
};
diff --git a/block/ioctl.c b/block/ioctl.c
index 6d663d6ae036..007f6399de66 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -11,6 +11,9 @@
#include <linux/blktrace_api.h>
#include <linux/pr.h>
#include <linux/uaccess.h>
+#include <linux/pagemap.h>
+#include <linux/io_uring/cmd.h>
+#include <uapi/linux/blkdev.h>
#include "blk.h"
static int blkpg_do_ioctl(struct block_device *bdev,
@@ -747,3 +750,112 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
return ret;
}
#endif
+
+struct blk_iou_cmd {
+ int res;
+ bool nowait;
+};
+
+static void blk_cmd_complete(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+
+ if (bic->res == -EAGAIN && bic->nowait)
+ io_uring_cmd_issue_blocking(cmd);
+ else
+ io_uring_cmd_done(cmd, bic->res, 0, issue_flags);
+}
+
+static void bio_cmd_bio_end_io(struct bio *bio)
+{
+ struct io_uring_cmd *cmd = bio->bi_private;
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+
+ if (unlikely(bio->bi_status) && !bic->res)
+ bic->res = blk_status_to_errno(bio->bi_status);
+
+ io_uring_cmd_do_in_task_lazy(cmd, blk_cmd_complete);
+ bio_put(bio);
+}
+
+static int blkdev_cmd_discard(struct io_uring_cmd *cmd,
+ struct block_device *bdev,
+ uint64_t start, uint64_t len, bool nowait)
+{
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+ gfp_t gfp = nowait ? GFP_NOWAIT : GFP_KERNEL;
+ sector_t sector = start >> SECTOR_SHIFT;
+ sector_t nr_sects = len >> SECTOR_SHIFT;
+ struct bio *prev = NULL, *bio;
+ int err;
+
+ if (!bdev_max_discard_sectors(bdev))
+ return -EOPNOTSUPP;
+ if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
+
+ err = filemap_invalidate_pages(bdev->bd_mapping, start,
+ start + len - 1, nowait);
+ if (err)
+ return err;
+
+ while (true) {
+ bio = blk_alloc_discard_bio(bdev, &sector, &nr_sects, gfp);
+ if (!bio)
+ break;
+ if (nowait) {
+ /*
+ * Don't allow multi-bio non-blocking submissions as
+ * subsequent bios may fail but we won't get a direct
+ * indication of that. Normally, the caller should
+ * retry from a blocking context.
+ */
+ if (unlikely(nr_sects)) {
+ bio_put(bio);
+ return -EAGAIN;
+ }
+ bio->bi_opf |= REQ_NOWAIT;
+ }
+
+ prev = bio_chain_and_submit(prev, bio);
+ }
+ if (unlikely(!prev))
+ return -EAGAIN;
+ if (unlikely(nr_sects))
+ bic->res = -EAGAIN;
+
+ prev->bi_private = cmd;
+ prev->bi_end_io = bio_cmd_bio_end_io;
+ submit_bio(prev);
+ return -EIOCBQUEUED;
+}
+
+int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+ struct block_device *bdev = I_BDEV(cmd->file->f_mapping->host);
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+ const struct io_uring_sqe *sqe = cmd->sqe;
+ u32 cmd_op = cmd->cmd_op;
+ uint64_t start, len;
+
+ if (unlikely(sqe->ioprio || sqe->__pad1 || sqe->len ||
+ sqe->rw_flags || sqe->file_index))
+ return -EINVAL;
+
+ bic->res = 0;
+ bic->nowait = issue_flags & IO_URING_F_NONBLOCK;
+
+ start = READ_ONCE(sqe->addr);
+ len = READ_ONCE(sqe->addr3);
+
+ switch (cmd_op) {
+ case BLOCK_URING_CMD_DISCARD:
+ return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
+ }
+ return -EINVAL;
+}
diff --git a/include/uapi/linux/blkdev.h b/include/uapi/linux/blkdev.h
new file mode 100644
index 000000000000..66373cd1a83a
--- /dev/null
+++ b/include/uapi/linux/blkdev.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_BLKDEV_H
+#define _UAPI_LINUX_BLKDEV_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+/*
+ * io_uring block file commands, see IORING_OP_URING_CMD.
+ * It's a different number space from ioctl(), reuse the block's code 0x12.
+ */
+#define BLOCK_URING_CMD_DISCARD _IO(0x12, 0)
+
+#endif
--
2.45.2
* Re: [PATCH v5 5/8] block: implement async io_uring discard cmd
2024-09-11 16:34 ` [PATCH v5 5/8] block: implement async io_uring discard cmd Pavel Begunkov
@ 2024-09-12 9:31 ` Christoph Hellwig
2024-09-12 16:19 ` Pavel Begunkov
0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2024-09-12 9:31 UTC (permalink / raw)
To: Pavel Begunkov
Cc: io-uring, Jens Axboe, linux-block, linux-mm, Christoph Hellwig,
Conrad Meyer
> + * io_uring block file commands, see IORING_OP_URING_CMD.
> + * It's a different number space from ioctl(), reuse the block's code 0x12.
> + */
> +#define BLOCK_URING_CMD_DISCARD _IO(0x12, 0)
Please just start out at some arbitrary boundary, but don't reuse
the ioctl code from an ioctl that does something vaguely similar for
no good reason.
The rest looks good.
* Re: [PATCH v5 5/8] block: implement async io_uring discard cmd
2024-09-12 9:31 ` Christoph Hellwig
@ 2024-09-12 16:19 ` Pavel Begunkov
0 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-12 16:19 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, linux-block, linux-mm, Conrad Meyer
On 9/12/24 10:31, Christoph Hellwig wrote:
>> + * io_uring block file commands, see IORING_OP_URING_CMD.
>> + * It's a different number space from ioctl(), reuse the block's code 0x12.
>> + */
>> +#define BLOCK_URING_CMD_DISCARD _IO(0x12, 0)
>
> Please just start out at some arbitrary boundary, but don't reuse
> the ioctl code from an ioctl that does something vaguely similar for
> no good reason.
It's an entirely different number space; it shouldn't care about
ioctl numbering. Regardless, BLK* ioctls in fs.h start at 93. I don't
even have a clue where the rest are.
Code  Seq#  Include File       Comments
0x12  all   linux/fs.h         BLK* ioctls
            linux/blkpg.h
--
Pavel Begunkov
* [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (4 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 5/8] block: implement async io_uring discard cmd Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-12 9:32 ` Christoph Hellwig
2024-09-11 16:34 ` [PATCH v5 7/8] block: add nowait flag for __blkdev_issue_zero_pages Pavel Begunkov
` (2 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring
Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig, Conrad Meyer
Add a second io_uring cmd for the block layer implementing asynchronous
write zeroes. It reuses the helpers added for async discards and inherits
the code structure as well as all considerations regarding page cache
races. The operation has to be supported by the underlying hardware;
otherwise the request will fail. A fallback version is implemented
separately in a later patch.
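Reusing the hypothetical bdev_uring_cmd() helper sketched under the discard
patch, issuing write zeroes only changes the cmd_op:

int ret = bdev_uring_cmd(ring, bdev_fd, BLOCK_URING_CMD_WRITE_ZEROES, off, len);

/* ret == -EOPNOTSUPP means the device lacks hardware write-zeroes support;
 * the zero-page fallback command is added in a later patch. */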
Suggested-by: Conrad Meyer <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 66 +++++++++++++++++++++++++++++++++++++
include/uapi/linux/blkdev.h | 1 +
2 files changed, 67 insertions(+)
diff --git a/block/ioctl.c b/block/ioctl.c
index 007f6399de66..f8f6d6a71ff0 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -778,6 +778,69 @@ static void bio_cmd_bio_end_io(struct bio *bio)
bio_put(bio);
}
+static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
+ struct block_device *bdev,
+ uint64_t start, uint64_t len, bool nowait)
+{
+ struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
+ sector_t bs_mask = (bdev_logical_block_size(bdev) >> SECTOR_SHIFT) - 1;
+ sector_t limit = bdev_write_zeroes_sectors(bdev);
+ sector_t sector = start >> SECTOR_SHIFT;
+ sector_t nr_sects = len >> SECTOR_SHIFT;
+ struct bio *prev = NULL, *bio;
+ gfp_t gfp = nowait ? GFP_NOWAIT : GFP_KERNEL;
+ int err;
+
+ if (!(file_to_blk_mode(cmd->file) & BLK_OPEN_WRITE))
+ return -EBADF;
+ if (bdev_read_only(bdev))
+ return -EPERM;
+ err = blk_validate_byte_range(bdev, start, len);
+ if (err)
+ return err;
+
+ if (!limit)
+ return -EOPNOTSUPP;
+ /*
+ * Don't allow multi-bio non-blocking submissions as subsequent bios
+ * may fail but we won't get a direct indication of that. Normally,
+ * the caller should retry from a blocking context.
+ */
+ if (nowait && nr_sects > limit)
+ return -EAGAIN;
+
+ err = filemap_invalidate_pages(bdev->bd_mapping, start,
+ start + len - 1, nowait);
+ if (err)
+ return err;
+
+ limit = min(limit, (UINT_MAX >> SECTOR_SHIFT) & ~bs_mask);
+ while (nr_sects) {
+ sector_t bio_sects = min(nr_sects, limit);
+
+ bio = bio_alloc(bdev, 0, REQ_OP_WRITE_ZEROES|REQ_NOUNMAP, gfp);
+ if (!bio)
+ break;
+ if (nowait)
+ bio->bi_opf |= REQ_NOWAIT;
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_iter.bi_size = bio_sects << SECTOR_SHIFT;
+ sector += bio_sects;
+ nr_sects -= bio_sects;
+
+ prev = bio_chain_and_submit(prev, bio);
+ }
+ if (unlikely(!prev))
+ return -EAGAIN;
+ if (unlikely(nr_sects))
+ bic->res = -EAGAIN;
+
+ prev->bi_private = cmd;
+ prev->bi_end_io = bio_cmd_bio_end_io;
+ submit_bio(prev);
+ return -EIOCBQUEUED;
+}
+
static int blkdev_cmd_discard(struct io_uring_cmd *cmd,
struct block_device *bdev,
uint64_t start, uint64_t len, bool nowait)
@@ -856,6 +919,9 @@ int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
switch (cmd_op) {
case BLOCK_URING_CMD_DISCARD:
return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
+ case BLOCK_URING_CMD_WRITE_ZEROES:
+ return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
+ bic->nowait);
}
return -EINVAL;
}
diff --git a/include/uapi/linux/blkdev.h b/include/uapi/linux/blkdev.h
index 66373cd1a83a..b4664139a82a 100644
--- a/include/uapi/linux/blkdev.h
+++ b/include/uapi/linux/blkdev.h
@@ -10,5 +10,6 @@
* It's a different number space from ioctl(), reuse the block's code 0x12.
*/
#define BLOCK_URING_CMD_DISCARD _IO(0x12, 0)
+#define BLOCK_URING_CMD_WRITE_ZEROES _IO(0x12, 1)
#endif
--
2.45.2
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-11 16:34 ` [PATCH v5 6/8] block: implement write zeroes io_uring cmd Pavel Begunkov
@ 2024-09-12 9:32 ` Christoph Hellwig
2024-09-12 16:25 ` Pavel Begunkov
0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2024-09-12 9:32 UTC (permalink / raw)
To: Pavel Begunkov
Cc: io-uring, Jens Axboe, linux-block, linux-mm, Christoph Hellwig,
Conrad Meyer
On Wed, Sep 11, 2024 at 05:34:42PM +0100, Pavel Begunkov wrote:
> Add a second io_uring cmd for block layer implementing asynchronous
> write zeroes. It reuses helpers we've added for async discards, and
> inherits the code structure as well as all considerations in regards to
> page cache races. It has to be supported by underlying hardware to be
> used, otherwise the request will fail. A fallback version is implemented
> separately in a later patch.
Except that as far as I can tell it doesn't implement a fallback, but
an entirely different command leading to applications breaking when
just using the command and the hardware doesn't support it.
Nacked-by: Christoph Hellwig <[email protected]>
to this incomplete API that will just create incompatibilities.
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-12 9:32 ` Christoph Hellwig
@ 2024-09-12 16:25 ` Pavel Begunkov
2024-09-12 16:32 ` Jens Axboe
0 siblings, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-12 16:25 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, Jens Axboe, linux-block, linux-mm, Conrad Meyer
On 9/12/24 10:32, Christoph Hellwig wrote:
> On Wed, Sep 11, 2024 at 05:34:42PM +0100, Pavel Begunkov wrote:
>> Add a second io_uring cmd for block layer implementing asynchronous
>> write zeroes. It reuses helpers we've added for async discards, and
>> inherits the code structure as well as all considerations in regards to
>> page cache races. It has to be supported by underlying hardware to be
>> used, otherwise the request will fail. A fallback version is implemented
>> separately in a later patch.
>
> Except that as far as I can tell it doesn't implement a fallback, but
I could've worded it better, but it doesn't say anything about
implementing a fallback for this opcode.
> an entirely different command leading to applications breaking when
> just using the command and the hardware doesn't support it.
>
> Nacked-by: Christoph Hellwig <[email protected]>
>
> to this incomplete API that will just create incompatibilities.
That's fine, I'd rather take your nack than humour the idea
of having a worse API than it could be.
--
Pavel Begunkov
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-12 16:25 ` Pavel Begunkov
@ 2024-09-12 16:32 ` Jens Axboe
2024-09-12 16:53 ` Pavel Begunkov
2024-09-13 7:45 ` Christoph Hellwig
0 siblings, 2 replies; 19+ messages in thread
From: Jens Axboe @ 2024-09-12 16:32 UTC (permalink / raw)
To: Pavel Begunkov, Christoph Hellwig
Cc: io-uring, linux-block, linux-mm, Conrad Meyer
On 9/12/24 10:25 AM, Pavel Begunkov wrote:
>> an entirely different command leading to applications breaking when
>> just using the command and the hardware doesn't support it.
>>
>> Nacked-by: Christoph Hellwig <[email protected]>
>>
>> to this incomplete API that will just create incompatibilities.
>
> That's fine, I'd rather take your nack than humouring the idea
> of having a worse api than it could be.
How about we just drop 6-8 for now, and just focus on getting the actual
main discard operation in? That's (by far) the most important anyway,
and we can always add the write-zeroes bit later.
--
Jens Axboe
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-12 16:32 ` Jens Axboe
@ 2024-09-12 16:53 ` Pavel Begunkov
2024-09-12 16:59 ` Jens Axboe
2024-09-13 7:45 ` Christoph Hellwig
1 sibling, 1 reply; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-12 16:53 UTC (permalink / raw)
To: Jens Axboe, Christoph Hellwig
Cc: io-uring, linux-block, linux-mm, Conrad Meyer
On 9/12/24 17:32, Jens Axboe wrote:
> On 9/12/24 10:25 AM, Pavel Begunkov wrote:
>>> an entirely different command leading to applications breaking when
>>> just using the command and the hardware doesn't support it.
>>>
>>> Nacked-by: Christoph Hellwig <[email protected]>
>>>
>>> to this incomplete API that will just create incompatibilities.
>>
>> That's fine, I'd rather take your nack than humour the idea
>> of having a worse API than it could be.
>
> How about we just drop 6-8 for now, and just focus on getting the actual
> main discard operation in? That's (by far) the most important anyway,
> and we can always add the write-zeroes bit later.
That's surely an option, and it's naturally up to you.
But to be clear, unless there are some new good arguments, I don't
buy those requested changes for 6-8 and refuse to go with it, that's
just creating a mess of flags cancelling each other down the line.
--
Pavel Begunkov
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-12 16:53 ` Pavel Begunkov
@ 2024-09-12 16:59 ` Jens Axboe
0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2024-09-12 16:59 UTC (permalink / raw)
To: Pavel Begunkov, Christoph Hellwig
Cc: io-uring, linux-block, linux-mm, Conrad Meyer
On 9/12/24 10:53 AM, Pavel Begunkov wrote:
> On 9/12/24 17:32, Jens Axboe wrote:
>> On 9/12/24 10:25 AM, Pavel Begunkov wrote:
>>>> an entirely different command leading to applications breaking when
>>>> just using the command and the hardware doesn't support it.
>>>>
>>>> Nacked-by: Christoph Hellwig <[email protected]>
>>>>
>>>> to this incomplete API that will just create incompatibilities.
>>>
>>> That's fine, I'd rather take your nack than humour the idea
>>> of having a worse API than it could be.
>>
>> How about we just drop 6-8 for now, and just focus on getting the actual
>> main discard operation in? That's (by far) the most important anyway,
>> and we can always add the write-zeroes bit later.
>
> That's surely an option, and it's naturally up to you.
Just trying to make forward progress - and since the actual discard is
the most interesting bit (imho), seems silly to gate it on the latter
bits.
> But to be clear, unless there are some new good arguments, I don't
> buy those requested changes for 6-8 and refuse to go with it, that's
> just creating a mess of flags cancelling each other down the line.
That's something to hash out, might be easier to do in person at LPC
next week.
For now I'll just drop 6-8.
--
Jens Axboe
* Re: [PATCH v5 6/8] block: implement write zeroes io_uring cmd
2024-09-12 16:32 ` Jens Axboe
2024-09-12 16:53 ` Pavel Begunkov
@ 2024-09-13 7:45 ` Christoph Hellwig
1 sibling, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2024-09-13 7:45 UTC (permalink / raw)
To: Jens Axboe
Cc: Pavel Begunkov, Christoph Hellwig, io-uring, linux-block,
linux-mm, Conrad Meyer
On Thu, Sep 12, 2024 at 10:32:21AM -0600, Jens Axboe wrote:
> How about we just drop 6-8 for now, and just focus on getting the actual
> main discard operation in? That's (by far) the most important anyway,
> and we can always add the write-zeroes bit later.
Sounds fine to me.
* [PATCH v5 7/8] block: add nowait flag for __blkdev_issue_zero_pages
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (5 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 6/8] block: implement write zeroes io_uring cmd Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-11 16:34 ` [PATCH v5 8/8] block: implement write zero pages cmd Pavel Begunkov
2024-09-11 20:56 ` [PATCH v5 0/8] implement async block discards and other ops via io_uring Jens Axboe
8 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
To reuse __blkdev_issue_zero_pages() in the following patch, we need to
make it work with non-blocking requests. Add a new nowait flag we can
pass inside. Return an error if something went wrong, and check
bio_alloc() for failure, which wasn't supposed to happen before because
of the gfp flags the callers pass. Note that a bio might be passed back
even when the function returns an error. To limit the scope of the
patch, don't add return code handling to the callers; that can be
deferred to a follow-up.
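A sketch of a non-blocking caller, modelled on how the next patch uses the
helper (the function name is illustrative):

static int my_zero_range_nowait(struct block_device *bdev, sector_t sector,
				sector_t nr_sects)
{
	struct bio *bio = NULL;
	int err;

	err = blkdev_issue_zero_pages_bio(bdev, sector, nr_sects, GFP_NOWAIT,
					  &bio, BLKDEV_ZERO_PAGES_NOWAIT);
	if (!bio)
		return err ? err : -EAGAIN;	/* nothing was queued */
	/*
	 * A partially built chain may be returned even on error; submit what
	 * we have and let the completion path see the recorded error.
	 */
	submit_bio(bio);
	return err;
}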
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/blk-lib.c | 22 ++++++++++++++++++----
include/linux/bio.h | 4 ++++
include/linux/blkdev.h | 1 +
3 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 83eb7761c2bf..40bb59f583ee 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -192,20 +192,32 @@ static unsigned int __blkdev_sectors_to_bio_pages(sector_t nr_sects)
return min(pages, (sector_t)BIO_MAX_VECS);
}
-static void __blkdev_issue_zero_pages(struct block_device *bdev,
+int blkdev_issue_zero_pages_bio(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop, unsigned int flags)
{
+ blk_opf_t opf = REQ_OP_WRITE;
+
+ if (flags & BLKDEV_ZERO_PAGES_NOWAIT) {
+ sector_t max_bio_sectors = BIO_MAX_VECS << PAGE_SECTORS_SHIFT;
+
+ if (nr_sects > max_bio_sectors)
+ return -EAGAIN;
+ opf |= REQ_NOWAIT;
+ }
+
while (nr_sects) {
unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
struct bio *bio;
bio = bio_alloc(bdev, nr_vecs, REQ_OP_WRITE, gfp_mask);
+ if (!bio)
+ return -ENOMEM;
bio->bi_iter.bi_sector = sector;
if ((flags & BLKDEV_ZERO_KILLABLE) &&
fatal_signal_pending(current))
- break;
+ return -EINTR;
do {
unsigned int len, added;
@@ -222,6 +234,8 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
*biop = bio_chain_and_submit(*biop, bio);
cond_resched();
}
+
+ return 0;
}
static int blkdev_issue_zero_pages(struct block_device *bdev, sector_t sector,
@@ -235,7 +249,7 @@ static int blkdev_issue_zero_pages(struct block_device *bdev, sector_t sector,
return -EOPNOTSUPP;
blk_start_plug(&plug);
- __blkdev_issue_zero_pages(bdev, sector, nr_sects, gfp, &bio, flags);
+ blkdev_issue_zero_pages_bio(bdev, sector, nr_sects, gfp, &bio, flags);
if (bio) {
if ((flags & BLKDEV_ZERO_KILLABLE) &&
fatal_signal_pending(current)) {
@@ -285,7 +299,7 @@ int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
} else {
if (flags & BLKDEV_ZERO_NOFALLBACK)
return -EOPNOTSUPP;
- __blkdev_issue_zero_pages(bdev, sector, nr_sects, gfp_mask,
+ blkdev_issue_zero_pages_bio(bdev, sector, nr_sects, gfp_mask,
biop, flags);
}
return 0;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index faceadb040f9..e4e495a4a95b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -684,4 +684,8 @@ struct bio *bio_chain_and_submit(struct bio *prev, struct bio *new);
struct bio *blk_alloc_discard_bio(struct block_device *bdev,
sector_t *sector, sector_t *nr_sects, gfp_t gfp_mask);
+int blkdev_issue_zero_pages_bio(struct block_device *bdev,
+ sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+ struct bio **biop, unsigned int flags);
+
#endif /* __LINUX_BIO_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 643c9020a35a..bf1aa951fda2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1098,6 +1098,7 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
#define BLKDEV_ZERO_NOUNMAP (1 << 0) /* do not free blocks */
#define BLKDEV_ZERO_NOFALLBACK (1 << 1) /* don't write explicit zeroes */
#define BLKDEV_ZERO_KILLABLE (1 << 2) /* interruptible by fatal signals */
+#define BLKDEV_ZERO_PAGES_NOWAIT (1 << 3) /* non-blocking submission */
extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
--
2.45.2
* [PATCH v5 8/8] block: implement write zero pages cmd
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (6 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 7/8] block: add nowait flag for __blkdev_issue_zero_pages Pavel Begunkov
@ 2024-09-11 16:34 ` Pavel Begunkov
2024-09-11 20:56 ` [PATCH v5 0/8] implement async block discards and other ops via io_uring Jens Axboe
8 siblings, 0 replies; 19+ messages in thread
From: Pavel Begunkov @ 2024-09-11 16:34 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence, linux-block, linux-mm,
Christoph Hellwig
Add a command that writes the zero page to the drive. Apart from passing
the zero page instead of actual data, it uses the normal write path and
doesn't do any further acceleration. Consequently, it doesn't require
any special hardware support and can, for example, be used as a fallback
option for BLOCK_URING_CMD_WRITE_ZEROES.
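A userspace fallback sketch, reusing the hypothetical bdev_uring_cmd() helper
from the discard example: prefer hardware write zeroes and fall back to writing
the zero page when the device can't do it.

static int zero_range(struct io_uring *ring, int bdev_fd, __u64 off, __u64 len)
{
	int ret = bdev_uring_cmd(ring, bdev_fd, BLOCK_URING_CMD_WRITE_ZEROES,
				 off, len);

	if (ret == -EOPNOTSUPP)
		ret = bdev_uring_cmd(ring, bdev_fd,
				     BLOCK_URING_CMD_WRITE_ZERO_PAGE, off, len);
	return ret;
}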
Signed-off-by: Pavel Begunkov <[email protected]>
---
block/ioctl.c | 21 ++++++++++++++++++---
include/uapi/linux/blkdev.h | 1 +
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/block/ioctl.c b/block/ioctl.c
index f8f6d6a71ff0..08e0d2d52b06 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -780,7 +780,8 @@ static void bio_cmd_bio_end_io(struct bio *bio)
static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
struct block_device *bdev,
- uint64_t start, uint64_t len, bool nowait)
+ uint64_t start, uint64_t len,
+ bool nowait, bool zero_pages)
{
struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
sector_t bs_mask = (bdev_logical_block_size(bdev) >> SECTOR_SHIFT) - 1;
@@ -799,6 +800,17 @@ static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
if (err)
return err;
+ if (zero_pages) {
+ err = blkdev_issue_zero_pages_bio(bdev, sector, nr_sects,
+ gfp, &prev,
+ BLKDEV_ZERO_PAGES_NOWAIT);
+ if (!prev)
+ return -EAGAIN;
+ if (err)
+ bic->res = err;
+ goto out_submit;
+ }
+
if (!limit)
return -EOPNOTSUPP;
/*
@@ -834,7 +846,7 @@ static int blkdev_cmd_write_zeroes(struct io_uring_cmd *cmd,
return -EAGAIN;
if (unlikely(nr_sects))
bic->res = -EAGAIN;
-
+out_submit:
prev->bi_private = cmd;
prev->bi_end_io = bio_cmd_bio_end_io;
submit_bio(prev);
@@ -921,7 +933,10 @@ int blkdev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return blkdev_cmd_discard(cmd, bdev, start, len, bic->nowait);
case BLOCK_URING_CMD_WRITE_ZEROES:
return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
- bic->nowait);
+ bic->nowait, false);
+ case BLOCK_URING_CMD_WRITE_ZERO_PAGE:
+ return blkdev_cmd_write_zeroes(cmd, bdev, start, len,
+ bic->nowait, true);
}
return -EINVAL;
}
diff --git a/include/uapi/linux/blkdev.h b/include/uapi/linux/blkdev.h
index b4664139a82a..880b292d2d0d 100644
--- a/include/uapi/linux/blkdev.h
+++ b/include/uapi/linux/blkdev.h
@@ -11,5 +11,6 @@
*/
#define BLOCK_URING_CMD_DISCARD _IO(0x12, 0)
#define BLOCK_URING_CMD_WRITE_ZEROES _IO(0x12, 1)
+#define BLOCK_URING_CMD_WRITE_ZERO_PAGE _IO(0x12, 2)
#endif
--
2.45.2
* Re: [PATCH v5 0/8] implement async block discards and other ops via io_uring
2024-09-11 16:34 [PATCH v5 0/8] implement async block discards and other ops via io_uring Pavel Begunkov
` (7 preceding siblings ...)
2024-09-11 16:34 ` [PATCH v5 8/8] block: implement write zero pages cmd Pavel Begunkov
@ 2024-09-11 20:56 ` Jens Axboe
8 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2024-09-11 20:56 UTC (permalink / raw)
To: io-uring, Pavel Begunkov; +Cc: linux-block, linux-mm, Christoph Hellwig
On Wed, 11 Sep 2024 17:34:36 +0100, Pavel Begunkov wrote:
> There is an interest in having asynchronous block operations like
> discard and write zeroes. The series implements that as io_uring commands,
> which is an io_uring request type allowing to implement custom file
> specific operations.
>
> First 4 are preparation patches. Patch 5 introduces the main chunk of
> cmd infrastructure and discard commands. Patches 6-8 implement
> write zeroes variants.
>
> [...]
Applied, thanks!
[1/8] io_uring/cmd: expose iowq to cmds
(no commit info)
[2/8] io_uring/cmd: give inline space in request to cmds
(no commit info)
[3/8] filemap: introduce filemap_invalidate_pages
(no commit info)
[4/8] block: introduce blk_validate_byte_range()
(no commit info)
[5/8] block: implement async io_uring discard cmd
(no commit info)
[6/8] block: implement write zeroes io_uring cmd
(no commit info)
[7/8] block: add nowait flag for __blkdev_issue_zero_pages
(no commit info)
[8/8] block: implement write zero pages cmd
(no commit info)
Best regards,
--
Jens Axboe