public inbox for [email protected]
* [PATCH v4 00/11] block atomic writes
@ 2024-02-19 13:00 John Garry
  2024-02-19 13:00 ` [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
                   ` (10 more replies)
  0 siblings, 11 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:00 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

This series proposes an implementation of atomic writes in the kernel
for torn-write protection.

This series takes the approach of adding a new "atomic" flag to each of
pwritev2() and iocb->ki_flags - RWF_ATOMIC and IOCB_ATOMIC, respectively.
When set, these indicate that we want the write issued "atomically".

Only direct IO to block devices is supported here. For this, atomic
write HW support is required, such as SCSI WRITE ATOMIC (16).

XFS FS support will require rework according to discussion at:
https://lore.kernel.org/linux-fsdevel/[email protected]/T/#m916df899e9d0fb688cdbd415826ae2423306c2e0

The current plan there is to use the forcealign feature from the start.
This will take some time to complete.

Updated man pages have been posted at:
https://lore.kernel.org/lkml/[email protected]/T/#m520dca97a9748de352b5a723d3155a4bb1e46456

The goal here is to provide an interface that allows applications to
use application-specific block sizes larger than the logical block size
reported by the storage device or larger than the filesystem block size
as reported by stat().

With this new interface, application blocks will never be torn or
fractured when written. In the event of a power failure, for each
individual application block, all or none of the data will be written.
A read racing with an atomic write will see either all of the old data
or all of the new data, but never a mix of old and new.

Three new fields are added to struct statx - atomic_write_unit_min,
atomic_write_unit_max, and atomic_write_segments_max. For each
individual atomic write, the total length of the write must be between
atomic_write_unit_min and atomic_write_unit_max, inclusive, and must be
a power-of-2. The write must also be at an offset in the file which is
naturally aligned with respect to the write length. For pwritev2(),
iovcnt is limited by atomic_write_segments_max.
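
As a rough userspace illustration (a sketch only, assuming headers
updated with the new statx fields and RWF_ATOMIC from this series, and
that buf satisfies the usual direct IO alignment rules):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>	/* statx(), struct statx */
#include <sys/uio.h>	/* pwritev2() */

/* Sketch only: error handling omitted */
static int atomic_write_example(const char *path, char *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct statx stx;
	int fd;

	fd = open(path, O_WRONLY | O_DIRECT);	/* bdev requires O_DIRECT */
	statx(fd, "", AT_EMPTY_PATH, STATX_WRITE_ATOMIC, &stx);

	/* Length must be a power-of-2 in [unit_min, unit_max]... */
	if (len < stx.stx_atomic_write_unit_min ||
	    len > stx.stx_atomic_write_unit_max ||
	    (len & (len - 1)))
		return -1;

	/* ...written at a naturally-aligned file offset (0 here) */
	return pwritev2(fd, &iov, 1, 0, RWF_ATOMIC);
}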

Kernel support is added for SCSI sd.c, scsi_debug, and NVMe.

This series is based on v6.8-rc5.

Patches can be found at:
https://github.com/johnpgarry/linux/commits/atomic-writes-v6.8-v4

Changes since v3:
- Condense and reorder patches, and also write proper commit messages
- Add block fops.c patch to change the blkdev_dio_unaligned() callsite
- Re-use block limits in nvme_valid_atomic_write()
- Disallow RWF_ATOMIC for reads
- Add HCH RB tag for blk_queue_get_max_sectors() patch and modify commit
  message for new position

Changes since v2:
- Support atomic_write_segments_max
- Limit atomic write parameters to max_hw_sectors_kb
- Don't increase fmode_t
- Change value for RWF_ATOMIC
- Various tidying (including advised by Jan)


Alan Adamson (2):
  nvme: Atomic write support
  nvme: Ensure atomic writes will be executed atomically

John Garry (6):
  block: Pass blk_queue_get_max_sectors() a request pointer
  block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  block: Add core atomic write support
  block: Add fops atomic write support
  scsi: sd: Atomic write support
  scsi: scsi_debug: Atomic write support

Prasad Singamsetty (3):
  fs: Initial atomic write support
  fs: Add initial atomic write support info to statx
  block: Add atomic write support for statx

 Documentation/ABI/stable/sysfs-block |  52 +++
 block/bdev.c                         |  37 +-
 block/blk-merge.c                    |  94 ++++-
 block/blk-mq.c                       |   2 +-
 block/blk-settings.c                 | 103 +++++
 block/blk-sysfs.c                    |  33 ++
 block/blk.h                          |   9 +-
 block/fops.c                         |  43 +-
 drivers/nvme/host/core.c             |  73 ++++
 drivers/scsi/scsi_debug.c            | 589 +++++++++++++++++++++------
 drivers/scsi/scsi_trace.c            |  22 +
 drivers/scsi/sd.c                    |  93 ++++-
 drivers/scsi/sd.h                    |   8 +
 fs/aio.c                             |   8 +-
 fs/btrfs/ioctl.c                     |   2 +-
 fs/read_write.c                      |   2 +-
 fs/stat.c                            |  47 ++-
 include/linux/blk_types.h            |   2 +
 include/linux/blkdev.h               |  65 ++-
 include/linux/fs.h                   |  39 +-
 include/linux/stat.h                 |   3 +
 include/scsi/scsi_proto.h            |   1 +
 include/trace/events/scsi.h          |   1 +
 include/uapi/linux/fs.h              |   5 +-
 include/uapi/linux/stat.h            |   9 +-
 io_uring/rw.c                        |   4 +-
 26 files changed, 1166 insertions(+), 180 deletions(-)

-- 
2.31.1



* [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
@ 2024-02-19 13:00 ` John Garry
  2024-02-19 18:57   ` Keith Busch
  2024-02-19 13:01 ` [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 51+ messages in thread
From: John Garry @ 2024-02-19 13:00 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

Currently blk_queue_get_max_sectors() is passed an enum req_op. In
future, the value returned from blk_queue_get_max_sectors() may depend
on certain request flags, so pass a request pointer instead.

Also use rq->cmd_flags instead of rq->bio->bi_opf when possible.

Signed-off-by: John Garry <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
 block/blk-merge.c | 3 ++-
 block/blk-mq.c    | 2 +-
 block/blk.h       | 6 ++++--
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2d470cf2173e..74e9e775f13d 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -592,7 +592,8 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	if (blk_rq_is_passthrough(rq))
 		return q->limits.max_hw_sectors;
 
-	max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	max_sectors = blk_queue_get_max_sectors(rq);
+
 	if (!q->limits.chunk_sectors ||
 	    req_op(rq) == REQ_OP_DISCARD ||
 	    req_op(rq) == REQ_OP_SECURE_ERASE)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2dc01551e27c..0855f75bcad7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3046,7 +3046,7 @@ void blk_mq_submit_bio(struct bio *bio)
 blk_status_t blk_insert_cloned_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	unsigned int max_sectors = blk_queue_get_max_sectors(rq);
 	unsigned int max_segments = blk_rq_get_max_segments(rq);
 	blk_status_t ret;
 
diff --git a/block/blk.h b/block/blk.h
index 1ef920f72e0f..050696131329 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -166,9 +166,11 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
 	return queue_max_segments(rq->q);
 }
 
-static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-						     enum req_op op)
+static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 {
+	struct request_queue *q = rq->q;
+	enum req_op op = req_op(rq);
+
 	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
-- 
2.31.1



* [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
  2024-02-19 13:00 ` [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 18:57   ` Keith Busch
  2024-02-20  6:54   ` Christoph Hellwig
  2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

blkdev_dio_unaligned() is called from __blkdev_direct_IO(),
__blkdev_direct_IO_simple(), and __blkdev_direct_IO_async(), and all these
are only called from blkdev_direct_IO().

Move the blkdev_dio_unaligned() call to the common callsite,
blkdev_direct_IO().

Signed-off-by: John Garry <[email protected]>
---
 block/fops.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/block/fops.c b/block/fops.c
index 0cf8cf72cdfa..28382b4d097a 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -53,9 +53,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	struct bio bio;
 	ssize_t ret;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (nr_pages <= DIO_INLINE_BIO_VECS)
 		vecs = inline_vecs;
 	else {
@@ -171,9 +168,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
 		opf |= REQ_ALLOC_CACHE;
 	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
@@ -310,9 +304,6 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
 		opf |= REQ_ALLOC_CACHE;
 	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
@@ -365,11 +356,16 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 
 static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+	loff_t pos = iocb->ki_pos;
 	unsigned int nr_pages;
 
 	if (!iov_iter_count(iter))
 		return 0;
 
+	if (blkdev_dio_unaligned(bdev, pos, iter))
+		return -EINVAL;
+
 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
 	if (likely(nr_pages <= BIO_MAX_VECS)) {
 		if (is_sync_kiocb(iocb))
-- 
2.31.1



* [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
  2024-02-19 13:00 ` [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
  2024-02-19 13:01 ` [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 19:16   ` David Sterba
                     ` (2 more replies)
  2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
                   ` (7 subsequent siblings)
  10 siblings, 3 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

An atomic write is a write issued with torn-write protection, meaning
that in the event of a power failure or any other hardware failure, all
or none of the data from the write will be stored, but never a mix of
old and new data.

Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
write is to be issued with torn-write prevention, according to special
alignment and length rules.

For any syscall interface utilizing struct iocb, add IOCB_ATOMIC to the
iocb->ki_flags field to indicate the same.

A call to statx will give the relevant atomic write info for a file:
- atomic_write_unit_min
- atomic_write_unit_max
- atomic_write_segments_max

Both min and max values must be a power-of-2.

Applications can avail of the atomic write feature by ensuring that the
total length of a write is a power-of-2 in size and also between
atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
must ensure that the write is at a naturally-aligned offset in the file
with respect to the total write length. For example, with unit_min=4096
and unit_max=65536, a 16384-byte write at file offset 32768 is
compliant, while the same write at offset 8192 is not, as that offset is
not naturally aligned. The value in atomic_write_segments_max indicates
the upper limit for IOV_ITER iovcnt.

Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
flag set will have RWF_ATOMIC rejected and not just ignored.

Add a type argument to kiocb_set_rw_flags() to allow reads which have
RWF_ATOMIC set to be rejected.

Helper function atomic_write_valid() can be used by FSes to verify
compliant writes.
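
As a hedged sketch of that usage (the examplefs_* names are hypothetical
helpers, not part of this series; a real FS would derive the limits from
its own geometry and the underlying device):

static ssize_t examplefs_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	/* Hypothetical helpers returning this file's atomic write limits */
	unsigned int unit_min = examplefs_aw_unit_min(iocb->ki_filp);
	unsigned int unit_max = examplefs_aw_unit_max(iocb->ki_filp);

	if ((iocb->ki_flags & IOCB_ATOMIC) &&
	    !atomic_write_valid(iocb->ki_pos, from, unit_min, unit_max))
		return -EINVAL;

	/* ... continue with the normal direct IO write path ... */
	return 0;
}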

Signed-off-by: Prasad Singamsetty <[email protected]>
#jpg: merge into single patch and much rewrite
Signed-off-by: John Garry <[email protected]>
---
 fs/aio.c                |  8 ++++----
 fs/btrfs/ioctl.c        |  2 +-
 fs/read_write.c         |  2 +-
 include/linux/fs.h      | 36 +++++++++++++++++++++++++++++++++++-
 include/uapi/linux/fs.h |  5 ++++-
 io_uring/rw.c           |  4 ++--
 6 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index bb2ff48991f3..21bcbc076fd0 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1502,7 +1502,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
 	iocb_put(iocb);
 }
 
-static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int type)
 {
 	int ret;
 
@@ -1528,7 +1528,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
 	} else
 		req->ki_ioprio = get_current_ioprio();
 
-	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, type);
 	if (unlikely(ret))
 		return ret;
 
@@ -1580,7 +1580,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb,
 	struct file *file;
 	int ret;
 
-	ret = aio_prep_rw(req, iocb);
+	ret = aio_prep_rw(req, iocb, READ);
 	if (ret)
 		return ret;
 	file = req->ki_filp;
@@ -1607,7 +1607,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
 	struct file *file;
 	int ret;
 
-	ret = aio_prep_rw(req, iocb);
+	ret = aio_prep_rw(req, iocb, WRITE);
 	if (ret)
 		return ret;
 	file = req->ki_filp;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index ac3316e0d11c..455f06d94b11 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4555,7 +4555,7 @@ static int btrfs_ioctl_encoded_write(struct file *file, void __user *argp, bool
 		goto out_iov;
 
 	init_sync_kiocb(&kiocb, file);
-	ret = kiocb_set_rw_flags(&kiocb, 0);
+	ret = kiocb_set_rw_flags(&kiocb, 0, WRITE);
 	if (ret)
 		goto out_iov;
 	kiocb.ki_pos = pos;
diff --git a/fs/read_write.c b/fs/read_write.c
index d4c036e82b6c..a7dc1819192d 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -730,7 +730,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter,
 	ssize_t ret;
 
 	init_sync_kiocb(&kiocb, filp);
-	ret = kiocb_set_rw_flags(&kiocb, flags);
+	ret = kiocb_set_rw_flags(&kiocb, flags, type);
 	if (ret)
 		return ret;
 	kiocb.ki_pos = (ppos ? *ppos : 0);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 023f37c60709..7271640fd600 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -43,6 +43,7 @@
 #include <linux/cred.h>
 #include <linux/mnt_idmapping.h>
 #include <linux/slab.h>
+#include <linux/uio.h>
 
 #include <asm/byteorder.h>
 #include <uapi/linux/fs.h>
@@ -119,6 +120,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
 #define FMODE_PWRITE		((__force fmode_t)0x10)
 /* File is opened for execution with sys_execve / sys_uselib */
 #define FMODE_EXEC		((__force fmode_t)0x20)
+
+/* File supports atomic writes */
+#define FMODE_CAN_ATOMIC_WRITE	((__force fmode_t)0x40)
+
 /* 32bit hashes as llseek() offset (for directories) */
 #define FMODE_32BITHASH         ((__force fmode_t)0x200)
 /* 64bit hashes as llseek() offset (for directories) */
@@ -328,6 +333,7 @@ enum rw_hint {
 #define IOCB_SYNC		(__force int) RWF_SYNC
 #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
 #define IOCB_APPEND		(__force int) RWF_APPEND
+#define IOCB_ATOMIC		(__force int) RWF_ATOMIC
 
 /* non-RWF related bits - start at 16 */
 #define IOCB_EVENTFD		(1 << 16)
@@ -3321,7 +3327,7 @@ static inline int iocb_flags(struct file *file)
 	return res;
 }
 
-static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
+static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags, int type)
 {
 	int kiocb_flags = 0;
 
@@ -3338,6 +3344,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
 			return -EOPNOTSUPP;
 		kiocb_flags |= IOCB_NOIO;
 	}
+	if (flags & RWF_ATOMIC) {
+		if (type == READ)
+			return -EOPNOTSUPP;
+		if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
+			return -EOPNOTSUPP;
+	}
 	kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
 	if (flags & RWF_SYNC)
 		kiocb_flags |= IOCB_DSYNC;
@@ -3523,4 +3535,26 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
 extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
 			   int advice);
 
+static inline bool atomic_write_valid(loff_t pos, struct iov_iter *iter,
+			   unsigned int unit_min, unsigned int unit_max)
+{
+	size_t len = iov_iter_count(iter);
+
+	if (!iter_is_ubuf(iter))
+		return false;
+
+	if (len == unit_min || len == unit_max) {
+		/* ok if exactly min or max */
+	} else if (len < unit_min || len > unit_max) {
+		return false;
+	} else if (!is_power_of_2(len)) {
+		return false;
+	}
+
+	if (pos & (len - 1))
+		return false;
+
+	return true;
+}
+
 #endif /* _LINUX_FS_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 48ad69f7722e..a0975ae81e64 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -301,9 +301,12 @@ typedef int __bitwise __kernel_rwf_t;
 /* per-IO O_APPEND */
 #define RWF_APPEND	((__force __kernel_rwf_t)0x00000010)
 
+/* Atomic Write */
+#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
+
 /* mask of flags supported by the kernel */
 #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
-			 RWF_APPEND)
+			 RWF_APPEND | RWF_ATOMIC)
 
 /* Pagemap ioctl */
 #define PAGEMAP_SCAN	_IOWR('f', 16, struct pm_scan_arg)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index d5e79d9bdc71..f8c022301cf4 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -719,7 +719,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
 	struct kiocb *kiocb = &rw->kiocb;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct file *file = req->file;
-	int ret;
+	int ret, type = (mode == FMODE_WRITE) ? WRITE : READ;
 
 	if (unlikely(!file || !(file->f_mode & mode)))
 		return -EBADF;
@@ -728,7 +728,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
 		req->flags |= io_file_get_flags(file);
 
 	kiocb->ki_flags = file->f_iocb_flags;
-	ret = kiocb_set_rw_flags(kiocb, rw->flags);
+	ret = kiocb_set_rw_flags(kiocb, rw->flags, type);
 	if (unlikely(ret))
 		return ret;
 	kiocb->ki_flags |= IOCB_ALLOC_CACHE;
-- 
2.31.1



* [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (2 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 22:28   ` Dave Chinner
                     ` (2 more replies)
  2024-02-19 13:01 ` [PATCH v4 05/11] block: Add core atomic write support John Garry
                   ` (6 subsequent siblings)
  10 siblings, 3 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

Extend the statx system call to return additional info for atomic write
support for a file.

Helper function generic_fill_statx_atomic_writes() can be used by FSes to
fill in the relevant statx fields.
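
A hedged sketch of an FS ->getattr handler using it (examplefs_* names
are hypothetical):

static int examplefs_getattr(struct mnt_idmap *idmap,
			     const struct path *path, struct kstat *stat,
			     u32 request_mask, unsigned int query_flags)
{
	struct inode *inode = d_inode(path->dentry);

	/* Report this file's atomic write limits when asked for them */
	if (request_mask & STATX_WRITE_ATOMIC)
		generic_fill_statx_atomic_writes(stat,
				examplefs_aw_unit_min(inode),
				examplefs_aw_unit_max(inode));

	generic_fillattr(idmap, request_mask, inode, stat);
	return 0;
}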

Signed-off-by: Prasad Singamsetty <[email protected]>
#jpg: relocate bdev support to another patch
Signed-off-by: John Garry <[email protected]>
---
 fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
 include/linux/fs.h        |  3 +++
 include/linux/stat.h      |  3 +++
 include/uapi/linux/stat.h |  9 ++++++++-
 4 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/fs/stat.c b/fs/stat.c
index 77cdc69eb422..522787a4ab6a 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
 }
 EXPORT_SYMBOL(generic_fill_statx_attr);
 
+/**
+ * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
+ * @stat:	Where to fill in the attribute flags
+ * @unit_min:	Minimum supported atomic write length
+ * @unit_max:	Maximum supported atomic write length
+ *
+ * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
+ * atomic write unit_min and unit_max values.
+ */
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max)
+{
+	/* Confirm that the request type is known */
+	stat->result_mask |= STATX_WRITE_ATOMIC;
+
+	/* Confirm that the file attribute type is known */
+	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
+
+	if (unit_min) {
+		stat->atomic_write_unit_min = unit_min;
+		stat->atomic_write_unit_max = unit_max;
+		/* Initially only allow 1x segment */
+		stat->atomic_write_segments_max = 1;
+
+		/* Confirm atomic writes are actually supported */
+		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
+	}
+}
+EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
+
 /**
  * vfs_getattr_nosec - getattr without security checks
  * @path: file to get attributes from
@@ -658,6 +689,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
 	tmp.stx_mnt_id = stat->mnt_id;
 	tmp.stx_dio_mem_align = stat->dio_mem_align;
 	tmp.stx_dio_offset_align = stat->dio_offset_align;
+	tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
+	tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
+	tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
 
 	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 7271640fd600..531140a7e27a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3167,6 +3167,9 @@ extern const struct inode_operations page_symlink_inode_operations;
 extern void kfree_link(void *);
 void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
 void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max);
 extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
 extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
 void __inode_add_bytes(struct inode *inode, loff_t bytes);
diff --git a/include/linux/stat.h b/include/linux/stat.h
index 52150570d37a..2c5e2b8c6559 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -53,6 +53,9 @@ struct kstat {
 	u32		dio_mem_align;
 	u32		dio_offset_align;
 	u64		change_cookie;
+	u32		atomic_write_unit_min;
+	u32		atomic_write_unit_max;
+	u32		atomic_write_segments_max;
 };
 
 /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
index 2f2ee82d5517..c0e8e10d1de6 100644
--- a/include/uapi/linux/stat.h
+++ b/include/uapi/linux/stat.h
@@ -127,7 +127,12 @@ struct statx {
 	__u32	stx_dio_mem_align;	/* Memory buffer alignment for direct I/O */
 	__u32	stx_dio_offset_align;	/* File offset alignment for direct I/O */
 	/* 0xa0 */
-	__u64	__spare3[12];	/* Spare space for future expansion */
+	__u32	stx_atomic_write_unit_min;
+	__u32	stx_atomic_write_unit_max;
+	__u32   stx_atomic_write_segments_max;
+	__u32   __spare1;
+	/* 0xb0 */
+	__u64	__spare3[10];	/* Spare space for future expansion */
 	/* 0x100 */
 };
 
@@ -155,6 +160,7 @@ struct statx {
 #define STATX_MNT_ID		0x00001000U	/* Got stx_mnt_id */
 #define STATX_DIOALIGN		0x00002000U	/* Want/got direct I/O alignment info */
 #define STATX_MNT_ID_UNIQUE	0x00004000U	/* Want/got extended stx_mount_id */
+#define STATX_WRITE_ATOMIC	0x00008000U	/* Want/got atomic_write_* fields */
 
 #define STATX__RESERVED		0x80000000U	/* Reserved for future struct statx expansion */
 
@@ -190,6 +196,7 @@ struct statx {
 #define STATX_ATTR_MOUNT_ROOT		0x00002000 /* Root of a mount */
 #define STATX_ATTR_VERITY		0x00100000 /* [I] Verity protected file */
 #define STATX_ATTR_DAX			0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC		0x00400000 /* File supports atomic write operations */
 
 
 #endif /* _UAPI_LINUX_STAT_H */
-- 
2.31.1



* [PATCH v4 05/11] block: Add core atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (3 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 22:58   ` Dave Chinner
  2024-02-25 12:09   ` Ritesh Harjani
  2024-02-19 13:01 ` [PATCH v4 06/11] block: Add atomic write support for statx John Garry
                   ` (5 subsequent siblings)
  10 siblings, 2 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

Add atomic write support as follows:
- report request_queue atomic write support limits to sysfs and update Doc
- add helper functions to get request_queue atomic write limits
- support to safely merge atomic writes
- add a per-request atomic write flag
- deal with splitting atomic writes
- misc helper functions

New sysfs files are added to report the following atomic write limits:
- atomic_write_boundary_bytes
- atomic_write_max_bytes
- atomic_write_unit_max_bytes
- atomic_write_unit_min_bytes

atomic_write_unit_{min,max}_bytes report the min and max supported
atomic write sizes, inclusive, and are primarily dictated by HW
capability. Both values must be a power-of-2.
atomic_write_boundary_bytes, if non-zero, indicates an LBA space
boundary which an atomic write must not straddle, as a write which
straddles it is no longer executed atomically by the disk.
atomic_write_max_bytes is the maximum merged size for an atomic write.
Often it will be the same value as atomic_write_unit_max_bytes.

atomic_write_unit_max_bytes is capped at the maximum data size which we
are guaranteed to be able to fit in a BIO, as an atomic write must always
be submitted as a single BIO. This BIO max size is dictated by the number
of segments which the request queue can support and the number of bvecs
a BIO can fit, BIO_MAX_VECS. Currently we rely on userspace issuing a
write with iovcnt=1 for IOV_ITER - as such, we can rely on each segment
containing PAGE_SIZE of data, apart from the first and last segments,
which may each hold as little as a logical block size of data. Note that
here we rely on the direct IO rule for alignment, that each iovec must be
aligned to the logical block size and be a multiple of it in length.
Atomic writes may be supported for buffered IO in future, but it would
still make sense to apply that direct IO rule there.
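
For example, with max_segments = 128, a 512B logical block size, and
4KiB pages, the guaranteed fit is 2 * 512B + 126 * 4KiB = 517120B, which
blk_queue_max_guaranteed_bio_sectors() in this patch rounds down to a
power-of-2 of 256KiB.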

atomic_write_max_sectors is capped at max_hw_sectors, but is not also
capped at max_sectors. The value in max_sectors can be controlled from
userspace, and it would only cause trouble if userspace could indirectly
limit atomic_write_unit_max_bytes and the other atomic write limits.

Atomic writes may be merged under the following conditions:
- total request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary, if any
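
For example, with atomic_write_boundary_bytes = 64KiB, a merge which
would produce a 16KiB write spanning LBA byte offsets 56KiB to 72KiB
straddles the boundary at 64KiB, so that merge is not permitted.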

It is only permissible to merge an atomic write with another atomic
write, i.e. it is not possible to merge an atomic and a non-atomic
write. There are many reasons for this, like:
- SCSI has a dedicated atomic write command, so a merged atomic and
  non-atomic write would need to be issued as an atomic write, putting
  an unnecessary burden on the disk to issue the merged write atomically
- The dimensions of the merged non-atomic write would need to be checked
  for size/offset conformance to the atomic write rules, which adds
  overhead
- Typically only atomic writes or only non-atomic writes are expected
  for a file during normal processing, so there is no expected use-case
  to cater for.

Functions get_max_io_size() and blk_queue_get_max_sectors() are modified to
handle atomic writes max length - those functions are used by the merge
code.

An atomic write cannot be split under any circumstances; if an atomic
write would need to be split, we reject the IO. A split would most
likely be required because either:
- the reported atomic_write_unit_max_bytes is incorrect, or
- whoever submitted the atomic write BIO did not properly adhere to the
  request_queue limits.

All atomic write limits are set to 0 by default, indicating no atomic
write support. Even though Linux assumes that a logical block can always
be written atomically, we ignore this as it is not of particular
interest. Stacked devices are not supported for now either.

Flag REQ_ATOMIC is used for indicating an atomic write.

Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. It ensures that if the bdev is a
partition, the partition is properly aligned with
atomic_write_unit_min_sectors and atomic_write_hw_boundary_sectors.
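
For example, with atomic_write_unit_min_sectors = 8 and
atomic_write_hw_boundary_sectors = 16, a partition starting at sector 24
is misaligned, as 24 is not a multiple of max(8, 16) = 16, so atomic
writes would not be allowed for it.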

Contains significant contributions from:
Himanshu Madhani <[email protected]>

Signed-off-by: John Garry <[email protected]>
---
 Documentation/ABI/stable/sysfs-block |  52 ++++++++++++++
 block/blk-merge.c                    |  91 ++++++++++++++++++++++-
 block/blk-settings.c                 | 103 +++++++++++++++++++++++++++
 block/blk-sysfs.c                    |  33 +++++++++
 block/blk.h                          |   3 +
 include/linux/blk_types.h            |   2 +
 include/linux/blkdev.h               |  60 ++++++++++++++++
 7 files changed, 343 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 1fe9a553c37b..4c775f4bdefe 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -21,6 +21,58 @@ Description:
 		device is offset from the internal allocation unit's
 		natural alignment.
 
+What:		/sys/block/<disk>/atomic_write_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the maximum atomic write
+		size reported by the device. This parameter is relevant
+		for merging of writes, where a merged atomic write
+		operation must not exceed this number of bytes.
+		This parameter may be greater than the value in
+		atomic_write_unit_max_bytes as
+		atomic_write_unit_max_bytes will be rounded down to a
+		power-of-two and atomic_write_unit_max_bytes may also be
+		limited by some other queue limits, such as max_segments.
+		This parameter - along with atomic_write_unit_min_bytes
+		and atomic_write_unit_max_bytes - will not be larger than
+		max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_min_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the smallest block which can
+		be written atomically with an atomic write operation. All
+		atomic write operations must begin at an
+		atomic_write_unit_min boundary and must be multiples of
+		atomic_write_unit_min. This value must be a power-of-two.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter defines the largest block which can be
+		written atomically with an atomic write operation. This
+		value must be a multiple of atomic_write_unit_min and must
+		be a power-of-two. This value will not be larger than
+		atomic_write_max_bytes.
+
+
+What:		/sys/block/<disk>/atomic_write_boundary_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] A device may need to internally split I/Os which
+		straddle a given logical block address boundary. In that
+		case a single atomic write operation will be processed as
+		one or more sub-operations which each complete atomically.
+		This parameter specifies the size in bytes of the atomic
+		boundary if one is reported by the device. This value must
+		be a power-of-two.
+
 
 What:		/sys/block/<disk>/diskseq
 Date:		February 2021
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 74e9e775f13d..12a75a252ca2 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -18,6 +18,42 @@
 #include "blk-rq-qos.h"
 #include "blk-throttle.h"
 
+static bool rq_straddles_atomic_write_boundary(struct request *rq,
+					unsigned int front,
+					unsigned int back)
+{
+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
+	unsigned int mask, imask;
+	loff_t start, end;
+
+	if (!boundary)
+		return false;
+
+	start = rq->__sector << SECTOR_SHIFT;
+	end = start + rq->__data_len;
+
+	start -= front;
+	end += back;
+
+	/* We're longer than the boundary, so must be crossing it */
+	if (end - start > boundary)
+		return true;
+
+	mask = boundary - 1;
+
+	/* start/end are boundary-aligned, so cannot be crossing */
+	if (!(start & mask) || !(end & mask))
+		return false;
+
+	imask = ~mask;
+
+	/* Top bits are different, so crossed a boundary */
+	if ((start & imask) != (end & imask))
+		return true;
+
+	return false;
+}
+
 static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
 {
 	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
@@ -167,7 +203,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
-	unsigned max_sectors = lim->max_sectors, start, end;
+	unsigned max_sectors, start, end;
+
+	/*
+	 * We ignore lim->max_sectors for atomic writes simply because
+	 * it may be less than the bio size, which we cannot tolerate.
+	 */
+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
 
 	if (lim->chunk_sectors) {
 		max_sectors = min(max_sectors,
@@ -305,6 +350,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	*segs = nsegs;
 	return NULL;
 split:
+	if (bio->bi_opf & REQ_ATOMIC) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return ERR_PTR(-EINVAL);
+	}
 	/*
 	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
 	 * with EAGAIN if splitting is required and return an error pointer.
@@ -645,6 +695,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, bio->bi_iter.bi_size)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -664,6 +721,13 @@ static int ll_front_merge_fn(struct request *req, struct bio *bio,
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				bio->bi_iter.bi_size, 0)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -700,6 +764,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
 		return 0;
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, blk_rq_bytes(next))) {
+			return 0;
+		}
+	}
+
 	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
 	if (total_phys_segments > blk_rq_get_max_segments(req))
 		return 0;
@@ -795,6 +866,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+					      struct bio *bio)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+					   struct request *next)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.
@@ -814,6 +897,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!blk_atomic_write_mergeable_rqs(req, next))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -941,6 +1027,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+		return false;
+
 	return true;
 }
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 06ea91e51b8b..176f26374abc 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -59,6 +59,13 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = false;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->atomic_write_hw_max_sectors = 0;
+	lim->atomic_write_max_sectors = 0;
+	lim->atomic_write_hw_boundary_sectors = 0;
+	lim->atomic_write_hw_unit_min_sectors = 0;
+	lim->atomic_write_unit_min_sectors = 0;
+	lim->atomic_write_hw_unit_max_sectors = 0;
+	lim->atomic_write_unit_max_sectors = 0;
 }
 
 /**
@@ -101,6 +108,44 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+
+/*
+ * Returns max guaranteed sectors which we can fit in a bio. For convenience of
+ * users, rounddown_pow_of_two() the return value.
+ *
+ * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
+ * from first and last segments.
+ */
+static unsigned int blk_queue_max_guaranteed_bio_sectors(
+					struct queue_limits *limits,
+					struct request_queue *q)
+{
+	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
+	unsigned int length;
+
+	length = min(max_segments, 2) * queue_logical_block_size(q);
+	if (max_segments > 2)
+		length += (max_segments - 2) * PAGE_SIZE;
+
+	return rounddown_pow_of_two(length >> SECTOR_SHIFT);
+}
+
+static void blk_atomic_writes_update_limits(struct request_queue *q)
+{
+	struct queue_limits *limits = &q->limits;
+	unsigned int max_hw_sectors =
+		rounddown_pow_of_two(limits->max_hw_sectors);
+	unsigned int unit_limit = min(max_hw_sectors,
+		blk_queue_max_guaranteed_bio_sectors(limits, q));
+
+	limits->atomic_write_max_sectors =
+		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
+	limits->atomic_write_unit_min_sectors =
+		min(limits->atomic_write_hw_unit_min_sectors, unit_limit);
+	limits->atomic_write_unit_max_sectors =
+		min(limits->atomic_write_hw_unit_max_sectors, unit_limit);
+}
+
 /**
  * blk_queue_max_hw_sectors - set max sectors for a request for this queue
  * @q:  the request queue for the device
@@ -145,6 +190,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 				 limits->logical_block_size >> SECTOR_SHIFT);
 	limits->max_sectors = max_sectors;
 
+	blk_atomic_writes_update_limits(q);
+
 	if (!q->disk)
 		return;
 	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
@@ -182,6 +229,62 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
+/**
+ * blk_queue_atomic_write_max_bytes - set max bytes supported by
+ * the device for atomic write operations.
+ * @q:  the request queue for the device
+ * @bytes: maximum bytes supported
+ */
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				      unsigned int bytes)
+{
+	q->limits.atomic_write_hw_max_sectors = bytes >> SECTOR_SHIFT;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);
+
+/**
+ * blk_queue_atomic_write_boundary_bytes - Device's logical block address space
+ * which an atomic write should not cross.
+ * @q:  the request queue for the device
+ * @bytes: must be a power-of-two.
+ */
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+					   unsigned int bytes)
+{
+	q->limits.atomic_write_hw_boundary_sectors = bytes >> SECTOR_SHIFT;
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_boundary_bytes);
+
+/**
+ * blk_queue_atomic_write_unit_min_sectors - smallest unit that can be written
+ * atomically to the device.
+ * @q:  the request queue for the device
+ * @sectors: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
+					     unsigned int sectors)
+{
+
+	q->limits.atomic_write_hw_unit_min_sectors = sectors;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_min_sectors);
+
+/*
+ * blk_queue_atomic_write_unit_max_sectors - largest unit that can be written
+ * atomically to the device.
+ * @q: the request queue for the device
+ * @sectors: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
+					     unsigned int sectors)
+{
+	q->limits.atomic_write_hw_unit_max_sectors = sectors;
+	blk_atomic_writes_update_limits(q);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_max_sectors);
+
 /**
  * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
  * @q:  the request queue for the device
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6b2429cad81a..3978f14f9769 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);
@@ -502,6 +526,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -629,6 +658,10 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_atomic_write_max_bytes_entry.attr,
+	&queue_atomic_write_boundary_entry.attr,
+	&queue_atomic_write_unit_min_entry.attr,
+	&queue_atomic_write_unit_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/blk.h b/block/blk.h
index 050696131329..6ba8333fcf26 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -178,6 +178,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (rq->cmd_flags & REQ_ATOMIC)
+		return q->limits.atomic_write_max_sectors;
+
 	return q->limits.max_sectors;
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f288c94374b3..cd7cceb8565d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -422,6 +422,7 @@ enum req_flag_bits {
 	__REQ_DRV,		/* for driver use */
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 
+	__REQ_ATOMIC,		/* for atomic write operations */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -448,6 +449,7 @@ enum req_flag_bits {
 #define REQ_RAHEAD	(__force blk_opf_t)(1ULL << __REQ_RAHEAD)
 #define REQ_BACKGROUND	(__force blk_opf_t)(1ULL << __REQ_BACKGROUND)
 #define REQ_NOWAIT	(__force blk_opf_t)(1ULL << __REQ_NOWAIT)
+#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_POLLED	(__force blk_opf_t)(1ULL << __REQ_POLLED)
 #define REQ_ALLOC_CACHE	(__force blk_opf_t)(1ULL << __REQ_ALLOC_CACHE)
 #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 99e4f5e72213..40ed56ef4937 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -299,6 +299,14 @@ struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	unsigned int		atomic_write_hw_max_sectors;
+	unsigned int		atomic_write_max_sectors;
+	unsigned int		atomic_write_hw_boundary_sectors;
+	unsigned int		atomic_write_hw_unit_min_sectors;
+	unsigned int		atomic_write_unit_min_sectors;
+	unsigned int		atomic_write_hw_unit_max_sectors;
+	unsigned int		atomic_write_unit_max_sectors;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -885,6 +893,14 @@ void blk_queue_zone_write_granularity(struct request_queue *q,
 				      unsigned int size);
 extern void blk_queue_alignment_offset(struct request_queue *q,
 				       unsigned int alignment);
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				unsigned int bytes);
+void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
+				unsigned int sectors);
+void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
+				unsigned int sectors);
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+				unsigned int bytes);
 void disk_update_readahead(struct gendisk *disk);
 extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
 extern void blk_queue_io_min(struct request_queue *q, unsigned int min);
@@ -1291,6 +1307,30 @@ static inline int queue_dma_alignment(const struct request_queue *q)
 	return q ? q->limits.dma_alignment : 511;
 }
 
+static inline unsigned int
+queue_atomic_write_unit_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_max_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_unit_min_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_min_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_boundary_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_hw_boundary_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
+}
+
 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
 {
 	return queue_dma_alignment(bdev_get_queue(bdev));
@@ -1540,6 +1580,26 @@ struct io_comp_batch {
 	void (*complete)(struct io_comp_batch *);
 };
 
+static inline bool bdev_can_atomic_write(struct block_device *bdev)
+{
+	struct request_queue *bd_queue = bdev->bd_queue;
+	struct queue_limits *limits = &bd_queue->limits;
+
+	if (!limits->atomic_write_unit_min_sectors)
+		return false;
+
+	if (bdev_is_partition(bdev)) {
+		sector_t bd_start_sect = bdev->bd_start_sect;
+		unsigned int granularity = max(
+				limits->atomic_write_unit_min_sectors,
+				limits->atomic_write_hw_boundary_sectors);
+		if (do_div(bd_start_sect, granularity))
+			return false;
+	}
+
+	return true;
+}
+
 #define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
 
 #endif /* _LINUX_BLKDEV_H */
-- 
2.31.1



* [PATCH v4 06/11] block: Add atomic write support for statx
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (4 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 05/11] block: Add core atomic write support John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-20  8:29   ` Christoph Hellwig
  2024-02-25 14:20   ` Ritesh Harjani
  2024-02-19 13:01 ` [PATCH v4 07/11] block: Add fops atomic write support John Garry
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

Extend the statx system call to return additional info for atomic write
support if the specified file is a block device.

Signed-off-by: Prasad Singamsetty <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
 block/bdev.c           | 37 +++++++++++++++++++++++++++----------
 fs/stat.c              | 13 ++++++-------
 include/linux/blkdev.h |  5 +++--
 3 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/block/bdev.c b/block/bdev.c
index e9f1b12bd75c..0dada9902bd4 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1116,24 +1116,41 @@ void sync_bdevs(bool wait)
 	iput(old_inode);
 }
 
+#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
+
 /*
- * Handle STATX_DIOALIGN for block devices.
- *
- * Note that the inode passed to this is the inode of a block device node file,
- * not the block device's internal inode.  Therefore it is *not* valid to use
- * I_BDEV() here; the block device has to be looked up by i_rdev instead.
+ * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
  */
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask)
 {
 	struct block_device *bdev;
 
-	bdev = blkdev_get_no_open(inode->i_rdev);
+	if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
+		return;
+
+	/*
+	 * Note that d_backing_inode() returns the inode of a block device node
+	 * file, not the block device's internal inode.  Therefore it is *not*
+	 * valid to use I_BDEV() here; the block device has to be looked up by
+	 * i_rdev instead.
+	 */
+	bdev = blkdev_get_no_open(d_backing_inode(dentry)->i_rdev);
 	if (!bdev)
 		return;
 
-	stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
-	stat->dio_offset_align = bdev_logical_block_size(bdev);
-	stat->result_mask |= STATX_DIOALIGN;
+	if (request_mask & STATX_DIOALIGN) {
+		stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
+		stat->dio_offset_align = bdev_logical_block_size(bdev);
+		stat->result_mask |= STATX_DIOALIGN;
+	}
+
+	if (request_mask & STATX_WRITE_ATOMIC && bdev_can_atomic_write(bdev)) {
+		struct request_queue *bd_queue = bdev->bd_queue;
+
+		generic_fill_statx_atomic_writes(stat,
+			queue_atomic_write_unit_min_bytes(bd_queue),
+			queue_atomic_write_unit_max_bytes(bd_queue));
+	}
 
 	blkdev_put_no_open(bdev);
 }
diff --git a/fs/stat.c b/fs/stat.c
index 522787a4ab6a..bd0618477702 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -290,13 +290,12 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
 		stat->attributes |= STATX_ATTR_MOUNT_ROOT;
 	stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;
 
-	/* Handle STATX_DIOALIGN for block devices. */
-	if (request_mask & STATX_DIOALIGN) {
-		struct inode *inode = d_backing_inode(path.dentry);
-
-		if (S_ISBLK(inode->i_mode))
-			bdev_statx_dioalign(inode, stat);
-	}
+	/* If this is a block device inode, override the filesystem
+	 * attributes with the block device specific parameters
+	 * that need to be obtained from the bdev backing inode
+	 */
+	if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
+		bdev_statx(path.dentry, stat, request_mask);
 
 	path_put(&path);
 	if (retry_estale(error, lookup_flags)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 40ed56ef4937..4f04456f1250 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1541,7 +1541,7 @@ int sync_blockdev(struct block_device *bdev);
 int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
 int sync_blockdev_nowait(struct block_device *bdev);
 void sync_bdevs(bool wait);
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask);
 void printk_all_partitions(void);
 int __init early_lookup_bdev(const char *pathname, dev_t *dev);
 #else
@@ -1559,7 +1559,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
 static inline void sync_bdevs(bool wait)
 {
 }
-static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+static inline void bdev_statx(struct dentry *dentry, struct kstat *stat,
+				u32 request_mask)
 {
 }
 static inline void printk_all_partitions(void)
-- 
2.31.1



* [PATCH v4 07/11] block: Add fops atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (5 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 06/11] block: Add atomic write support for statx John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-25 14:46   ` Ritesh Harjani
  2024-02-19 13:01 ` [PATCH v4 08/11] scsi: sd: Atomic " John Garry
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.

It must be ensured that the atomic write adheres to its rules, such as
a naturally-aligned offset, so call blkdev_dio_invalid() ->
blkdev_atomic_write_valid() [renaming blkdev_dio_unaligned() to
blkdev_dio_invalid()] for this purpose.

In blkdev_direct_IO(), if nr_pages exceeds BIO_MAX_VECS, then we cannot
produce a single BIO, so return an error in this case.

Finally, set FMODE_CAN_ATOMIC_WRITE when the bdev supports atomic writes
and the file is opened with O_DIRECT.
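
Illustrative userspace usage (a sketch only; the device path is
arbitrary):

/*
 * Atomic writes on a bdev require O_DIRECT at open time, otherwise
 * FMODE_CAN_ATOMIC_WRITE is not set and RWF_ATOMIC writes are rejected.
 */
int fd = open("/dev/sda", O_WRONLY | O_DIRECT);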

Signed-off-by: John Garry <[email protected]>
---
 block/fops.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/block/fops.c b/block/fops.c
index 28382b4d097a..563189c2fc5a 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
 	return opf;
 }
 
-static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
-			      struct iov_iter *iter)
+static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
+				      struct iov_iter *iter)
 {
+	struct request_queue *q = bdev_get_queue(bdev);
+	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
+	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
+
+	return atomic_write_valid(pos, iter, min_bytes, max_bytes);
+}
+
+static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+				struct iov_iter *iter, bool atomic_write)
+{
+	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
+		return true;
+
 	return pos & (bdev_logical_block_size(bdev) - 1) ||
 		!bdev_iter_is_aligned(bdev, iter);
 }
 
 #define DIO_INLINE_BIO_VECS 4
 
 static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
@@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	}
 	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
 	bio.bi_ioprio = iocb->ki_ioprio;
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio.bi_opf |= REQ_ATOMIC;
 
 	ret = bio_iov_iter_get_pages(&bio, iter);
 	if (unlikely(ret))
@@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 		task_io_account_write(bio->bi_iter.bi_size);
 	}
 
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio->bi_opf |= REQ_ATOMIC;
+
 	if (iocb->ki_flags & IOCB_NOWAIT)
 		bio->bi_opf |= REQ_NOWAIT;
 
@@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;
 	loff_t pos = iocb->ki_pos;
 	unsigned int nr_pages;
 
 	if (!iov_iter_count(iter))
 		return 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
+	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
 		return -EINVAL;
 
 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
@@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 		if (is_sync_kiocb(iocb))
 			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
 		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
+	} else if (atomic_write) {
+		return -EINVAL;
 	}
 	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
 }
@@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
 	if (bdev_nowait(handle->bdev))
 		filp->f_mode |= FMODE_NOWAIT;
 
+	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
+		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
 	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
 	filp->private_data = handle;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v4 08/11] scsi: sd: Atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (6 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 07/11] block: Add fops atomic write support John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 13:01 ` [PATCH v4 09/11] scsi: scsi_debug: " John Garry
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

Support is divided into two main areas:
- reading VPD pages and setting sdev request_queue limits
- supporting the WRITE ATOMIC (16) command and tracing

The relevant block limits VPD page needs to be read to allow the block
layer request_queue atomic write limits to be set. These VPD page limits are
described in sbc4r22 section 6.6.4 - Block limits VPD page.

There are five limits of interest:
- MAXIMUM ATOMIC TRANSFER LENGTH
- ATOMIC ALIGNMENT
- ATOMIC TRANSFER LENGTH GRANULARITY
- MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY
- MAXIMUM ATOMIC BOUNDARY SIZE

MAXIMUM ATOMIC TRANSFER LENGTH is the maximum length for a WRITE ATOMIC
(16) command. It will not be greater than the device MAXIMUM TRANSFER
LENGTH.

ATOMIC ALIGNMENT and ATOMIC TRANSFER LENGTH GRANULARITY are the minimum
alignment and length values for an atomic write in terms of logical blocks.

Unlike NVMe, SCSI does not specify an LBA space boundary, but does specify
a per-IO boundary granularity. The maximum boundary size is specified in
MAXIMUM ATOMIC BOUNDARY SIZE. When used, this boundary value is set in the
WRITE ATOMIC (16) ATOMIC BOUNDARY field - layout for the WRITE_ATOMIC_16
command can be found in sbc4r22 section 5.48. This boundary value is the
granularity size at which the device may atomically write the data. A value
of zero in WRITE ATOMIC (16) ATOMIC BOUNDARY field means that all data must
be atomically written together.
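
As a worked example of the boundary semantics (figures here are
illustrative): if the ATOMIC BOUNDARY field is set to 16 and a 64-block
write is issued, then only each 16-block, boundary-aligned piece of the
transfer is guaranteed to be written atomically - the four pieces may be
torn with respect to one another. A boundary value of zero would instead
guarantee the whole 64-block transfer as a single atomic unit.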

MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY is the maximum atomic write
length if a non-zero boundary value is set.

For atomic write support, the WRITE ATOMIC (16) boundary is not of much
interest, as the block layer expects each request submitted to be executed
atomically. However, the SCSI spec does leave itself open to a quirky
scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero, yet MAXIMUM ATOMIC
TRANSFER LENGTH WITH BOUNDARY and MAXIMUM ATOMIC BOUNDARY SIZE are both
non-zero. This case will be supported.

To set the block layer request_queue atomic write capabilities, sanitize
the VPD page limits and set limits as follows:
- atomic_write_unit_min is derived from the granularity and alignment
  values. If no granularity value is set, use the physical block size
- atomic_write_unit_max is derived from MAXIMUM ATOMIC TRANSFER LENGTH
  (see the worked example after this list). In the scenario where MAXIMUM
  ATOMIC TRANSFER LENGTH is zero and the boundary limits are non-zero, use
  MAXIMUM ATOMIC BOUNDARY SIZE for atomic_write_unit_max. New flag
  scsi_disk.use_atomic_write_boundary is set for this scenario.
- atomic_write_boundary_bytes is always set to zero
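
As a worked example of these derivations (figures are illustrative, not
from any particular device): a device reporting MAXIMUM ATOMIC TRANSFER
LENGTH of 384 blocks with a 512B logical block size would get
atomic_write_max_bytes = 384 * 512 = 196608, while atomic_write_unit_max
would be rounddown_pow_of_two(384) = 256 blocks, i.e. 131072 bytes.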

SCSI also supports a WRITE ATOMIC (32) command, which is for devices with
type 2 protection enabled. This will not be supported for now, so check
for T10_PI_TYPE2_PROTECTION when setting any request_queue limits.

To handle an atomic write request, add support for the WRITE ATOMIC (16)
command in the handler sd_setup_atomic_cmnd(). The use_atomic_write_boundary
flag is checked here when encoding the ATOMIC BOUNDARY field.

Trace info is also added for WRITE_ATOMIC_16 command.

Signed-off-by: John Garry <[email protected]>
---
 drivers/scsi/scsi_trace.c   | 22 +++++++++
 drivers/scsi/sd.c           | 93 ++++++++++++++++++++++++++++++++++++-
 drivers/scsi/sd.h           |  8 ++++
 include/scsi/scsi_proto.h   |  1 +
 include/trace/events/scsi.h |  1 +
 5 files changed, 124 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 41a950075913..3e47c4472a80 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -325,6 +325,26 @@ scsi_trace_zbc_out(struct trace_seq *p, unsigned char *cdb, int len)
 	return ret;
 }
 
+static const char *
+scsi_trace_atomic_write16_out(struct trace_seq *p, unsigned char *cdb, int len)
+{
+	const char *ret = trace_seq_buffer_ptr(p);
+	unsigned int boundary_size;
+	unsigned int nr_blocks;
+	sector_t lba;
+
+	lba = get_unaligned_be64(&cdb[2]);
+	boundary_size = get_unaligned_be16(&cdb[10]);
+	nr_blocks = get_unaligned_be16(&cdb[12]);
+
+	trace_seq_printf(p, "lba=%llu txlen=%u boundary_size=%u",
+			  lba, nr_blocks, boundary_size);
+
+	trace_seq_putc(p, 0);
+
+	return ret;
+}
+
 static const char *
 scsi_trace_varlen(struct trace_seq *p, unsigned char *cdb, int len)
 {
@@ -385,6 +405,8 @@ scsi_trace_parse_cdb(struct trace_seq *p, unsigned char *cdb, int len)
 		return scsi_trace_zbc_in(p, cdb, len);
 	case ZBC_OUT:
 		return scsi_trace_zbc_out(p, cdb, len);
+	case WRITE_ATOMIC_16:
+		return scsi_trace_atomic_write16_out(p, cdb, len);
 	default:
 		return scsi_trace_misc(p, cdb, len);
 	}
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 0833b3e6aa6e..7df05d796387 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -916,6 +916,65 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd)
 	return scsi_alloc_sgtables(cmd);
 }
 
+static void sd_config_atomic(struct scsi_disk *sdkp)
+{
+	unsigned int logical_block_size = sdkp->device->sector_size,
+		physical_block_size_sectors, max_atomic, unit_min, unit_max;
+	struct request_queue *q = sdkp->disk->queue;
+
+	if ((!sdkp->max_atomic && !sdkp->max_atomic_with_boundary) ||
+	    sdkp->protection_type == T10_PI_TYPE2_PROTECTION)
+		return;
+
+	physical_block_size_sectors = sdkp->physical_block_size /
+					sdkp->device->sector_size;
+
+	unit_min = rounddown_pow_of_two(sdkp->atomic_granularity ?
+					sdkp->atomic_granularity :
+					physical_block_size_sectors);
+
+	/*
+	 * Only use atomic boundary when we have the odd scenario of
+	 * sdkp->max_atomic == 0, which the spec does permit.
+	 */
+	if (sdkp->max_atomic) {
+		max_atomic = sdkp->max_atomic;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic);
+		sdkp->use_atomic_write_boundary = 0;
+	} else {
+		max_atomic = sdkp->max_atomic_with_boundary;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic_boundary);
+		sdkp->use_atomic_write_boundary = 1;
+	}
+
+	/*
+	 * Ensure compliance with granularity and alignment. For now, keep it
+	 * simple and just don't support atomic writes for values mismatched
+	 * with max_atomic / max_atomic_with_boundary, physical block size, and
+	 * atomic_granularity itself.
+	 *
+	 * We're really being distrustful by checking unit_max also...
+	 */
+	if (sdkp->atomic_granularity > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_granularity)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_granularity)
+			return;
+	}
+
+	if (sdkp->atomic_alignment > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_alignment)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_alignment)
+			return;
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_atomic * logical_block_size);
+	blk_queue_atomic_write_unit_min_sectors(q, unit_min);
+	blk_queue_atomic_write_unit_max_sectors(q, unit_max);
+	blk_queue_atomic_write_boundary_bytes(q, 0);
+}
+
 static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
 		bool unmap)
 {
@@ -1181,6 +1240,26 @@ static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd)
 	return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1;
 }
 
+static blk_status_t sd_setup_atomic_cmnd(struct scsi_cmnd *cmd,
+					sector_t lba, unsigned int nr_blocks,
+					bool boundary, unsigned char flags)
+{
+	cmd->cmd_len  = 16;
+	cmd->cmnd[0]  = WRITE_ATOMIC_16;
+	cmd->cmnd[1]  = flags;
+	put_unaligned_be64(lba, &cmd->cmnd[2]);
+	put_unaligned_be16(nr_blocks, &cmd->cmnd[12]);
+	if (boundary)
+		put_unaligned_be16(nr_blocks, &cmd->cmnd[10]);
+	else
+		put_unaligned_be16(0, &cmd->cmnd[10]);
+	cmd->cmnd[14] = 0;
+	cmd->cmnd[15] = 0;
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 {
 	struct request *rq = scsi_cmd_to_rq(cmd);
@@ -1252,6 +1331,10 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
 		ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
+	} else if (rq->cmd_flags & REQ_ATOMIC && write) {
+		ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+				sdkp->use_atomic_write_boundary,
+				protect | fua);
 	} else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) {
 		ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
@@ -3071,7 +3154,7 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 		sdkp->max_ws_blocks = (u32)get_unaligned_be64(&vpd->data[36]);
 
 		if (!sdkp->lbpme)
-			goto out;
+			goto read_atomics;
 
 		lba_count = get_unaligned_be32(&vpd->data[20]);
 		desc_count = get_unaligned_be32(&vpd->data[24]);
@@ -3102,6 +3185,14 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 			else
 				sd_config_discard(sdkp, SD_LBP_DISABLE);
 		}
+read_atomics:
+		sdkp->max_atomic = get_unaligned_be32(&vpd->data[44]);
+		sdkp->atomic_alignment = get_unaligned_be32(&vpd->data[48]);
+		sdkp->atomic_granularity = get_unaligned_be32(&vpd->data[52]);
+		sdkp->max_atomic_with_boundary = get_unaligned_be32(&vpd->data[56]);
+		sdkp->max_atomic_boundary = get_unaligned_be32(&vpd->data[60]);
+
+		sd_config_atomic(sdkp);
 	}
 
  out:
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 409dda5350d1..990188a56b51 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -121,6 +121,13 @@ struct scsi_disk {
 	u32		max_unmap_blocks;
 	u32		unmap_granularity;
 	u32		unmap_alignment;
+
+	u32		max_atomic;
+	u32		atomic_alignment;
+	u32		atomic_granularity;
+	u32		max_atomic_with_boundary;
+	u32		max_atomic_boundary;
+
 	u32		index;
 	unsigned int	physical_block_size;
 	unsigned int	max_medium_access_timeouts;
@@ -151,6 +158,7 @@ struct scsi_disk {
 	unsigned	urswrz : 1;
 	unsigned	security : 1;
 	unsigned	ignore_medium_access_errors : 1;
+	unsigned	use_atomic_write_boundary : 1;
 };
 #define to_scsi_disk(obj) container_of(obj, struct scsi_disk, disk_dev)
 
diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
index 07d65c1f59db..833de67305b5 100644
--- a/include/scsi/scsi_proto.h
+++ b/include/scsi/scsi_proto.h
@@ -119,6 +119,7 @@
 #define WRITE_SAME_16	      0x93
 #define ZBC_OUT		      0x94
 #define ZBC_IN		      0x95
+#define WRITE_ATOMIC_16	0x9c
 #define SERVICE_ACTION_BIDIRECTIONAL 0x9d
 #define SERVICE_ACTION_IN_16  0x9e
 #define SERVICE_ACTION_OUT_16 0x9f
diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
index 8e2d9b1b0e77..05f1945ed204 100644
--- a/include/trace/events/scsi.h
+++ b/include/trace/events/scsi.h
@@ -102,6 +102,7 @@
 		scsi_opcode_name(WRITE_32),			\
 		scsi_opcode_name(WRITE_SAME_32),		\
 		scsi_opcode_name(ATA_16),			\
+		scsi_opcode_name(WRITE_ATOMIC_16),		\
 		scsi_opcode_name(ATA_12))
 
 #define scsi_hostbyte_name(result)	{ result, #result }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v4 09/11] scsi: scsi_debug: Atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (7 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 08/11] scsi: sd: Atomic " John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-20  7:12   ` Ojaswin Mujoo
  2024-02-19 13:01 ` [PATCH v4 10/11] nvme: " John Garry
  2024-02-19 13:01 ` [PATCH v4 11/11] nvme: Ensure atomic writes will be executed atomically John Garry
  10 siblings, 1 reply; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, John Garry

Add initial support for atomic writes.

As is the standard method, feed device properties via module params,
those being:
- atomic_max_size_blks
- atomic_alignment_blks
- atomic_granularity_blks
- atomic_max_size_with_boundary_blks
- atomic_max_boundary_blks

These just match sbc4r22 section 6.6.4 - Block limits VPD page.

We just support WRITE ATOMIC (16).
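
For example, atomic writes may be enabled with all of the limits made
explicit as follows (the values below are simply the defaults defined in
this patch):

  modprobe scsi_debug atomic_wr=1 atomic_wr_max_length=8192 \
          atomic_wr_align=2 atomic_wr_gran=2 \
          atomic_wr_max_length_bndry=8192 atomic_wr_max_bndry=128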

The major change in the driver is how we lock the device for RW accesses.

Currently the driver uses a per-device lock for accessing device metadata
and "media" data (calls to do_device_access()) atomically for the duration
of the whole read/write command.

This does not suit verifying atomic writes, since all reads/writes are
currently atomic, so using atomic writes would not prove anything.

Change the device access model such that regular writes are only atomic
on a per-sector basis, while reads and atomic writes are fully atomic.

As mentioned, since accessing metadata and device media is atomic,
continue to treat regular writes which involve metadata - like discard
or PI - as atomic. We can improve this later.

Currently we only support the model where overlapping ongoing reads or
writes wait for the current access to complete before commencing an
atomic write. This is described in section 4.29.3.2 of the SBC. However,
we simplify things and wait for all accesses to complete when issuing an
atomic write.
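
In outline, the mapping from access type to lock usage is as follows (a
simplified sketch of the scheme implemented below, not exact code):

	/* Atomic write: excludes all other data accesses */
	sdeb_data_lock(sip, true);	/* macc_data_lck as writer */
	/* ... copy all sectors ... */
	sdeb_data_unlock(sip, true);

	/* Regular write: shares the data lock, serialised per sector */
	sdeb_data_lock(sip, false);	/* macc_data_lck as reader */
	/* for each sector: */
	sdeb_data_sector_lock(sip, true);	/* macc_sector_lck as writer */
	/* ... copy one sector ... */
	sdeb_data_sector_unlock(sip, true);
	sdeb_data_unlock(sip, false);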

Signed-off-by: John Garry <[email protected]>
---
 drivers/scsi/scsi_debug.c | 589 +++++++++++++++++++++++++++++---------
 1 file changed, 456 insertions(+), 133 deletions(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 9070c0dc05ef..9568bcbf0821 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -68,6 +68,8 @@ static const char *sdebug_version_date = "20210520";
 
 /* Additional Sense Code (ASC) */
 #define NO_ADDITIONAL_SENSE 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASC 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23
 #define LOGICAL_UNIT_NOT_READY 0x4
 #define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8
 #define UNRECOVERED_READ_ERR 0x11
@@ -102,6 +104,7 @@ static const char *sdebug_version_date = "20210520";
 #define READ_BOUNDARY_ASCQ 0x7
 #define ATTEMPT_ACCESS_GAP 0x9
 #define INSUFF_ZONE_ASCQ 0xe
+/* see drivers/scsi/sense_codes.h */
 
 /* Additional Sense Code Qualifier (ASCQ) */
 #define ACK_NAK_TO 0x3
@@ -151,6 +154,12 @@ static const char *sdebug_version_date = "20210520";
 #define DEF_VIRTUAL_GB   0
 #define DEF_VPD_USE_HOSTNO 1
 #define DEF_WRITESAME_LENGTH 0xFFFF
+#define DEF_ATOMIC_WR 0
+#define DEF_ATOMIC_WR_MAX_LENGTH 8192
+#define DEF_ATOMIC_WR_ALIGN 2
+#define DEF_ATOMIC_WR_GRAN 2
+#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
+#define DEF_ATOMIC_WR_MAX_BNDRY 128
 #define DEF_STRICT 0
 #define DEF_STATISTICS false
 #define DEF_SUBMIT_QUEUES 1
@@ -373,7 +382,9 @@ struct sdebug_host_info {
 
 /* There is an xarray of pointers to this struct's objects, one per host */
 struct sdeb_store_info {
-	rwlock_t macc_lck;	/* for atomic media access on this store */
+	rwlock_t macc_data_lck;	/* for media data access on this store */
+	rwlock_t macc_meta_lck;	/* for atomic media meta access on this store */
+	rwlock_t macc_sector_lck;	/* per-sector media data access on this store */
 	u8 *storep;		/* user data storage (ram) */
 	struct t10_pi_tuple *dif_storep; /* protection info */
 	void *map_storep;	/* provisioning map */
@@ -397,12 +408,20 @@ struct sdebug_defer {
 	enum sdeb_defer_type defer_t;
 };
 
+struct sdebug_device_access_info {
+	bool atomic_write;
+	u64 lba;
+	u32 num;
+	struct scsi_cmnd *self;
+};
+
 struct sdebug_queued_cmd {
 	/* corresponding bit set in in_use_bm[] in owning struct sdebug_queue
 	 * instance indicates this slot is in use.
 	 */
 	struct sdebug_defer sd_dp;
 	struct scsi_cmnd *scmd;
+	struct sdebug_device_access_info *i;
 };
 
 struct sdebug_scsi_cmd {
@@ -462,7 +481,8 @@ enum sdeb_opcode_index {
 	SDEB_I_PRE_FETCH = 29,		/* 10, 16 */
 	SDEB_I_ZONE_OUT = 30,		/* 0x94+SA; includes no data xfer */
 	SDEB_I_ZONE_IN = 31,		/* 0x95+SA; all have data-in */
-	SDEB_I_LAST_ELEM_P1 = 32,	/* keep this last (previous + 1) */
+	SDEB_I_ATOMIC_WRITE_16 = 32,
+	SDEB_I_LAST_ELEM_P1 = 33,	/* keep this last (previous + 1) */
 };
 
 
@@ -496,7 +516,8 @@ static const unsigned char opcode_ind_arr[256] = {
 	0, 0, 0, SDEB_I_VERIFY,
 	SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, SDEB_I_WRITE_SAME,
 	SDEB_I_ZONE_OUT, SDEB_I_ZONE_IN, 0, 0,
-	0, 0, 0, 0, 0, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
+	0, 0, 0, 0,
+	SDEB_I_ATOMIC_WRITE_16, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
 /* 0xa0; 0xa0->0xbf: 12 byte cdbs */
 	SDEB_I_REPORT_LUNS, SDEB_I_ATA_PT, 0, SDEB_I_MAINT_IN,
 	     SDEB_I_MAINT_OUT, 0, 0, 0,
@@ -544,6 +565,7 @@ static int resp_write_buffer(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_sync_cache(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_pre_fetch(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_report_zones(struct scsi_cmnd *, struct sdebug_dev_info *);
+static int resp_atomic_write(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_open_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_finish_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
@@ -782,6 +804,11 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
 	    resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
 		{16,  0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
 		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
+/* 32 */
+	{0, 0x9c, 0x0, F_D_OUT | FF_MEDIA_IO,
+	    resp_atomic_write, NULL, /* ATOMIC WRITE 16 */
+		{16,  0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
 /* sentinel */
 	{0xff, 0, 0, 0, NULL, NULL,		/* terminating element */
 	    {0,  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
@@ -829,6 +856,13 @@ static unsigned int sdebug_unmap_granularity = DEF_UNMAP_GRANULARITY;
 static unsigned int sdebug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS;
 static unsigned int sdebug_unmap_max_desc = DEF_UNMAP_MAX_DESC;
 static unsigned int sdebug_write_same_length = DEF_WRITESAME_LENGTH;
+static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;
+static unsigned int sdebug_atomic_wr_max_length = DEF_ATOMIC_WR_MAX_LENGTH;
+static unsigned int sdebug_atomic_wr_align = DEF_ATOMIC_WR_ALIGN;
+static unsigned int sdebug_atomic_wr_gran = DEF_ATOMIC_WR_GRAN;
+static unsigned int sdebug_atomic_wr_max_length_bndry =
+			DEF_ATOMIC_WR_MAX_LENGTH_BNDRY;
+static unsigned int sdebug_atomic_wr_max_bndry = DEF_ATOMIC_WR_MAX_BNDRY;
 static int sdebug_uuid_ctl = DEF_UUID_CTL;
 static bool sdebug_random = DEF_RANDOM;
 static bool sdebug_per_host_store = DEF_PER_HOST_STORE;
@@ -1180,6 +1214,11 @@ static inline bool scsi_debug_lbp(void)
 		(sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);
 }
 
+static inline bool scsi_debug_atomic_write(void)
+{
+	return 0 == sdebug_fake_rw && sdebug_atomic_wr;
+}
+
 static void *lba2fake_store(struct sdeb_store_info *sip,
 			    unsigned long long lba)
 {
@@ -1807,6 +1846,14 @@ static int inquiry_vpd_b0(unsigned char *arr)
 	/* Maximum WRITE SAME Length */
 	put_unaligned_be64(sdebug_write_same_length, &arr[32]);
 
+	if (sdebug_atomic_wr) {
+		put_unaligned_be32(sdebug_atomic_wr_max_length, &arr[40]);
+		put_unaligned_be32(sdebug_atomic_wr_align, &arr[44]);
+		put_unaligned_be32(sdebug_atomic_wr_gran, &arr[48]);
+		put_unaligned_be32(sdebug_atomic_wr_max_length_bndry, &arr[52]);
+		put_unaligned_be32(sdebug_atomic_wr_max_bndry, &arr[56]);
+	}
+
 	return 0x3c; /* Mandatory page length for Logical Block Provisioning */
 }
 
@@ -3304,15 +3351,239 @@ static inline struct sdeb_store_info *devip2sip(struct sdebug_dev_info *devip,
 	return xa_load(per_store_ap, devip->sdbg_host->si_idx);
 }
 
+static inline void
+sdeb_read_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		read_lock(lock);
+}
+
+static inline void
+sdeb_read_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		read_unlock(lock);
+}
+
+static inline void
+sdeb_write_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		write_lock(lock);
+}
+
+static inline void
+sdeb_write_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		write_unlock(lock);
+}
+
+static inline void
+sdeb_data_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_sector_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_sector_lck);
+}
+
+/*
+ * Atomic locking:
+ * We simplify the atomic model to allow only 1x atomic write and many non-
+ * atomic reads or writes for all LBAs.
+ *
+ * A RW lock has a similar behaviour:
+ * Only 1x writer and many readers.
+ *
+ * So use a RW lock for per-device read and write locking:
+ * An atomic access grabs the lock as a writer and non-atomic grabs the lock
+ * as a reader.
+ */
+
+static inline void
+sdeb_data_lock(struct sdeb_store_info *sip, bool atomic_write)
+{
+	if (atomic_write)
+		sdeb_data_write_lock(sip);
+	else
+		sdeb_data_read_lock(sip);
+}
+
+static inline void
+sdeb_data_unlock(struct sdeb_store_info *sip, bool atomic_write)
+{
+	if (atomic_write)
+		sdeb_data_write_unlock(sip);
+	else
+		sdeb_data_read_unlock(sip);
+}
+
+/* Allow many reads but only 1x write per sector */
+static inline void
+sdeb_data_sector_lock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_lock(sip);
+	else
+		sdeb_data_sector_read_lock(sip);
+}
+
+static inline void
+sdeb_data_sector_unlock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_unlock(sip);
+	else
+		sdeb_data_sector_read_unlock(sip);
+}
+
+static inline void
+sdeb_meta_read_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_lock(&sip->macc_meta_lck);
+		else
+			read_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_read_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_unlock(&sip->macc_meta_lck);
+		else
+			read_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_lock(&sip->macc_meta_lck);
+		else
+			write_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_unlock(&sip->macc_meta_lck);
+		else
+			write_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
 /* Returns number of bytes copied or -1 if error. */
 static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
-			    u32 sg_skip, u64 lba, u32 num, bool do_write)
+			    u32 sg_skip, u64 lba, u32 num, bool do_write,
+			    bool atomic_write)
 {
 	int ret;
-	u64 block, rest = 0;
+	u64 block;
 	enum dma_data_direction dir;
 	struct scsi_data_buffer *sdb = &scp->sdb;
 	u8 *fsp;
+	int i;
+
+	/*
+	 * Even though reads are inherently atomic (in this driver), we expect
+	 * the atomic flag only for writes.
+	 */
+	if (!do_write && atomic_write)
+		return -1;
 
 	if (do_write) {
 		dir = DMA_TO_DEVICE;
@@ -3328,21 +3599,26 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
 	fsp = sip->storep;
 
 	block = do_div(lba, sdebug_store_sectors);
-	if (block + num > sdebug_store_sectors)
-		rest = block + num - sdebug_store_sectors;
 
-	ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
+	/* Only allow 1x atomic write or multiple non-atomic writes at any given time */
+	sdeb_data_lock(sip, atomic_write);
+	for (i = 0; i < num; i++) {
+		/* We shouldn't need to lock for atomic writes, but do it anyway */
+		sdeb_data_sector_lock(sip, do_write);
+		ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
 		   fsp + (block * sdebug_sector_size),
-		   (num - rest) * sdebug_sector_size, sg_skip, do_write);
-	if (ret != (num - rest) * sdebug_sector_size)
-		return ret;
-
-	if (rest) {
-		ret += sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
-			    fsp, rest * sdebug_sector_size,
-			    sg_skip + ((num - rest) * sdebug_sector_size),
-			    do_write);
+		   sdebug_sector_size, sg_skip, do_write);
+		sdeb_data_sector_unlock(sip, do_write);
+		if (ret != sdebug_sector_size) {
+			ret += (i * sdebug_sector_size);
+			break;
+		}
+		sg_skip += sdebug_sector_size;
+		if (++block >= sdebug_store_sectors)
+			block = 0;
 	}
+	if (i == num)
+		ret = num * sdebug_sector_size;
+	sdeb_data_unlock(sip, atomic_write);
 
 	return ret;
 }
@@ -3518,70 +3794,6 @@ static int prot_verify_read(struct scsi_cmnd *scp, sector_t start_sec,
 	return ret;
 }
 
-static inline void
-sdeb_read_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_lock(&sip->macc_lck);
-		else
-			read_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_read_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_unlock(&sip->macc_lck);
-		else
-			read_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_lock(&sip->macc_lck);
-		else
-			write_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_unlock(&sip->macc_lck);
-		else
-			write_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
 static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 {
 	bool check_prot;
@@ -3591,6 +3803,7 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case READ_16:
@@ -3649,6 +3862,10 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		atomic_set(&sdeb_inject_pending, 0);
 	}
 
+	/*
+	 * When checking device access params, for reads we only check data
+	 * versus what is set at init time, so no need to lock.
+	 */
 	ret = check_device_access_params(scp, lba, num, false);
 	if (ret)
 		return ret;
@@ -3668,29 +3885,33 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)))  {
+		sdeb_meta_read_lock(sip);
+		meta_data_locked = true;
+	}
 
 	/* DIX + T10 DIF */
 	if (unlikely(sdebug_dix && scsi_prot_sg_count(scp))) {
 		switch (prot_verify_read(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			}
@@ -3698,8 +3919,9 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, false);
-	sdeb_read_unlock(sip);
+	ret = do_device_access(sip, scp, 0, lba, num, false, false);
+	if (meta_data_locked)
+		sdeb_meta_read_unlock(sip);
 	if (unlikely(ret == -1))
 		return DID_ERROR << 16;
 
@@ -3888,6 +4110,7 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case WRITE_16:
@@ -3941,10 +4164,17 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 				    "to DIF device\n");
 	}
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)) ||
+	    scsi_debug_lbp())  {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
+
 	ret = check_device_access_params(scp, lba, num, true);
 	if (ret) {
-		sdeb_write_unlock(sip);
+		if (meta_data_locked)
+			sdeb_meta_write_unlock(sip);
 		return ret;
 	}
 
@@ -3953,22 +4183,22 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		switch (prot_verify_write(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			}
@@ -3976,13 +4206,16 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, true);
+	ret = do_device_access(sip, scp, 0, lba, num, true, false);
 	if (unlikely(scsi_debug_lbp()))
 		map_region(sip, lba, num);
+
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
-	sdeb_write_unlock(sip);
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+
 	if (unlikely(-1 == ret))
 		return DID_ERROR << 16;
 	else if (unlikely(sdebug_verbose &&
@@ -4089,7 +4322,8 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 		goto err_out;
 	}
 
-	sdeb_write_lock(sip);
+	/* Just keep it simple and always lock for now */
+	sdeb_meta_write_lock(sip);
 	sg_off = lbdof_blen;
 	/* Spec says Buffer xfer Length field in number of LBs in dout */
 	cum_lb = 0;
@@ -4132,7 +4366,11 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 			}
 		}
 
-		ret = do_device_access(sip, scp, sg_off, lba, num, true);
+		/*
+		 * Write ranges atomically to keep as close to the pre-atomic
+		 * write behaviour as possible.
+		 */
+		ret = do_device_access(sip, scp, sg_off, lba, num, true, true);
 		/* If ZBC zone then bump its write pointer */
 		if (sdebug_dev_is_zoned(devip))
 			zbc_inc_wp(devip, lba, num);
@@ -4171,7 +4409,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 	}
 	ret = 0;
 err_out_unlock:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 err_out:
 	kfree(lrdp);
 	return ret;
@@ -4190,14 +4428,16 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 						scp->device->hostdata, true);
 	u8 *fs1p;
 	u8 *fsp;
+	bool meta_data_locked = false;
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) || scsi_debug_lbp()) {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
 
 	ret = check_device_access_params(scp, lba, num, true);
-	if (ret) {
-		sdeb_write_unlock(sip);
-		return ret;
-	}
+	if (ret)
+		goto out;
 
 	if (unmap && scsi_debug_lbp()) {
 		unmap_region(sip, lba, num);
@@ -4208,6 +4448,7 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* if ndob then zero 1 logical block, else fetch 1 logical block */
 	fsp = sip->storep;
 	fs1p = fsp + (block * lb_size);
+	sdeb_data_write_lock(sip);
 	if (ndob) {
 		memset(fs1p, 0, lb_size);
 		ret = 0;
@@ -4215,8 +4456,8 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 		ret = fetch_to_dev_buffer(scp, fs1p, lb_size);
 
 	if (-1 == ret) {
-		sdeb_write_unlock(sip);
-		return DID_ERROR << 16;
+		ret = DID_ERROR << 16;
+		goto out;
 	} else if (sdebug_verbose && !ndob && (ret < lb_size))
 		sdev_printk(KERN_INFO, scp->device,
 			    "%s: %s: lb size=%u, IO sent=%d bytes\n",
@@ -4233,10 +4474,12 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
+	sdeb_data_write_unlock(sip);
+	ret = 0;
 out:
-	sdeb_write_unlock(sip);
-
-	return 0;
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+	return ret;
 }
 
 static int resp_write_same_10(struct scsi_cmnd *scp,
@@ -4379,25 +4622,30 @@ static int resp_comp_write(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
-
 	ret = do_dout_fetch(scp, dnum, arr);
 	if (ret == -1) {
 		retval = DID_ERROR << 16;
-		goto cleanup;
+		goto cleanup_free;
 	} else if (sdebug_verbose && (ret < (dnum * lb_size)))
 		sdev_printk(KERN_INFO, scp->device, "%s: compare_write: cdb "
 			    "indicated=%u, IO sent=%d bytes\n", my_name,
 			    dnum * lb_size, ret);
+
+	sdeb_data_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 	if (!comp_write_worker(sip, lba, num, arr, false)) {
 		mk_sense_buffer(scp, MISCOMPARE, MISCOMPARE_VERIFY_ASC, 0);
 		retval = check_condition_result;
-		goto cleanup;
+		goto cleanup_unlock;
 	}
+
+	/* Cover sip->map_storep (which map_region() sets) with data lock */
 	if (scsi_debug_lbp())
 		map_region(sip, lba, num);
-cleanup:
-	sdeb_write_unlock(sip);
+cleanup_unlock:
+	sdeb_meta_write_unlock(sip);
+	sdeb_data_write_unlock(sip);
+cleanup_free:
 	kfree(arr);
 	return retval;
 }
@@ -4441,7 +4689,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	desc = (void *)&buf[8];
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	for (i = 0 ; i < descriptors ; i++) {
 		unsigned long long lba = get_unaligned_be64(&desc[i].lba);
@@ -4457,7 +4705,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	ret = 0;
 
 out:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	kfree(buf);
 
 	return ret;
@@ -4570,12 +4818,13 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
 		rest = block + nblks - sdebug_store_sectors;
 
 	/* Try to bring the PRE-FETCH range into CPU's cache */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 	prefetch_range(fsp + (sdebug_sector_size * block),
 		       (nblks - rest) * sdebug_sector_size);
 	if (rest)
 		prefetch_range(fsp, rest * sdebug_sector_size);
-	sdeb_read_unlock(sip);
+
+	sdeb_data_read_unlock(sip);
 fini:
 	if (cmd[1] & 0x2)
 		res = SDEG_RES_IMMED_MASK;
@@ -4734,7 +4983,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 	/* Not changing store, so only need read access */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 
 	ret = do_dout_fetch(scp, a_num, arr);
 	if (ret == -1) {
@@ -4756,7 +5005,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		goto cleanup;
 	}
 cleanup:
-	sdeb_read_unlock(sip);
+	sdeb_data_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
@@ -4802,7 +5051,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	sdeb_meta_read_lock(sip);
 
 	desc = arr + 64;
 	for (lba = zs_lba; lba < sdebug_capacity;
@@ -4900,11 +5149,70 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
 
 fini:
-	sdeb_read_unlock(sip);
+	sdeb_meta_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
 
+static int resp_atomic_write(struct scsi_cmnd *scp,
+			     struct sdebug_dev_info *devip)
+{
+	struct sdeb_store_info *sip;
+	u8 *cmd = scp->cmnd;
+	u16 boundary, len;
+	u64 lba, lba_tmp;
+	int ret;
+
+	if (!scsi_debug_atomic_write()) {
+		mk_sense_invalid_opcode(scp);
+		return check_condition_result;
+	}
+
+	sip = devip2sip(devip, true);
+
+	lba = get_unaligned_be64(cmd + 2);
+	boundary = get_unaligned_be16(cmd + 10);
+	len = get_unaligned_be16(cmd + 12);
+
+	lba_tmp = lba;
+	if (sdebug_atomic_wr_align &&
+	    do_div(lba_tmp, sdebug_atomic_wr_align)) {
+		/* Does not meet alignment requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (sdebug_atomic_wr_gran && len % sdebug_atomic_wr_gran) {
+		/* Does not meet granularity requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (boundary > 0) {
+		if (boundary > sdebug_atomic_wr_max_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+
+		if (len > sdebug_atomic_wr_max_length_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	} else {
+		if (len > sdebug_atomic_wr_max_length) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	}
+
+	ret = do_device_access(sip, scp, 0, lba, len, true, true);
+	if (unlikely(ret == -1))
+		return DID_ERROR << 16;
+	if (unlikely(ret != len * sdebug_sector_size))
+		return DID_ERROR << 16;
+	return 0;
+}
+
 /* Logic transplanted from tcmu-runner, file_zbc.c */
 static void zbc_open_all(struct sdebug_dev_info *devip)
 {
@@ -4931,8 +5239,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		mk_sense_invalid_opcode(scp);
 		return check_condition_result;
 	}
-
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		/* Check if all closed zones can be open */
@@ -4981,7 +5288,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_open_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5008,7 +5315,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_close_all(devip);
@@ -5037,7 +5344,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 
 	zbc_close_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5080,7 +5387,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_finish_all(devip);
@@ -5109,7 +5416,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 
 	zbc_finish_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5160,7 +5467,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_rwp_all(devip);
@@ -5188,7 +5495,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_rwp_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -6152,6 +6460,7 @@ module_param_named(lbprz, sdebug_lbprz, int, S_IRUGO);
 module_param_named(lbpu, sdebug_lbpu, int, S_IRUGO);
 module_param_named(lbpws, sdebug_lbpws, int, S_IRUGO);
 module_param_named(lbpws10, sdebug_lbpws10, int, S_IRUGO);
+module_param_named(atomic_wr, sdebug_atomic_wr, int, S_IRUGO);
 module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
@@ -6186,6 +6495,11 @@ module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO);
 module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO);
 module_param_named(unmap_max_blocks, sdebug_unmap_max_blocks, int, S_IRUGO);
 module_param_named(unmap_max_desc, sdebug_unmap_max_desc, int, S_IRUGO);
+module_param_named(atomic_wr_max_length, sdebug_atomic_wr_max_length, int, S_IRUGO);
+module_param_named(atomic_wr_align, sdebug_atomic_wr_align, int, S_IRUGO);
+module_param_named(atomic_wr_gran, sdebug_atomic_wr_gran, int, S_IRUGO);
+module_param_named(atomic_wr_max_length_bndry, sdebug_atomic_wr_max_length_bndry, int, S_IRUGO);
+module_param_named(atomic_wr_max_bndry, sdebug_atomic_wr_max_bndry, int, S_IRUGO);
 module_param_named(uuid_ctl, sdebug_uuid_ctl, int, S_IRUGO);
 module_param_named(virtual_gb, sdebug_virtual_gb, int, S_IRUGO | S_IWUSR);
 module_param_named(vpd_use_hostno, sdebug_vpd_use_hostno, int,
@@ -6229,6 +6543,7 @@ MODULE_PARM_DESC(lbprz,
 MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)");
 MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)");
 MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)");
+MODULE_PARM_DESC(atomic_wr, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)");
 MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
@@ -6260,6 +6575,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
 MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
 MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
 MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
+MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
+MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
+MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
 MODULE_PARM_DESC(uuid_ctl,
 		 "1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
 MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
@@ -7613,7 +7934,9 @@ static int sdebug_add_store(void)
 			map_region(sip, 0, 2);
 	}
 
-	rwlock_init(&sip->macc_lck);
+	rwlock_init(&sip->macc_data_lck);
+	rwlock_init(&sip->macc_meta_lck);
+	rwlock_init(&sip->macc_sector_lck);
 	return (int)n_idx;
 err:
 	sdebug_erase_store((int)n_idx, sip);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v4 10/11] nvme: Atomic write support
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (8 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 09/11] scsi: scsi_debug: " John Garry
@ 2024-02-19 13:01 ` John Garry
  2024-02-19 19:21   ` Keith Busch
  2024-02-20  8:31   ` Christoph Hellwig
  2024-02-19 13:01 ` [PATCH v4 11/11] nvme: Ensure atomic writes will be executed atomically John Garry
  10 siblings, 2 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, Alan Adamson, John Garry

From: Alan Adamson <[email protected]>

Add support to set block layer request_queue atomic write limits. The
limits will be derived from either the namespace or controller atomic
parameters.

NVMe atomic-related parameters are grouped into "normal" and "power-fail"
(or PF) classes of parameter. For atomic write support, only PF parameters
are of interest. The "normal" parameters are concerned with racing reads
and writes (which also applies to PF). See NVM Command Set Specification
Revision 1.0d section 2.1.4 for reference.

Whether to use per namespace or controller atomic parameters is decided by
NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data Structure,
#NVM Command Set.

NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
are provided for a write which straddles this per-lba space boundary. The
block layer merging policy is such that no merges may occur in which the
resultant request would straddle such a boundary.
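
As a worked example (figures are illustrative): with NABSPF giving a
16KiB boundary, an 8KiB write at offset 28KiB would straddle the boundary
at 32KiB and so would carry no atomic guarantee; the merge policy ensures
that the block layer never produces such a request.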

Unlike SCSI, NVMe specifies no granularity or alignment rules. In addition,
again unlike SCSI, there is no dedicated atomic write command - a write
which adheres to the atomic size limit and boundary is implicitly atomic.

If NSFEAT bit 1 is set, the following parameters are of interest:
- NAWUPF (Namespace Atomic Write Unit Power Fail)
- NABSPF (Namespace Atomic Boundary Size Power Fail)
- NABO (Namespace Atomic Boundary Offset)

and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
- atomic_write_max_bytes = NAWUPF
- atomic_write_boundary = NABSPF

In the unlikely scenario that NABO is non-zero, atomic writes will not be
supported at all, as dealing with this adds extra complexity. This policy
may change in future.

In all cases, atomic_write_unit_min is set to the logical block size.

If NSFEAT bit 1 is unset, the following parameter is of interest:
- AWUPF (Atomic Write Unit Power Fail)

and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
- atomic_write_max_bytes = AWUPF
- atomic_write_boundary = 0

The block layer requires that the atomic_write_boundary value is a
power-of-2. However, it is really only required that atomic_write_boundary
be a multiple of atomic_write_unit_max. As such, if NABSPF were not a
power-of-2, atomic_write_unit_max could be reduced such that it was
divisible into NABSPF. However, this complexity is not yet supported.
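
For instance (an illustrative figure only), if NABSPF described a 24KiB
boundary - which is not a power-of-2 - atomic_write_unit_max could in
principle be capped at 8KiB, which divides 24KiB evenly. With the code as
it stands, such a namespace simply gets no atomic write support.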

A helper function, nvme_valid_atomic_write(), is also added for the
submission path to verify that a request submitted to the driver will
actually be executed atomically.

Note on NABSPF:
There seems to be some vagueness in the spec as to whether NABSPF applies
when NSFEAT bit 1 is unset. Figure 97 does not explicitly mention NABSPF
and how it is affected by bit 1. However, Figure 4 does direct the reader
to Figure 97 for info about per-namespace parameters, which NABSPF is, so
it is implied. However, currently nvme_update_disk_info() does check
namespace parameter NABO regardless of this bit.

Signed-off-by: Alan Adamson <[email protected]>
#jpg: total rewrite
Signed-off-by: John Garry <[email protected]>
---
 drivers/nvme/host/core.c | 67 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0a96362912ce..c5bc663c8582 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -934,6 +934,31 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
+__maybe_unused
+static bool nvme_valid_atomic_write(struct request *req)
+{
+	struct request_queue *q = req->q;
+	u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
+
+	if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
+		return false;
+
+	if (boundary_bytes) {
+		u64 mask = boundary_bytes - 1, imask = ~mask;
+		u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
+		u64 end = start + blk_rq_bytes(req) - 1;
+
+		/* If greater than the boundary size, it must cross a boundary */
+		if (blk_rq_bytes(req) > boundary_bytes)
+			return false;
+
+		if ((start & imask) != (end & imask))
+			return false;
+	}
+
+	return true;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 		struct request *req, struct nvme_command *cmnd,
 		enum nvme_opcode op)
@@ -1960,6 +1985,45 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 	blk_queue_write_cache(q, vwc, vwc);
 }
 
+static void nvme_update_atomic_write_disk_info(struct nvme_ctrl *ctrl,
+		 struct gendisk *disk, struct nvme_id_ns *id, u32 bs,
+		 u32 atomic_bs)
+{
+	unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
+	struct request_queue *q = disk->queue;
+
+	if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+		if (le16_to_cpu(id->nabspf))
+			boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+
+		/*
+		 * The boundary size just needs to be a multiple of unit_max
+		 * (and not necessarily a power-of-2), so this could be relaxed
+		 * in the block layer in future.
+		 * Furthermore, if needed, unit_max could be reduced so that the
+		 * boundary size was compliant.
+		 */
+		if (!boundary || is_power_of_2(boundary)) {
+			max_bytes = atomic_bs;
+			unit_min = bs;
+			unit_max = rounddown_pow_of_two(atomic_bs);
+		} else {
+			dev_notice(ctrl->device, "Unsupported atomic write boundary (%d)\n",
+				boundary);
+			boundary = 0;
+		}
+	} else if (ctrl->subsys->awupf) {
+		max_bytes = atomic_bs;
+		unit_min = bs;
+		unit_max = rounddown_pow_of_two(atomic_bs);
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_bytes);
+	blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
+	blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
+	blk_queue_atomic_write_boundary_bytes(q, boundary);
+}
+
 static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 		struct nvme_ns_head *head, struct nvme_id_ns *id)
 {
@@ -1990,6 +2054,9 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
 			atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
 		else
 			atomic_bs = (1 + ctrl->subsys->awupf) * bs;
+
+		nvme_update_atomic_write_disk_info(ctrl, disk, id, bs,
+						atomic_bs);
 	}
 
 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v4 11/11] nvme: Ensure atomic writes will be executed atomically
  2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
                   ` (9 preceding siblings ...)
  2024-02-19 13:01 ` [PATCH v4 10/11] nvme: " John Garry
@ 2024-02-19 13:01 ` John Garry
  10 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-19 13:01 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, Alan Adamson, John Garry

From: Alan Adamson <[email protected]>

There is no dedicated NVMe atomic write command, i.e. no command which
would error when it exceeds the controller atomic write limits.

As an insurance policy against the block layer sending requests which
cannot be executed atomically, add a check in the queue path.

Signed-off-by: Alan Adamson <[email protected]>
#jpg: some rewrite
Signed-off-by: John Garry <[email protected]>
---
 drivers/nvme/host/core.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c5bc663c8582..60dc20864dc7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -934,7 +934,6 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
-__maybe_unused
 static bool nvme_valid_atomic_write(struct request *req)
 {
 	struct request_queue *q = req->q;
@@ -974,6 +973,13 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 	if (req->cmd_flags & REQ_RAHEAD)
 		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
 
+	/*
+	 * Ensure that nothing has been sent which cannot be executed
+	 * atomically.
+	 */
+	if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
+		return BLK_STS_IOERR;
+
 	cmnd->rw.opcode = op;
 	cmnd->rw.flags = 0;
 	cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-02-19 13:01 ` [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
@ 2024-02-19 18:57   ` Keith Busch
  2024-02-20  8:31     ` John Garry
  2024-02-20  6:54   ` Christoph Hellwig
  1 sibling, 1 reply; 51+ messages in thread
From: Keith Busch @ 2024-02-19 18:57 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Mon, Feb 19, 2024 at 01:01:00PM +0000, John Garry wrote:
> @@ -53,9 +53,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>  	struct bio bio;
>  	ssize_t ret;
>  
> -	if (blkdev_dio_unaligned(bdev, pos, iter))
> -		return -EINVAL;
> -
>  	if (nr_pages <= DIO_INLINE_BIO_VECS)
>  		vecs = inline_vecs;
>  	else {
> @@ -171,9 +168,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
>  	loff_t pos = iocb->ki_pos;
>  	int ret = 0;
>  
> -	if (blkdev_dio_unaligned(bdev, pos, iter))
> -		return -EINVAL;
> -
>  	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
>  		opf |= REQ_ALLOC_CACHE;
>  	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
> @@ -310,9 +304,6 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  	loff_t pos = iocb->ki_pos;
>  	int ret = 0;
>  
> -	if (blkdev_dio_unaligned(bdev, pos, iter))
> -		return -EINVAL;
> -
>  	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
>  		opf |= REQ_ALLOC_CACHE;
>  	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
> @@ -365,11 +356,16 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  
>  static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  {
> +	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
> +	loff_t pos = iocb->ki_pos;
>  	unsigned int nr_pages;

All three of the changed functions also want 'bdev' and 'pos', so maybe
pass on the savings to them? Unless you think the extended argument list
would harm readability, or perhaps the compiler optimizes the 2nd access
out anyway. Either way, this looks good to me.
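
For instance (a sketch of the idea only, untested):

	static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
			struct iov_iter *iter, struct block_device *bdev,
			loff_t pos, unsigned int nr_pages)

and similarly for the other two variants, with blkdev_direct_IO() passing
down the values it has already computed.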

Reviewed-by: Keith Busch <[email protected]>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-02-19 13:00 ` [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-02-19 18:57   ` Keith Busch
  0 siblings, 0 replies; 51+ messages in thread
From: Keith Busch @ 2024-02-19 18:57 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Mon, Feb 19, 2024 at 01:00:59PM +0000, John Garry wrote:
> Currently blk_queue_get_max_sectors() is passed an enum req_op. In future
> the value returned from blk_queue_get_max_sectors() may depend on certain
> request flags, so pass a request pointer.
> 
> Also use rq->cmd_flags instead of rq->bio->bi_opf when possible.

Looks good.

Reviewed-by: Keith Busch <[email protected]>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
@ 2024-02-19 19:16   ` David Sterba
  2024-02-20  8:13     ` John Garry
  2024-02-19 22:44   ` Dave Chinner
  2024-02-24 18:16   ` Ritesh Harjani
  2 siblings, 1 reply; 51+ messages in thread
From: David Sterba @ 2024-02-19 19:16 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On Mon, Feb 19, 2024 at 01:01:01PM +0000, John Garry wrote:
> From: Prasad Singamsetty <[email protected]>
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -301,9 +301,12 @@ typedef int __bitwise __kernel_rwf_t;
>  /* per-IO O_APPEND */
>  #define RWF_APPEND	((__force __kernel_rwf_t)0x00000010)
>  
> +/* Atomic Write */
> +#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)

Should this be 0x20 so it's the next bit after RWF_APPEND?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 10/11] nvme: Atomic write support
  2024-02-19 13:01 ` [PATCH v4 10/11] nvme: " John Garry
@ 2024-02-19 19:21   ` Keith Busch
  2024-02-20  6:55     ` Christoph Hellwig
  2024-02-20  8:31   ` Christoph Hellwig
  1 sibling, 1 reply; 51+ messages in thread
From: Keith Busch @ 2024-02-19 19:21 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Alan Adamson

On Mon, Feb 19, 2024 at 01:01:08PM +0000, John Garry wrote:
> From: Alan Adamson <[email protected]>
> 
> Add support to set block layer request_queue atomic write limits. The
> limits will be derived from either the namespace or controller atomic
> parameters.
> 
> NVMe atomic-related parameters are grouped into "normal" and "power-fail"
> (or PF) class of parameter. For atomic write support, only PF parameters
> are of interest. The "normal" parameters are concerned with racing reads
> and writes (which also applies to PF). See NVM Command Set Specification
> Revision 1.0d section 2.1.4 for reference.
> 
> Whether to use per namespace or controller atomic parameters is decided by
> NSFEAT bit 1 - see Figure 97: Identify - Identify Namespace Data Structure,
> #NVM Command Set.
> 
> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> are provided for a write which straddles this per-lba space boundary. The
> block layer merging policy is such that no merges may occur in which the
> resultant request would straddle such a boundary.
> 
> Unlike SCSI, NVMe specifies no granularity or alignment rules. In addition,
> again unlike SCSI, there is no dedicated atomic write command - a write
> which adheres to the atomic size limit and boundary is implicitly atomic.
> 
> If NSFEAT bit 1 is set, the following parameters are of interest:
> - NAWUPF (Namespace Atomic Write Unit Power Fail)
> - NABSPF (Namespace Atomic Boundary Size Power Fail)
> - NABO (Namespace Atomic Boundary Offset)
> 
> and we set request_queue limits as follows:
> - atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
> - atomic_write_max_bytes = NAWUPF
> - atomic_write_boundary = NABSPF
> 
> In the unlikely scenario that NABO is non-zero, atomic writes will
> not be supported at all as dealing with this adds extra complexity. This
> policy may change in future.
> 
> In all cases, atomic_write_unit_min is set to the logical block size.
> 
> If NSFEAT bit 1 is unset, the following parameter is of interest:
> - AWUPF (Atomic Write Unit Power Fail)
> 
> and we set request_queue limits as follows:
> - atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
> - atomic_write_max_bytes = AWUPF
> - atomic_write_boundary = 0
> 
> The block layer requires that the atomic_write_boundary value is a
> power-of-2. However, it is really only required that atomic_write_boundary
> be a multiple of atomic_write_unit_max. As such, if NABSPF were not a
> power-of-2, atomic_write_unit_max could be reduced such that it was
> divisible into NABSPF. However, this complexity will not be yet supported.
> 
> A helper function, nvme_valid_atomic_write(), is also added for the
> submission path to verify that a request submitted to the driver will
> actually be executed atomically.

Maybe patch 11 should be folded into this one. No biggie, the series as
a whole looks good.

Reviewed-by: Keith Busch <[email protected]>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
@ 2024-02-19 22:28   ` Dave Chinner
  2024-02-20  9:40     ` John Garry
  2024-02-20  8:20   ` Christoph Hellwig
  2024-02-24 18:46   ` Ritesh Harjani
  2 siblings, 1 reply; 51+ messages in thread
From: Dave Chinner @ 2024-02-19 22:28 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On Mon, Feb 19, 2024 at 01:01:02PM +0000, John Garry wrote:
> From: Prasad Singamsetty <[email protected]>
> 
> Extend the statx system call to return additional info about atomic write
> support for a file.
> 
> Helper function generic_fill_statx_atomic_writes() can be used by FSes to
> fill in the relevant statx fields.
> 
> Signed-off-by: Prasad Singamsetty <[email protected]>
> #jpg: relocate bdev support to another patch
> Signed-off-by: John Garry <[email protected]>
> ---
>  fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
>  include/linux/fs.h        |  3 +++
>  include/linux/stat.h      |  3 +++
>  include/uapi/linux/stat.h |  9 ++++++++-
>  4 files changed, 48 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/stat.c b/fs/stat.c
> index 77cdc69eb422..522787a4ab6a 100644
> --- a/fs/stat.c
> +++ b/fs/stat.c
> @@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
>  }
>  EXPORT_SYMBOL(generic_fill_statx_attr);
>  
> +/**
> + * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
> + * @stat:	Where to fill in the attribute flags
> + * @unit_min:	Minimum supported atomic write length
> + * @unit_max:	Maximum supported atomic write length
> + *
> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
> + * atomic write unit_min and unit_max values.
> + */
> +void generic_fill_statx_atomic_writes(struct kstat *stat,
> +				      unsigned int unit_min,
> +				      unsigned int unit_max)
> +{
> +	/* Confirm that the request type is known */
> +	stat->result_mask |= STATX_WRITE_ATOMIC;
> +
> +	/* Confirm that the file attribute type is known */
> +	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
> +
> +	if (unit_min) {
> +		stat->atomic_write_unit_min = unit_min;
> +		stat->atomic_write_unit_max = unit_max;
> +		/* Initially only allow 1x segment */
> +		stat->atomic_write_segments_max = 1;
> +
> +		/* Confirm atomic writes are actually supported */
> +		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
> +	}
> +}
> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);

What units are these in? Nothing in the patch or commit description
tells us....

-Dave.
-- 
Dave Chinner
[email protected]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
  2024-02-19 19:16   ` David Sterba
@ 2024-02-19 22:44   ` Dave Chinner
  2024-02-20  9:52     ` John Garry
  2024-02-24 18:16   ` Ritesh Harjani
  2 siblings, 1 reply; 51+ messages in thread
From: Dave Chinner @ 2024-02-19 22:44 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On Mon, Feb 19, 2024 at 01:01:01PM +0000, John Garry wrote:
> @@ -3523,4 +3535,26 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
>  extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
>  			   int advice);
>  
> +static inline bool atomic_write_valid(loff_t pos, struct iov_iter *iter,
> +			   unsigned int unit_min, unsigned int unit_max)
> +{
> +	size_t len = iov_iter_count(iter);
> +
> +	if (!iter_is_ubuf(iter))
> +		return false;
> +
> +	if (len == unit_min || len == unit_max) {
> +		/* ok if exactly min or max */
> +	} else if (len < unit_min || len > unit_max) {
> +		return false;
> +	} else if (!is_power_of_2(len)) {
> +		return false;
> +	}

This doesn't need the if/else if/else if chain, and it doesn't need to check
for exact unit min/max matches. The exact matches require the
length to be a power of 2, so the checks are simply:

	if (len < unit_min || len > unit_max)
		return false;
	if (!is_power_of_2(len))
		return false;

> +	if (pos & (len - 1))
> +		return false;

This has typing issues - 64 bit value, 32 bit mask. probably should
use:

	if (!IS_ALIGNED(pos, len))
		return false;

-Dave.

-- 
Dave Chinner
[email protected]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-19 13:01 ` [PATCH v4 05/11] block: Add core atomic write support John Garry
@ 2024-02-19 22:58   ` Dave Chinner
  2024-02-20  8:22     ` Christoph Hellwig
  2024-02-20 10:01     ` John Garry
  2024-02-25 12:09   ` Ritesh Harjani
  1 sibling, 2 replies; 51+ messages in thread
From: Dave Chinner @ 2024-02-19 22:58 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Mon, Feb 19, 2024 at 01:01:03PM +0000, John Garry wrote:
> Add atomic write support as follows:
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 74e9e775f13d..12a75a252ca2 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -18,6 +18,42 @@
>  #include "blk-rq-qos.h"
>  #include "blk-throttle.h"
>  
> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
> +					unsigned int front,
> +					unsigned int back)
> +{
> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
> +	unsigned int mask, imask;
> +	loff_t start, end;
> +
> +	if (!boundary)
> +		return false;
> +
> +	start = rq->__sector << SECTOR_SHIFT;
> +	end = start + rq->__data_len;
> +
> +	start -= front;
> +	end += back;
> +
> +	/* We're longer than the boundary, so must be crossing it */
> +	if (end - start > boundary)
> +		return true;
> +
> +	mask = boundary - 1;
> +
> +	/* start/end are boundary-aligned, so cannot be crossing */
> +	if (!(start & mask) || !(end & mask))
> +		return false;
> +
> +	imask = ~mask;
> +
> +	/* Top bits are different, so crossed a boundary */
> +	if ((start & imask) != (end & imask))
> +		return true;
> +
> +	return false;
> +}

I have no way of verifying this function is doing what it is
supposed to because its function is undocumented. I have no idea
what the front/back variables are supposed to represent, and so no
clue if they are being applied properly.

That said, it's also applying unsigned 32 bit mask variables to
signed 64 bit quantities and trying to do things like "high bit changed"
checks on the 64 bit variable. This just smells like a future
source of "large offsets don't work like we expected!" bugs.

> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 06ea91e51b8b..176f26374abc 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -59,6 +59,13 @@ void blk_set_default_limits(struct queue_limits *lim)
>  	lim->zoned = false;
>  	lim->zone_write_granularity = 0;
>  	lim->dma_alignment = 511;
> +	lim->atomic_write_hw_max_sectors = 0;
> +	lim->atomic_write_max_sectors = 0;
> +	lim->atomic_write_hw_boundary_sectors = 0;
> +	lim->atomic_write_hw_unit_min_sectors = 0;
> +	lim->atomic_write_unit_min_sectors = 0;
> +	lim->atomic_write_hw_unit_max_sectors = 0;
> +	lim->atomic_write_unit_max_sectors = 0;
>  }

Seems to me this function would do better to just

	memset(lim, 0, sizeof(*lim));

and then set all the non-zero fields.

>  
>  /**
> @@ -101,6 +108,44 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);
>  
> +
> +/*
> + * Returns max guaranteed sectors which we can fit in a bio. For convenience of
> + * users, rounddown_pow_of_two() the return value.
> + *
> + * We always assume that we can fit in at least PAGE_SIZE in a segment, apart
> + * from first and last segments.
> + */
> +static unsigned int blk_queue_max_guaranteed_bio_sectors(
> +					struct queue_limits *limits,
> +					struct request_queue *q)
> +{
> +	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
> +	unsigned int length;
> +
> +	length = min(max_segments, 2) * queue_logical_block_size(q);
> +	if (max_segments > 2)
> +		length += (max_segments - 2) * PAGE_SIZE;
> +
> +	return rounddown_pow_of_two(length >> SECTOR_SHIFT);
> +}
> +
> +static void blk_atomic_writes_update_limits(struct request_queue *q)
> +{
> +	struct queue_limits *limits = &q->limits;
> +	unsigned int max_hw_sectors =
> +		rounddown_pow_of_two(limits->max_hw_sectors);
> +	unsigned int unit_limit = min(max_hw_sectors,
> +		blk_queue_max_guaranteed_bio_sectors(limits, q));
> +
> +	limits->atomic_write_max_sectors =
> +		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
> +	limits->atomic_write_unit_min_sectors =
> +		min(limits->atomic_write_hw_unit_min_sectors, unit_limit);
> +	limits->atomic_write_unit_max_sectors =
> +		min(limits->atomic_write_hw_unit_max_sectors, unit_limit);
> +}
> +
>  /**
>   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
>   * @q:  the request queue for the device
> @@ -145,6 +190,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>  				 limits->logical_block_size >> SECTOR_SHIFT);
>  	limits->max_sectors = max_sectors;
>  
> +	blk_atomic_writes_update_limits(q);
> +
>  	if (!q->disk)
>  		return;
>  	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
> @@ -182,6 +229,62 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
>  }
>  EXPORT_SYMBOL(blk_queue_max_discard_sectors);
>  
> +/**
> + * blk_queue_atomic_write_max_bytes - set max bytes supported by
> + * the device for atomic write operations.
> + * @q:  the request queue for the device
> + * @bytes: maximum bytes supported
> + */
> +void blk_queue_atomic_write_max_bytes(struct request_queue *q,
> +				      unsigned int bytes)
> +{
> +	q->limits.atomic_write_hw_max_sectors = bytes >> SECTOR_SHIFT;
> +	blk_atomic_writes_update_limits(q);
> +}
> +EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);

Ok, so this can silently set a limit that is different to what the
caller asked to have set?

How is the caller supposed to find this out if the smaller limit
that was set is not compatible with their configuration?

i.e. shouldn't this return an error if the requested size cannot
be set exactly as specified?

Same concern about the other limits that can be trimmed to be
smaller than requested.

-Dave.
-- 
Dave Chinner
[email protected]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-02-19 13:01 ` [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
  2024-02-19 18:57   ` Keith Busch
@ 2024-02-20  6:54   ` Christoph Hellwig
  1 sibling, 0 replies; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  6:54 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 10/11] nvme: Atomic write support
  2024-02-19 19:21   ` Keith Busch
@ 2024-02-20  6:55     ` Christoph Hellwig
  2024-02-20  8:19       ` John Garry
  0 siblings, 1 reply; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  6:55 UTC (permalink / raw)
  To: Keith Busch
  Cc: John Garry, axboe, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Alan Adamson

On Mon, Feb 19, 2024 at 12:21:14PM -0700, Keith Busch wrote:
> Maybe patch 11 should be folded into this one. No biggie, the series as
> a whole looks good.

I did suggest just that last round already..

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 09/11] scsi: scsi_debug: Atomic write support
  2024-02-19 13:01 ` [PATCH v4 09/11] scsi: scsi_debug: " John Garry
@ 2024-02-20  7:12   ` Ojaswin Mujoo
  2024-02-20  9:01     ` John Garry
  0 siblings, 1 reply; 51+ messages in thread
From: Ojaswin Mujoo @ 2024-02-20  7:12 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, linux-aio, linux-btrfs,
	io-uring, nilay, ritesh.list

On Mon, Feb 19, 2024 at 01:01:07PM +0000, John Garry wrote:
> Add initial support for atomic writes.
> 
> As is the standard method, feed device properties via module params,
> those being:
> - atomic_max_size_blks
> - atomic_alignment_blks
> - atomic_granularity_blks
> - atomic_max_size_with_boundary_blks
> - atomic_max_boundary_blks
> 
> These just match sbc4r22 section 6.6.4 - Block limits VPD page.
> 
> We just support ATOMIC WRITE (16).
> 
> The major change in the driver is how we lock the device for RW accesses.
> 
> Currently the driver uses a per-device lock for accessing device metadata
> and "media" data (calls to do_device_access()) atomically for the duration
> of the whole read/write command.
> 
> This does not suit verifying atomic writes, the reason being that currently
> all reads/writes are atomic, so using atomic writes does not prove
> anything.
> 
> Change the device access model such that regular writes are atomic only
> on a per-sector basis, while reads and atomic writes are fully atomic.
> 
> As mentioned, since accessing metadata and device media is atomic,
> continue to have regular writes involving metadata - like discard or PI -
> as atomic. We can improve this later.
> 
> Currently we only support the model where overlapping ongoing reads or
> writes wait for the current access to complete before commencing an atomic
> write. This is described in section 4.29.3.2 of the SBC. However, we
> simplify things and wait for all accesses to complete (when issuing an
> atomic write).
> 
> Signed-off-by: John Garry <[email protected]>
> ---

<snip>

> +#define DEF_ATOMIC_WR 0

<snip>

> +static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;

<snip>

> +MODULE_PARM_DESC(atomic_write, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=1)");
Hi John,

The default value here seems to be 0 and not 1. Got me a bit confused
while testing :) 

Regards,
ojaswin

>  MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
>  MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
>  MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
> @@ -6260,6 +6575,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
>  MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
>  MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
>  MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
> +MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
> +MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
> +MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
> +MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
> +MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
>  MODULE_PARM_DESC(uuid_ctl,
>  		 "1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
>  MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
> @@ -7406,6 +7726,7 @@ static int __init scsi_debug_init(void)
>  			return -EINVAL;
>  		}
>  	}
> +
>  	xa_init_flags(per_store_ap, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
>  	if (want_store) {
>  		idx = sdebug_add_store();
> @@ -7613,7 +7934,9 @@ static int sdebug_add_store(void)
>  			map_region(sip, 0, 2);
>  	}
>  
> -	rwlock_init(&sip->macc_lck);
> +	rwlock_init(&sip->macc_data_lck);
> +	rwlock_init(&sip->macc_meta_lck);
> +	rwlock_init(&sip->macc_sector_lck);
>  	return (int)n_idx;
>  err:
>  	sdebug_erase_store((int)n_idx, sip);
> -- 
> 2.31.1
> 

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 19:16   ` David Sterba
@ 2024-02-20  8:13     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  8:13 UTC (permalink / raw)
  To: dsterba
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On 19/02/2024 19:16, David Sterba wrote:
> On Mon, Feb 19, 2024 at 01:01:01PM +0000, John Garry wrote:
>> From: Prasad Singamsetty<[email protected]>
>> --- a/include/uapi/linux/fs.h
>> +++ b/include/uapi/linux/fs.h
>> @@ -301,9 +301,12 @@ typedef int __bitwise __kernel_rwf_t;
>>   /* per-IO O_APPEND */
>>   #define RWF_APPEND	((__force __kernel_rwf_t)0x00000010)
>>   
>> +/* Atomic Write */
>> +#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
> Should this be 0x20 so it's the next bit after RWF_APPEND?

Support for the new flag RWF_NOAPPEND - which has value 0x20 - has been 
picked up in the vfs tree for 6.9.

I had just been basing my series on Linus' release so far, while also 
trying to anticipate any merge issues.
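
That is, once that lands the uapi header should look roughly like this
(sketch):

	/* per-IO negation of O_APPEND */
	#define RWF_NOAPPEND	((__force __kernel_rwf_t)0x00000020)

	/* Atomic Write */
	#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)

so 0x20 is already spoken for.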

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 10/11] nvme: Atomic write support
  2024-02-20  6:55     ` Christoph Hellwig
@ 2024-02-20  8:19       ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  8:19 UTC (permalink / raw)
  To: Christoph Hellwig, Keith Busch
  Cc: axboe, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Alan Adamson

On 20/02/2024 06:55, Christoph Hellwig wrote:
> On Mon, Feb 19, 2024 at 12:21:14PM -0700, Keith Busch wrote:
>> Maybe patch 11 should be folded into this one. No biggie, the series as
>> a whole looks good.
> 
> I did suggest just that last round already..
> 

I created the helper function and folded it in, but not the callsite. 
Not folding in the callsite change did not make sense to me, but that 
was my understanding of the request. Anyway, I can fold everything into 
this patch.

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
  2024-02-19 22:28   ` Dave Chinner
@ 2024-02-20  8:20   ` Christoph Hellwig
  2024-02-20  9:01     ` John Garry
  2024-02-24 18:46   ` Ritesh Harjani
  2 siblings, 1 reply; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  8:20 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);

EXPORT_SYMBOL_GPL for any new feature, please.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-19 22:58   ` Dave Chinner
@ 2024-02-20  8:22     ` Christoph Hellwig
  2024-02-20 10:01     ` John Garry
  1 sibling, 0 replies; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  8:22 UTC (permalink / raw)
  To: Dave Chinner
  Cc: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list

On Tue, Feb 20, 2024 at 09:58:39AM +1100, Dave Chinner wrote:
> > +	lim->atomic_write_hw_max_sectors = 0;
> > +	lim->atomic_write_max_sectors = 0;
> > +	lim->atomic_write_hw_boundary_sectors = 0;
> > +	lim->atomic_write_hw_unit_min_sectors = 0;
> > +	lim->atomic_write_unit_min_sectors = 0;
> > +	lim->atomic_write_hw_unit_max_sectors = 0;
> > +	lim->atomic_write_unit_max_sectors = 0;
> >  }
> 
> Seems to me this function would do better to just
> 
> 	memset(lim, 0, sizeof(*lim));
> 
> and then set all the non-zero fields.

.. which the caller already has done :)  In the block tree this
function looks completely different now and relies on the caller
provided zeroing.

> > +void blk_queue_atomic_write_max_bytes(struct request_queue *q,
> > +				      unsigned int bytes)
> > +{
> > +	q->limits.atomic_write_hw_max_sectors = bytes >> SECTOR_SHIFT;
> > +	blk_atomic_writes_update_limits(q);
> > +}
> > +EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);
> 
> Ok, so this can silently set a limit that is different to what the
> caller asked to have set?
> 
> How is the caller supposed to find this out if the smaller limit
> that was set is not compatible with their configuration?
> 
> i.e. shouldn't this return an error if the requested size cannot
> be set exactly as specified?

That's how the blk limits all work.  The driver provides the hardware
capability for a given value, and the block layer ensures that it works
with other limits imposed by the block layer or by other device limits.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 06/11] block: Add atomic write support for statx
  2024-02-19 13:01 ` [PATCH v4 06/11] block: Add atomic write support for statx John Garry
@ 2024-02-20  8:29   ` Christoph Hellwig
  2024-02-20  9:35     ` John Garry
  2024-02-25 14:20   ` Ritesh Harjani
  1 sibling, 1 reply; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  8:29 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

> +#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)

> +	if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
> +		return;

BDEV_STATX_SUPPORTED_MASK is misleading here.  bdevs support a lot more
fields, these are just the ones needing special attention.  I'd do away
with the extra define and just open code it.

> +	/* If this is a block device inode, override the filesystem
> +	 * attributes with the block device specific parameters
> +	 * that need to be obtained from the bdev backing inode
> +	 */

This is not the normal kernel multi-line comment format.

> +	if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
> +		bdev_statx(path.dentry, stat, request_mask);

I know I touched this last, but does anyone remember why we have
various random fixups in vfs_statx and not in vfs_getattr_nosec, where
> we have more of them and also the inode at hand?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-02-19 18:57   ` Keith Busch
@ 2024-02-20  8:31     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  8:31 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, hch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 19/02/2024 18:57, Keith Busch wrote:
> On Mon, Feb 19, 2024 at 01:01:00PM +0000, John Garry wrote:
>> @@ -53,9 +53,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>>   	struct bio bio;
>>   	ssize_t ret;
>>   
>> -	if (blkdev_dio_unaligned(bdev, pos, iter))
>> -		return -EINVAL;
>> -
>>   	if (nr_pages <= DIO_INLINE_BIO_VECS)
>>   		vecs = inline_vecs;
>>   	else {
>> @@ -171,9 +168,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
>>   	loff_t pos = iocb->ki_pos;
>>   	int ret = 0;
>>   
>> -	if (blkdev_dio_unaligned(bdev, pos, iter))
>> -		return -EINVAL;
>> -
>>   	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
>>   		opf |= REQ_ALLOC_CACHE;
>>   	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
>> @@ -310,9 +304,6 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>   	loff_t pos = iocb->ki_pos;
>>   	int ret = 0;
>>   
>> -	if (blkdev_dio_unaligned(bdev, pos, iter))
>> -		return -EINVAL;
>> -
>>   	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
>>   		opf |= REQ_ALLOC_CACHE;
>>   	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
>> @@ -365,11 +356,16 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>   
>>   static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>>   {
>> +	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
>> +	loff_t pos = iocb->ki_pos;
>>   	unsigned int nr_pages;
> 
> All three of the changed functions also want 'bdev' and 'pos', so maybe
> pass on the savings to them? Unless you think the extended argument list
> would harm readability, or perhaps the compiler optimizes the 2nd access
> out anyway. Either way, this looks good to me.

Yeah, I was thinking about changing the arg lists. Specifically adding 
bdev, as that lookup takes many loads, so maybe I will make that change.

> 
> Reviewed-by: Keith Busch <[email protected]>

cheers


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 10/11] nvme: Atomic write support
  2024-02-19 13:01 ` [PATCH v4 10/11] nvme: " John Garry
  2024-02-19 19:21   ` Keith Busch
@ 2024-02-20  8:31   ` Christoph Hellwig
  2024-02-20  8:50     ` John Garry
  1 sibling, 1 reply; 51+ messages in thread
From: Christoph Hellwig @ 2024-02-20  8:31 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Alan Adamson

Thanks for writing a good commit message!

> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> are provided for a write which straddles this per-lba space boundary. The
> block layer merging policy is such that no merges may occur in which the
> resultant request would straddle such a boundary.
> 
> Unlike SCSI, NVMe specifies no granularity or alignment rules.

Well, the boundary really is sort of a granularity and alignment,
isn't it?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 10/11] nvme: Atomic write support
  2024-02-20  8:31   ` Christoph Hellwig
@ 2024-02-20  8:50     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  8:50 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Alan Adamson

On 20/02/2024 08:31, Christoph Hellwig wrote:
> Thanks for writing a good commit message!
> 
>> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
>> are provided for a write which straddles this per-lba space boundary. The
>> block layer merging policy is such that no merges may occur in which the
>> resultant request would straddle such a boundary.
>>
>> Unlike SCSI, NVMe specifies no granularity or alignment rules.
> 
> Well, the boundary really is sort of a granularity and alignment,
> isn't it?

NVMe does indeed have the boundary rule, but it is not really the same 
as SCSI granularity and alignment.

Anyway, I can word that statement to be clearer.
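
For illustration, with hypothetical numbers: a namespace reporting a 128 KiB
boundary gives no atomic guarantee to a 32 KiB write starting at LBA-space
offset 112 KiB, since it crosses the 128 KiB line. The boundary constrains
where a write sits in LBA space, rather than imposing a granularity or
alignment on the write's own length and start as SCSI does.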

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 09/11] scsi: scsi_debug: Atomic write support
  2024-02-20  7:12   ` Ojaswin Mujoo
@ 2024-02-20  9:01     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  9:01 UTC (permalink / raw)
  To: Ojaswin Mujoo
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, linux-aio, linux-btrfs,
	io-uring, nilay, ritesh.list

On 20/02/2024 07:12, Ojaswin Mujoo wrote:
>> +MODULE_PARM_DESC(atomic_write, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=1)");
> Hi John,
> 
> The default value here seems to be 0 and not 1. Got me a bit confused
> while testing 🙂

I can fix that MODULE_PARM_DESC() text.

I don't think that many disks support atomic writes, so I was leaving 
this disabled by default.
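
That is, keep the default at 0 and just correct the text, something like
(sketch):

	MODULE_PARM_DESC(atomic_write, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)");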

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-20  8:20   ` Christoph Hellwig
@ 2024-02-20  9:01     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  9:01 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On 20/02/2024 08:20, Christoph Hellwig wrote:
>> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
> EXPORT_SYMBOL_GPL for any new feature, please.
ok, thanks.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 06/11] block: Add atomic write support for statx
  2024-02-20  8:29   ` Christoph Hellwig
@ 2024-02-20  9:35     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  9:35 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, kbusch, sagi, jejb, martin.petersen, djwong, viro, brauner,
	dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On 20/02/2024 08:29, Christoph Hellwig wrote:
>> +#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
> 
>> +	if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
>> +		return;
> 
> BDEV_STATX_SUPPORTED_MASK is misleading here.  bdevs support a lot more
> fields, these are just the ones needing special attention.  I'd do away
> with the extra define and just open code it.

ok, fine
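
i.e. open-coded it would be something like (sketch):

	if (!(request_mask & (STATX_DIOALIGN | STATX_WRITE_ATOMIC)))
		return;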

> 
>> +	/* If this is a block device inode, override the filesystem
>> +	 * attributes with the block device specific parameters
>> +	 * that need to be obtained from the bdev backing inode
>> +	 */
> 
> This is not the normal kernel multi-line comment format.
> 

will fix
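
i.e. reflowed to the usual multi-line style:

	/*
	 * If this is a block device inode, override the filesystem
	 * attributes with the block device specific parameters that
	 * need to be obtained from the bdev backing inode.
	 */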

>> +	if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
>> +		bdev_statx(path.dentry, stat, request_mask);
> 
> I know I touched this last, but does anyone remember why we have
> various random fixups in vfs_statx and not in vfs_getattr_nosec, where
> they we have more of them and also the inode at hand?


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-19 22:28   ` Dave Chinner
@ 2024-02-20  9:40     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  9:40 UTC (permalink / raw)
  To: Dave Chinner
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On 19/02/2024 22:28, Dave Chinner wrote:
>>   
>> +/**
>> + * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
>> + * @stat:	Where to fill in the attribute flags
>> + * @unit_min:	Minimum supported atomic write length
>> + * @unit_max:	Maximum supported atomic write length
>> + *
>> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
>> + * atomic write unit_min and unit_max values.
>> + */
>> +void generic_fill_statx_atomic_writes(struct kstat *stat,
>> +				      unsigned int unit_min,
>> +				      unsigned int unit_max)
>> +{
>> +	/* Confirm that the request type is known */
>> +	stat->result_mask |= STATX_WRITE_ATOMIC;
>> +
>> +	/* Confirm that the file attribute type is known */
>> +	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
>> +
>> +	if (unit_min) {
>> +		stat->atomic_write_unit_min = unit_min;
>> +		stat->atomic_write_unit_max = unit_max;
>> +		/* Initially only allow 1x segment */
>> +		stat->atomic_write_segments_max = 1;
>> +
>> +		/* Confirm atomic writes are actually supported */
>> +		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
>> +	}
>> +}
>> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
> What units are these in? Nothing in the patch or commit description
> tells us....

I can extend the current comments to mention that the unit is bytes.
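
e.g. (sketch):

 * @unit_min:	Minimum supported atomic write length in bytes
 * @unit_max:	Maximum supported atomic write length in bytes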

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 22:44   ` Dave Chinner
@ 2024-02-20  9:52     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20  9:52 UTC (permalink / raw)
  To: Dave Chinner
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, Prasad Singamsetty

On 19/02/2024 22:44, Dave Chinner wrote:
> On Mon, Feb 19, 2024 at 01:01:01PM +0000, John Garry wrote:
>> @@ -3523,4 +3535,26 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
>>   extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
>>   			   int advice);
>>   
>> +static inline bool atomic_write_valid(loff_t pos, struct iov_iter *iter,
>> +			   unsigned int unit_min, unsigned int unit_max)
>> +{
>> +	size_t len = iov_iter_count(iter);
>> +
>> +	if (!iter_is_ubuf(iter))
>> +		return false;
>> +
>> +	if (len == unit_min || len == unit_max) {
>> +		/* ok if exactly min or max */
>> +	} else if (len < unit_min || len > unit_max) {
>> +		return false;
>> +	} else if (!is_power_of_2(len)) {
>> +		return false;
>> +	}
> This doesn't need if else if else if and it doesn't need to check
> for exact unit min/max matches. 

This is fastpath code, and I thought it quicker to just check for the exact 
min/max matches first. Based on recent discussions, for FS support I expect 
typically 
len == unit_max.

But I can change to your simpler checks, and later switch back to the 
current method if those FS assumptions hold true.

> The exact matches require the
> length to be a power of 2, so the checks are simply:
> 
> 	if (len < unit_min || len > unit_max)
> 		return false;
> 	if (!is_power_of_2(len))
> 		return false;
> 
>> +	if (pos & (len - 1))
>> +		return false;
> This has typing issues - 64 bit value, 32 bit mask. probably should
> use:
> 
> 	if (!IS_ALIGNED(pos, len))
> 		return false;

ok, good idea.

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-19 22:58   ` Dave Chinner
  2024-02-20  8:22     ` Christoph Hellwig
@ 2024-02-20 10:01     ` John Garry
  1 sibling, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-20 10:01 UTC (permalink / raw)
  To: Dave Chinner
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 19/02/2024 22:58, Dave Chinner wrote:
> On Mon, Feb 19, 2024 at 01:01:03PM +0000, John Garry wrote:
>> Add atomic write support as follows:
>> diff --git a/block/blk-merge.c b/block/blk-merge.c
>> index 74e9e775f13d..12a75a252ca2 100644
>> --- a/block/blk-merge.c
>> +++ b/block/blk-merge.c
>> @@ -18,6 +18,42 @@
>>   #include "blk-rq-qos.h"
>>   #include "blk-throttle.h"
>>   
>> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
>> +					unsigned int front,
>> +					unsigned int back)
>> +{
>> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>> +	unsigned int mask, imask;
>> +	loff_t start, end;
>> +
>> +	if (!boundary)
>> +		return false;
>> +
>> +	start = rq->__sector << SECTOR_SHIFT;
>> +	end = start + rq->__data_len;
>> +
>> +	start -= front;
>> +	end += back;
>> +
>> +	/* We're longer than the boundary, so must be crossing it */
>> +	if (end - start > boundary)
>> +		return true;
>> +
>> +	mask = boundary - 1;
>> +
>> +	/* start/end are boundary-aligned, so cannot be crossing */
>> +	if (!(start & mask) || !(end & mask))
>> +		return false;
>> +
>> +	imask = ~mask;
>> +
>> +	/* Top bits are different, so crossed a boundary */
>> +	if ((start & imask) != (end & imask))
>> +		return true;
>> +
>> +	return false;
>> +}
> I have no way of verifying this function is doing what it is
> supposed to because it's function is undocumented. I have no idea
> what the front/back variables are supposed to represent, and so no
> clue if they are being applied properly.

I'll add proper function header documentation.
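
Something along these lines (a sketch - front/back meaning the byte counts
that a prospective merge would add before/after the request):

	/*
	 * rq_straddles_atomic_write_boundary - check boundary crossing
	 * @rq: the request to check
	 * @front: bytes that a merge would prepend to @rq
	 * @back: bytes that a merge would append to @rq
	 *
	 * Return: true if the resultant range would cross an atomic write
	 * boundary of the queue.
	 */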

> 
> That said, it's also applying unsigned 32 bit mask variables to
> signed 64 bit quantities and trying to do things like "high bit changed"
> checks on the 64 bit variable. This just smells like a future
> source of "large offsets don't work like we expected!" bugs.

I'll change variables to be unsigned and also all the same size.

> 
>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>> index 06ea91e51b8b..176f26374abc 100644
>> --- a/block/blk-settings.c
>> +++ b/block/blk-settings.c
>> @@ -59,6 +59,13 @@ void blk_set_default_limits(struct queue_limits *lim)
>>   	lim->zoned = false;
>>   	lim->zone_write_granularity = 0;
>>   	lim->dma_alignment = 511;
>> +	lim->atomic_write_hw_max_sectors = 0;
>> +	lim->atomic_write_max_sectors = 0;
>> +	lim->atomic_write_hw_boundary_sectors = 0;
>> +	lim->atomic_write_hw_unit_min_sectors = 0;
>> +	lim->atomic_write_unit_min_sectors = 0;
>> +	lim->atomic_write_hw_unit_max_sectors = 0;
>> +	lim->atomic_write_unit_max_sectors = 0;
>>   }
> Seems to me this function would do better to just
> 
> 	memset(lim, 0, sizeof(*lim));
> 
> and then set all the non-zero fields.

Christoph responded about limits here and in 
blk_queue_atomic_write_max_bytes(), so please let me know if still some 
concerns.

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
  2024-02-19 19:16   ` David Sterba
  2024-02-19 22:44   ` Dave Chinner
@ 2024-02-24 18:16   ` Ritesh Harjani
  2024-02-24 18:20     ` Ritesh Harjani
  2024-02-26  8:51     ` John Garry
  2 siblings, 2 replies; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-24 18:16 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty, John Garry

John Garry <[email protected]> writes:

> From: Prasad Singamsetty <[email protected]>
>
> An atomic write is a write issued with torn-write protection, meaning
> that for a power failure or any other hardware failure, all or none of the
> data from the write will be stored, but never a mix of old and new data.
>
> Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
> write is to be issued with torn-write prevention, according to special
> alignment and length rules.
>
> For any syscall interface utilizing struct iocb, add IOCB_ATOMIC to the
> iocb->ki_flags field to indicate the same.
>
> A call to statx will give the relevant atomic write info for a file:
> - atomic_write_unit_min
> - atomic_write_unit_max
> - atomic_write_segments_max
>
> Both min and max values must be a power-of-2.
>
> Applications can avail of the atomic write feature by ensuring that the
> total length of a write is a power-of-2 in size and also sized between
> atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
> must ensure that the write is at a naturally-aligned offset in the file
> wrt the total write length. The value in atomic_write_segments_max
> indicates the upper limit for IOV_ITER iovcnt.
>
> Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
> flag set will have RWF_ATOMIC rejected and not just ignored.
>
> Add a type argument to kiocb_set_rw_flags() to allow reads which have
> RWF_ATOMIC set to be rejected.
>
> Helper function atomic_write_valid() can be used by FSes to verify
> compliant writes.
>
> Signed-off-by: Prasad Singamsetty <[email protected]>
> #jpg: merge into single patch and much rewrite

^^^ this might be a miss I guess.

> Signed-off-by: John Garry <[email protected]>
> ---
>  fs/aio.c                |  8 ++++----
>  fs/btrfs/ioctl.c        |  2 +-
>  fs/read_write.c         |  2 +-
>  include/linux/fs.h      | 36 +++++++++++++++++++++++++++++++++++-
>  include/uapi/linux/fs.h |  5 ++++-
>  io_uring/rw.c           |  4 ++--
>  6 files changed, 47 insertions(+), 10 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index bb2ff48991f3..21bcbc076fd0 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1502,7 +1502,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
>  	iocb_put(iocb);
>  }
>  
> -static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
> +static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int type)

maybe rw_type?

>  {
>  	int ret;
>  
> @@ -1528,7 +1528,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
>  	} else
>  		req->ki_ioprio = get_current_ioprio();
>  
> -	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
> +	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, type);
>  	if (unlikely(ret))
>  		return ret;
>  
> @@ -1580,7 +1580,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb,
>  	struct file *file;
>  	int ret;
>  
> -	ret = aio_prep_rw(req, iocb);
> +	ret = aio_prep_rw(req, iocb, READ);
>  	if (ret)
>  		return ret;
>  	file = req->ki_filp;
> @@ -1607,7 +1607,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
>  	struct file *file;
>  	int ret;
>  
> -	ret = aio_prep_rw(req, iocb);
> +	ret = aio_prep_rw(req, iocb, WRITE);
>  	if (ret)
>  		return ret;
>  	file = req->ki_filp;
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index ac3316e0d11c..455f06d94b11 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -4555,7 +4555,7 @@ static int btrfs_ioctl_encoded_write(struct file *file, void __user *argp, bool
>  		goto out_iov;
>  
>  	init_sync_kiocb(&kiocb, file);
> -	ret = kiocb_set_rw_flags(&kiocb, 0);
> +	ret = kiocb_set_rw_flags(&kiocb, 0, WRITE);
>  	if (ret)
>  		goto out_iov;
>  	kiocb.ki_pos = pos;
> diff --git a/fs/read_write.c b/fs/read_write.c
> index d4c036e82b6c..a7dc1819192d 100644
> --- a/fs/read_write.c
> +++ b/fs/read_write.c
> @@ -730,7 +730,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter,
>  	ssize_t ret;
>  
>  	init_sync_kiocb(&kiocb, filp);
> -	ret = kiocb_set_rw_flags(&kiocb, flags);
> +	ret = kiocb_set_rw_flags(&kiocb, flags, type);
>  	if (ret)
>  		return ret;
>  	kiocb.ki_pos = (ppos ? *ppos : 0);
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 023f37c60709..7271640fd600 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -43,6 +43,7 @@
>  #include <linux/cred.h>
>  #include <linux/mnt_idmapping.h>
>  #include <linux/slab.h>
> +#include <linux/uio.h>
>  
>  #include <asm/byteorder.h>
>  #include <uapi/linux/fs.h>
> @@ -119,6 +120,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
>  #define FMODE_PWRITE		((__force fmode_t)0x10)
>  /* File is opened for execution with sys_execve / sys_uselib */
>  #define FMODE_EXEC		((__force fmode_t)0x20)
> +
> +/* File supports atomic writes */
> +#define FMODE_CAN_ATOMIC_WRITE	((__force fmode_t)0x40)
> +
>  /* 32bit hashes as llseek() offset (for directories) */
>  #define FMODE_32BITHASH         ((__force fmode_t)0x200)
>  /* 64bit hashes as llseek() offset (for directories) */
> @@ -328,6 +333,7 @@ enum rw_hint {
>  #define IOCB_SYNC		(__force int) RWF_SYNC
>  #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
>  #define IOCB_APPEND		(__force int) RWF_APPEND
> +#define IOCB_ATOMIC		(__force int) RWF_ATOMIC
>  

You might also want to add this definition in here too

#define TRACE_IOCB_STRINGS \
<...>
<...>
{ IOCB_ATOMIC, "ATOMIC" }


>  /* non-RWF related bits - start at 16 */
>  #define IOCB_EVENTFD		(1 << 16)
> @@ -3321,7 +3327,7 @@ static inline int iocb_flags(struct file *file)
>  	return res;
>  }
>  
> -static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
> +static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags, int type)

maybe rw_type? 

>  {
>  	int kiocb_flags = 0;
>  
> @@ -3338,6 +3344,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
>  			return -EOPNOTSUPP;
>  		kiocb_flags |= IOCB_NOIO;
>  	}
> +	if (flags & RWF_ATOMIC) {
> +		if (type == READ)
> +			return -EOPNOTSUPP;
> +		if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
> +			return -EOPNOTSUPP;
> +	}
>  	kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
>  	if (flags & RWF_SYNC)
>  		kiocb_flags |= IOCB_DSYNC;
> @@ -3523,4 +3535,26 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
>  extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
>  			   int advice);
>  
> +static inline bool atomic_write_valid(loff_t pos, struct iov_iter *iter,
> +			   unsigned int unit_min, unsigned int unit_max)
> +{
> +	size_t len = iov_iter_count(iter);
> +
> +	if (!iter_is_ubuf(iter))
> +		return false;

There is no mention of this limitation in the commit message of this
patch. Maybe it would be good to capture in the commit message why only
ubuf is supported, and/or any plans to lift this restriction in future?


> +
> +	if (len == unit_min || len == unit_max) {
> +		/* ok if exactly min or max */
> +	} else if (len < unit_min || len > unit_max) {
> +		return false;
> +	} else if (!is_power_of_2(len)) {
> +		return false;
> +	}

Checking for len == unit_min || len == unit_max is redundant when
unit_min and unit_max are already powers of 2.


> +
> +	if (pos & (len - 1))
> +		return false;
> +
> +	return true;
> +}
> +
>  #endif /* _LINUX_FS_H */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 48ad69f7722e..a0975ae81e64 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -301,9 +301,12 @@ typedef int __bitwise __kernel_rwf_t;
>  /* per-IO O_APPEND */
>  #define RWF_APPEND	((__force __kernel_rwf_t)0x00000010)
>  
> +/* Atomic Write */
> +#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
> +
>  /* mask of flags supported by the kernel */
>  #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
> -			 RWF_APPEND)
> +			 RWF_APPEND | RWF_ATOMIC)
>  
>  /* Pagemap ioctl */
>  #define PAGEMAP_SCAN	_IOWR('f', 16, struct pm_scan_arg)
> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index d5e79d9bdc71..f8c022301cf4 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -719,7 +719,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
>  	struct kiocb *kiocb = &rw->kiocb;
>  	struct io_ring_ctx *ctx = req->ctx;
>  	struct file *file = req->file;
> -	int ret;
> +	int ret, type = (mode == FMODE_WRITE) ? WRITE : READ;
>  
>  	if (unlikely(!file || !(file->f_mode & mode)))
>  		return -EBADF;
> @@ -728,7 +728,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
>  		req->flags |= io_file_get_flags(file);
>  
>  	kiocb->ki_flags = file->f_iocb_flags;
> -	ret = kiocb_set_rw_flags(kiocb, rw->flags);
> +	ret = kiocb_set_rw_flags(kiocb, rw->flags, type);
>  	if (unlikely(ret))
>  		return ret;
>  	kiocb->ki_flags |= IOCB_ALLOC_CACHE;
> -- 
> 2.31.1
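
As an aside, a minimal userspace sketch of how this interface is meant
to be driven (hedged: the device path, length and offset here are
assumptions; the length must lie in the statx-reported
[unit_min, unit_max] range and the offset must be naturally aligned to
the write length):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC	0x00000040	/* value from this patch's uapi change */
#endif

int main(void)
{
	struct iovec iov;
	void *buf;
	int fd = open("/dev/sdX", O_WRONLY | O_DIRECT);

	if (fd < 0)
		return 1;
	/* one 4 KiB block, aligned for O_DIRECT */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0xab, 4096);
	iov.iov_base = buf;
	iov.iov_len = 4096;
	/* iovcnt=1, per atomic_write_segments_max; offset 0 is aligned */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) != 4096)
		return 1;
	close(fd);
	return 0;
}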

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-24 18:16   ` Ritesh Harjani
@ 2024-02-24 18:20     ` Ritesh Harjani
  2024-02-26  8:58       ` John Garry
  2024-02-26  8:51     ` John Garry
  1 sibling, 1 reply; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-24 18:20 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty, John Garry

Ritesh Harjani (IBM) <[email protected]> writes:

> John Garry <[email protected]> writes:
>
>> From: Prasad Singamsetty <[email protected]>
>>
>> An atomic write is a write issued with torn-write protection, meaning
>> that for a power failure or any other hardware failure, all or none of the
>> data from the write will be stored, but never a mix of old and new data.
>>
>> Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
>> write is to be issued with torn-write prevention, according to special
>> alignment and length rules.
>>
>> For any syscall interface utilizing struct iocb, add IOCB_ATOMIC for
>> iocb->ki_flags field to indicate the same.
>>
>> A call to statx will give the relevant atomic write info for a file:
>> - atomic_write_unit_min
>> - atomic_write_unit_max
>> - atomic_write_segments_max
>>
>> Both min and max values must be a power-of-2.
>>
>> Applications can avail of the atomic write feature by ensuring that the total
>> length of a write is a power-of-2 in size and also sized between
>> atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
>> must ensure that the write is at a naturally-aligned offset in the file
>> wrt the total write length. The value in atomic_write_segments_max
>> indicates the upper limit for IOV_ITER iovcnt.
>>
>> Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
>> flag set will have RWF_ATOMIC rejected and not just ignored.
>>
>> Add a type argument to kiocb_set_rw_flags() to allow reads which have
>> RWF_ATOMIC set to be rejected.
>>
>> Helper function atomic_write_valid() can be used by FSes to verify
>> compliant writes.

Minor nit. 
maybe generic_atomic_write_valid()? 

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
  2024-02-19 22:28   ` Dave Chinner
  2024-02-20  8:20   ` Christoph Hellwig
@ 2024-02-24 18:46   ` Ritesh Harjani
  2024-02-26  9:07     ` John Garry
  2 siblings, 1 reply; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-24 18:46 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty, John Garry

John Garry <[email protected]> writes:

> From: Prasad Singamsetty <[email protected]>
>
> Extend statx system call to return additional info for atomic write
> support for a file.
>
> Helper function generic_fill_statx_atomic_writes() can be used by FSes to
> fill in the relevant statx fields.
>
> Signed-off-by: Prasad Singamsetty <[email protected]>
> #jpg: relocate bdev support to another patch

^^^ left in by mistake, maybe?
> Signed-off-by: John Garry <[email protected]>
> ---
>  fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
>  include/linux/fs.h        |  3 +++
>  include/linux/stat.h      |  3 +++
>  include/uapi/linux/stat.h |  9 ++++++++-
>  4 files changed, 48 insertions(+), 1 deletion(-)
>
> diff --git a/fs/stat.c b/fs/stat.c
> index 77cdc69eb422..522787a4ab6a 100644
> --- a/fs/stat.c
> +++ b/fs/stat.c
> @@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
>  }
>  EXPORT_SYMBOL(generic_fill_statx_attr);
>  
> +/**
> + * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
> + * @stat:	Where to fill in the attribute flags
> + * @unit_min:	Minimum supported atomic write length
+ * @unit_min:	Minimum supported atomic write length in bytes


> + * @unit_max:	Maximum supported atomic write length
+ * @unit_max:	Maximum supported atomic write length in bytes

mentioning unit of the length might be useful here.

> + *
> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
> + * atomic write unit_min and unit_max values.
> + */
> +void generic_fill_statx_atomic_writes(struct kstat *stat,
> +				      unsigned int unit_min,

This (unit_min) can still go above in the same line.

> +				      unsigned int unit_max)
> +{
> +	/* Confirm that the request type is known */
> +	stat->result_mask |= STATX_WRITE_ATOMIC;
> +
> +	/* Confirm that the file attribute type is known */
> +	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
> +
> +	if (unit_min) {
> +		stat->atomic_write_unit_min = unit_min;
> +		stat->atomic_write_unit_max = unit_max;
> +		/* Initially only allow 1x segment */
> +		stat->atomic_write_segments_max = 1;

Please log info in the commit message about where this limit came
from. Is it because we only support ubuf (which, IIUC, only supports 1
segment)? Later, when we add support for iovec, can this limit be
lifted?

> +
> +		/* Confirm atomic writes are actually supported */
> +		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
> +	}
> +}
> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
> +
>  /**
>   * vfs_getattr_nosec - getattr without security checks
>   * @path: file to get attributes from
> @@ -658,6 +689,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
>  	tmp.stx_mnt_id = stat->mnt_id;
>  	tmp.stx_dio_mem_align = stat->dio_mem_align;
>  	tmp.stx_dio_offset_align = stat->dio_offset_align;
> +	tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
> +	tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
> +	tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
>  
>  	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
>  }
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 7271640fd600..531140a7e27a 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -3167,6 +3167,9 @@ extern const struct inode_operations page_symlink_inode_operations;
>  extern void kfree_link(void *);
>  void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
>  void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
> +void generic_fill_statx_atomic_writes(struct kstat *stat,
> +				      unsigned int unit_min,
> +				      unsigned int unit_max);

We can stay within the 80-column width even with unit_min on the same first line as *stat.


>  extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
>  extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
>  void __inode_add_bytes(struct inode *inode, loff_t bytes);
> diff --git a/include/linux/stat.h b/include/linux/stat.h
> index 52150570d37a..2c5e2b8c6559 100644
> --- a/include/linux/stat.h
> +++ b/include/linux/stat.h
> @@ -53,6 +53,9 @@ struct kstat {
>  	u32		dio_mem_align;
>  	u32		dio_offset_align;
>  	u64		change_cookie;
> +	u32		atomic_write_unit_min;
> +	u32		atomic_write_unit_max;
> +	u32		atomic_write_segments_max;
>  };
>  
>  /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
> diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
> index 2f2ee82d5517..c0e8e10d1de6 100644
> --- a/include/uapi/linux/stat.h
> +++ b/include/uapi/linux/stat.h
> @@ -127,7 +127,12 @@ struct statx {
>  	__u32	stx_dio_mem_align;	/* Memory buffer alignment for direct I/O */
>  	__u32	stx_dio_offset_align;	/* File offset alignment for direct I/O */
>  	/* 0xa0 */
> -	__u64	__spare3[12];	/* Spare space for future expansion */
> +	__u32	stx_atomic_write_unit_min;
> +	__u32	stx_atomic_write_unit_max;
> +	__u32   stx_atomic_write_segments_max;

Let's add one liner for each of these fields similar to how it was done
for others?

/* Minimum supported atomic write length in bytes */
/* Maximum supported atomic write length in bytes */
/* Maximum no. of segments (iovecs?) supported for atomic write */
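
Applied to the hunk above, that would look roughly like this (a sketch
using the wording just proposed; the exact phrasing is open):

	__u32	stx_atomic_write_unit_min;	/* Min atomic write unit in bytes */
	__u32	stx_atomic_write_unit_max;	/* Max atomic write unit in bytes */
	__u32	stx_atomic_write_segments_max;	/* Max no. of segments for an atomic write */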


> +	__u32   __spare1;
> +	/* 0xb0 */
> +	__u64	__spare3[10];	/* Spare space for future expansion */
>  	/* 0x100 */
>  };
>  
> @@ -155,6 +160,7 @@ struct statx {
>  #define STATX_MNT_ID		0x00001000U	/* Got stx_mnt_id */
>  #define STATX_DIOALIGN		0x00002000U	/* Want/got direct I/O alignment info */
>  #define STATX_MNT_ID_UNIQUE	0x00004000U	/* Want/got extended stx_mount_id */
> +#define STATX_WRITE_ATOMIC	0x00008000U	/* Want/got atomic_write_* fields */
>  
>  #define STATX__RESERVED		0x80000000U	/* Reserved for future struct statx expansion */
>  
> @@ -190,6 +196,7 @@ struct statx {
>  #define STATX_ATTR_MOUNT_ROOT		0x00002000 /* Root of a mount */
>  #define STATX_ATTR_VERITY		0x00100000 /* [I] Verity protected file */
>  #define STATX_ATTR_DAX			0x00200000 /* File is currently in DAX state */
> +#define STATX_ATTR_WRITE_ATOMIC		0x00400000 /* File supports atomic write operations */
>  
>  
>  #endif /* _UAPI_LINUX_STAT_H */
> -- 
> 2.31.1

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-19 13:01 ` [PATCH v4 05/11] block: Add core atomic write support John Garry
  2024-02-19 22:58   ` Dave Chinner
@ 2024-02-25 12:09   ` Ritesh Harjani
  2024-02-25 12:21     ` Ritesh Harjani
  2024-02-26  9:23     ` John Garry
  1 sibling, 2 replies; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-25 12:09 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, John Garry

John Garry <[email protected]> writes:

> Add atomic write support as follows:
> - report request_queue atomic write support limits to sysfs and update Doc
> - add helper functions to get request_queue atomic write limits
> - support to safely merge atomic writes
> - add a per-request atomic write flag
> - deal with splitting atomic writes
> - misc helper functions
>
> New sysfs files are added to report the following atomic write limits:
> - atomic_write_boundary_bytes
> - atomic_write_max_bytes
> - atomic_write_unit_max_bytes
> - atomic_write_unit_min_bytes
>
> atomic_write_unit_{min,max}_bytes report the min and max atomic write
> support size, inclusive, and are primarily dictated by HW capability. Both
> values must be a power-of-2. atomic_write_boundary_bytes, if non-zero,
> indicates an LBA space boundary; an atomic write which straddles it is
> no longer executed atomically by the disk. atomic_write_max_bytes is the
> maximum merged size for an atomic write. Often it will be the same value as
> atomic_write_unit_max_bytes.

Instead of explaining sysfs outputs which are derivatives of HW
and request_queue limits (and are also defined in Documentation), maybe we
could explain how those sysfs values are derived -

struct queue_limits {
<...>
	unsigned int		atomic_write_hw_max_sectors;
	unsigned int		atomic_write_max_sectors;
	unsigned int		atomic_write_hw_boundary_sectors;
	unsigned int		atomic_write_hw_unit_min_sectors;
	unsigned int		atomic_write_unit_min_sectors;
	unsigned int		atomic_write_hw_unit_max_sectors;
	unsigned int		atomic_write_unit_max_sectors;
<...>

1. atomic_write_hw_max_sectors comes directly from hw and it need
not be a power of 2.

2. atomic_write_hw_unit_min_sectors and atomic_write_hw_unit_max_sectors
are again defined/derived from hw limits, but rounded down so that
they are always powers of 2.

3. atomic_write_hw_boundary_sectors again comes from the HW boundary limit.
It could either be 0 (which means the device specifies no boundary limit) or a
multiple of unit_max. It need not be a power of 2, however the current
code assumes it to be one (check the callers of blk_queue_atomic_write_boundary_bytes()).

4. atomic_write_max_sectors, atomic_write_unit_min_sectors
and atomic_write_unit_max_sectors are all derived from the above hw limits
inside function blk_atomic_writes_update_limits() based on request_queue
limits.
    a. atomic_write_max_sectors is derived from atomic_write_hw_max_sectors and
       request_queue's max_hw_sectors limit. It also guarantees the max
       sectors that can fit in a single bio.
    b. atomic_write_unit_[min|max]_sectors are derived from atomic_write_hw_unit_[min|max]_sectors,
       request_queue's max_hw_sectors & blk_queue_max_guaranteed_bio_sectors(). Both of these limits
       are kept as powers of 2.

Now coming to sysfs outputs -
1. atomic_write_unit_max_bytes: Same as atomic_write_unit_max_sectors in bytes
2. atomic_write_unit_min_bytes: Same as atomic_write_unit_min_sectors in bytes
3. atomic_write_boundary_bytes: Same as atomic_write_hw_boundary_sectors
in bytes
4. atomic_write_max_bytes: Same as atomic_write_max_sectors in bytes

>
> atomic_write_unit_max_bytes is capped at the maximum data size which we are
> guaranteed to be able to fit in a BIO, as an atomic write must always be
> submitted as a single BIO. This BIO max size is dictated by the number of

Here it says that the atomic write must always be submitted as a single
bio. From where to where? I think you meant from FS to block layer.
Because otherwise we still allow request/bio merging inside the block
layer based on the request queue limits we defined above, i.e. bios can
be chained to form
      rq->biotail->bi_next = next_rq->bio
as long as the merged request is within the queue_limits.

i.e. atomic write requests can be merged as long as -
    - both rqs have REQ_ATOMIC set
    - blk_rq_sectors(final_rq) <= q->limits.atomic_write_max_sectors
    - final rq formed should not straddle limits->atomic_write_hw_boundary_sectors

However, splitting of an atomic write request is not allowed; if it
happens, we fail the I/O req & return -EINVAL.

> segments allowed which the request queue can support and the number of
> bvecs a BIO can fit, BIO_MAX_VECS. Currently we rely on userspace issuing a
> write with iovcnt=1 for IOV_ITER - as such, we can rely on each segment
> containing PAGE_SIZE of data, apart from the first+last, which each can
> fit logical block size of data. Note that here we rely on the direct IO
> rule for alignment, that each iovec needs to be logical block size
> aligned/length multiple. Atomic writes may be supported for buffered IO in
> future, but it would still make sense to apply that direct IO rule there.
>
> atomic_write_max_sectors is capped at max_hw_sectors, but is not also
> capped at max_sectors. The value in max_sectors can be controlled from
> userspace, and it would only cause trouble if userspace could limit
> atomic_write_unit_max_bytes and the other atomic write limits.
>
> Atomic writes may be merged under the following conditions:
> - total request length <= atomic_write_max_bytes
> - the merged write does not straddle a boundary, if any
>
> It is only permissible to merge an atomic write with another atomic
> write, i.e. it is not possible to merge an atomic and non-atomic write.
> There are many reasons for this, like:
> - SCSI has a dedicated atomic write command, so a merged atomic and
>   non-atomic needs to be issued as an atomic write, putting an unnecessary
>   burden on the disk to issue the merged write atomically
> - Dimensions of the merged non-atomic write need to be checked for size/
>   offset to conform to atomic write rules, which adds overhead
> - Typically only atomic writes or non-atomic writes are expected for a
>   file during normal processing, so there is no expected use-case to cater for.
>
> Functions get_max_io_size() and blk_queue_get_max_sectors() are modified to
> handle atomic writes max length - those functions are used by the merge
> code.
>
> An atomic write cannot be split under any circumstances. In the case that
> an atomic write needs to be split, we reject the IO. If any atomic write
> needs to be split, it is most likely because of either:
> - atomic_write_unit_max_bytes reported is incorrect.
> - whoever submitted the atomic write BIO did not properly adhere to the
>   request_queue limits.
>
> All atomic write limits are by default set to 0 to indicate no atomic write
> support. Even though it is assumed by Linux that a logical block can always
> be atomically written, we ignore this as it is not of particular interest.
> Stacked devices are just not supported either for now.
>
> Flag REQ_ATOMIC is used for indicating an atomic write.
>
> Helper function bdev_can_atomic_write() is added to indicate whether
> atomic writes may be issued to a bdev. It ensures that if the bdev is a
> partition, that the partition is properly aligned with
> atomic_write_unit_min_sectors and atomic_write_hw_boundary_sectors.

IMHO, the commit message can definitely use a re-write. I agree that you
have put in a lot of information, but I think it can be more organized.

>
> Contains significant contributions from:
> Himanshu Madhani <[email protected]>

Maybe it can use a better tag then, see
"Documentation/process/submitting-patches.rst".

>
> Signed-off-by: John Garry <[email protected]>
> ---
>  Documentation/ABI/stable/sysfs-block |  52 ++++++++++++++
>  block/blk-merge.c                    |  91 ++++++++++++++++++++++-
>  block/blk-settings.c                 | 103 +++++++++++++++++++++++++++
>  block/blk-sysfs.c                    |  33 +++++++++
>  block/blk.h                          |   3 +
>  include/linux/blk_types.h            |   2 +
>  include/linux/blkdev.h               |  60 ++++++++++++++++
>  7 files changed, 343 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
> index 1fe9a553c37b..4c775f4bdefe 100644
> --- a/Documentation/ABI/stable/sysfs-block
> +++ b/Documentation/ABI/stable/sysfs-block
> @@ -21,6 +21,58 @@ Description:
>  		device is offset from the internal allocation unit's
>  		natural alignment.
>
> +What:		/sys/block/<disk>/atomic_write_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter specifies the maximum atomic write
> +		size reported by the device. This parameter is relevant
> +		for merging of writes, where a merged atomic write
> +		operation must not exceed this number of bytes.
> +		This parameter may be greater than the value in
> +		atomic_write_unit_max_bytes as
> +		atomic_write_unit_max_bytes will be rounded down to a
> +		power-of-two and atomic_write_unit_max_bytes may also be
> +		limited by some other queue limits, such as max_segments.
> +		This parameter - along with atomic_write_unit_min_bytes
> +		and atomic_write_unit_max_bytes - will not be larger than
> +		max_hw_sectors_kb, but may be larger than max_sectors_kb.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_min_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter specifies the smallest block which can
> +		be written atomically with an atomic write operation. All
> +		atomic write operations must begin at an
> +		atomic_write_unit_min boundary and must be multiples of
> +		atomic_write_unit_min. This value must be a power-of-two.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter defines the largest block which can be
> +		written atomically with an atomic write operation. This
> +		value must be a multiple of atomic_write_unit_min and must
> +		be a power-of-two. This value will not be larger than
> +		atomic_write_max_bytes.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_boundary_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] A device may need to internally split I/Os which
> +		straddle a given logical block address boundary. In that
> +		case a single atomic write operation will be processed as
> +		one or more sub-operations which each complete atomically.
> +		This parameter specifies the size in bytes of the atomic
> +		boundary if one is reported by the device. This value must
> +		be a power-of-two.
> +
>
>  What:		/sys/block/<disk>/diskseq
>  Date:		February 2021
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 74e9e775f13d..12a75a252ca2 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -18,6 +18,42 @@
>  #include "blk-rq-qos.h"
>  #include "blk-throttle.h"
>  

/* A comment explaining this function and arguments could be helpful */

> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
> +					unsigned int front,
> +					unsigned int back)

Better names would perhaps be start_adjust, end_adjust?

> +{
> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
> +	unsigned int mask, imask;
> +	loff_t start, end;

start_rq_pos, end_rq_pos maybe?

> +
> +	if (!boundary)
> +		return false;
> +
> +	start = rq->__sector << SECTOR_SHIFT;

blk_rq_pos(rq) perhaps?

> +	end = start + rq->__data_len;

blk_rq_bytes(rq) perhaps? It should be..
> +
> +	start -= front;
> +	end += back;
> +
> +	/* We're longer than the boundary, so must be crossing it */
> +	if (end - start > boundary)
> +		return true;
> +
> +	mask = boundary - 1;
> +
> +	/* start/end are boundary-aligned, so cannot be crossing */
> +	if (!(start & mask) || !(end & mask))
> +		return false;
> +
> +	imask = ~mask;
> +
> +	/* Top bits are different, so crossed a boundary */
> +	if ((start & imask) != (end & imask))
> +		return true;

The last condition looks wrong. Shouldn't it be end - 1?

> +
> +	return false;
> +}

Can we do something like this?

static bool rq_straddles_atomic_write_boundary(struct request *rq,
					       unsigned int start_adjust,
					       unsigned int end_adjust)
{
	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
	unsigned long boundary_mask;
	unsigned long start_rq_pos, end_rq_pos;

	if (!boundary)
		return false;

	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
	end_rq_pos = start_rq_pos + blk_rq_bytes(rq);

	start_rq_pos -= start_adjust;
	end_rq_pos += end_adjust;

	boundary_mask = boundary - 1;

	if ((start_rq_pos | boundary_mask) != (end_rq_pos | boundary_mask))
		return true;

	return false;
}

I was thinking this check should cover all cases? Thoughts?


> +
>  static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
>  {
>  	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
> @@ -167,7 +203,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
>  {
>  	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
>  	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
> -	unsigned max_sectors = lim->max_sectors, start, end;
> +	unsigned max_sectors, start, end;
> +
> +	/*
> +	 * We ignore lim->max_sectors for atomic writes simply because
> +	 * it may be less than the bio size, which we cannot tolerate.
> +	 */
> +	if (bio->bi_opf & REQ_ATOMIC)
> +		max_sectors = lim->atomic_write_max_sectors;
> +	else
> +		max_sectors = lim->max_sectors;
>
>  	if (lim->chunk_sectors) {
>  		max_sectors = min(max_sectors,
> @@ -305,6 +350,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
>  	*segs = nsegs;
>  	return NULL;
>  split:
> +	if (bio->bi_opf & REQ_ATOMIC) {
> +		bio->bi_status = BLK_STS_IOERR;
> +		bio_endio(bio);
> +		return ERR_PTR(-EINVAL);
> +	}
>  	/*
>  	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
>  	 * with EAGAIN if splitting is required and return an error pointer.
> @@ -645,6 +695,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
>  		return 0;
>  	}
>
> +	if (req->cmd_flags & REQ_ATOMIC) {
> +		if (rq_straddles_atomic_write_boundary(req,
> +				0, bio->bi_iter.bi_size)) {
> +			return 0;
> +		}
> +	}
> +
>  	return ll_new_hw_segment(req, bio, nr_segs);
>  }
>
> @@ -664,6 +721,13 @@ static int ll_front_merge_fn(struct request *req, struct bio *bio,
>  		return 0;
>  	}
>
> +	if (req->cmd_flags & REQ_ATOMIC) {
> +		if (rq_straddles_atomic_write_boundary(req,
> +				bio->bi_iter.bi_size, 0)) {
> +			return 0;
> +		}
> +	}
> +
>  	return ll_new_hw_segment(req, bio, nr_segs);
>  }
>
> @@ -700,6 +764,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
>  	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
>  		return 0;
>
> +	if (req->cmd_flags & REQ_ATOMIC) {
> +		if (rq_straddles_atomic_write_boundary(req,
> +				0, blk_rq_bytes(next))) {
> +			return 0;
> +		}
> +	}
> +
>  	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
>  	if (total_phys_segments > blk_rq_get_max_segments(req))
>  		return 0;
> @@ -795,6 +866,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
>  	return ELEVATOR_NO_MERGE;
>  }
>
> +static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
> +					      struct bio *bio)
> +{
> +	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
> +}
> +
> +static bool blk_atomic_write_mergeable_rqs(struct request *rq,
> +					   struct request *next)
> +{
> +	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
> +}
> +
>  /*
>   * For non-mq, this has to be called with the request spinlock acquired.
>   * For mq with scheduling, the appropriate queue wide lock should be held.
> @@ -814,6 +897,9 @@ static struct request *attempt_merge(struct request_queue *q,
>  	if (req->ioprio != next->ioprio)
>  		return NULL;
>
> +	if (!blk_atomic_write_mergeable_rqs(req, next))
> +		return NULL;
> +
>  	/*
>  	 * If we are allowed to merge, then append bio list
>  	 * from next to rq and release next. merge_requests_fn
> @@ -941,6 +1027,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
>  	if (rq->ioprio != bio_prio(bio))
>  		return false;
>
> +	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
> +		return false;
> +
>  	return true;
>  }
>
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 06ea91e51b8b..176f26374abc 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -59,6 +59,13 @@ void blk_set_default_limits(struct queue_limits *lim)
>  	lim->zoned = false;
>  	lim->zone_write_granularity = 0;
>  	lim->dma_alignment = 511;
> +	lim->atomic_write_hw_max_sectors = 0;
> +	lim->atomic_write_max_sectors = 0;
> +	lim->atomic_write_hw_boundary_sectors = 0;
> +	lim->atomic_write_hw_unit_min_sectors = 0;
> +	lim->atomic_write_unit_min_sectors = 0;
> +	lim->atomic_write_hw_unit_max_sectors = 0;
> +	lim->atomic_write_unit_max_sectors = 0;
>  }
>
>  /**
> @@ -101,6 +108,44 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
>  }
>  EXPORT_SYMBOL(blk_queue_bounce_limit);
>
> +
> +/*
> + * Returns max guaranteed sectors which we can fit in a bio. For convenience of
> + * users, rounddown_pow_of_two() the return value.
> + *
> + * We always assume that we can fit at least PAGE_SIZE in a segment, apart
> + * from first and last segments.
> + */
> +static unsigned int blk_queue_max_guaranteed_bio_sectors(
> +					struct queue_limits *limits,
> +					struct request_queue *q)
> +{
> +	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
> +	unsigned int length;
> +
> +	length = min(max_segments, 2) * queue_logical_block_size(q);
> +	if (max_segments > 2)
> +		length += (max_segments - 2) * PAGE_SIZE;
> +
> +	return rounddown_pow_of_two(length >> SECTOR_SHIFT);
> +}
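
To make the arithmetic here concrete (my numbers, assuming a 4 KiB
PAGE_SIZE, 512 B logical blocks and max_segments capped at
BIO_MAX_VECS = 256):

	length = 2 * 512 + 254 * 4096 = 1041408 bytes
	1041408 >> SECTOR_SHIFT = 2034 sectors
	rounddown_pow_of_two(2034) = 1024 sectors, i.e. 512 KiB guaranteed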
> +
> +static void blk_atomic_writes_update_limits(struct request_queue *q)
> +{
> +	struct queue_limits *limits = &q->limits;
> +	unsigned int max_hw_sectors =
> +		rounddown_pow_of_two(limits->max_hw_sectors);
> +	unsigned int unit_limit = min(max_hw_sectors,
> +		blk_queue_max_guaranteed_bio_sectors(limits, q));
> +
> +	limits->atomic_write_max_sectors =
> +		min(limits->atomic_write_hw_max_sectors, max_hw_sectors);
> +	limits->atomic_write_unit_min_sectors =
> +		min(limits->atomic_write_hw_unit_min_sectors, unit_limit);
> +	limits->atomic_write_unit_max_sectors =
> +		min(limits->atomic_write_hw_unit_max_sectors, unit_limit);
> +}
> +
>  /**
>   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
>   * @q:  the request queue for the device
> @@ -145,6 +190,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>  				 limits->logical_block_size >> SECTOR_SHIFT);
>  	limits->max_sectors = max_sectors;
>
> +	blk_atomic_writes_update_limits(q);
> +
>  	if (!q->disk)
>  		return;
>  	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
> @@ -182,6 +229,62 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
>  }
>  EXPORT_SYMBOL(blk_queue_max_discard_sectors);
>
> +/**
> + * blk_queue_atomic_write_max_bytes - set max bytes supported by
> + * the device for atomic write operations.
> + * @q:  the request queue for the device
> + * @bytes: maximum bytes supported
> + */
> +void blk_queue_atomic_write_max_bytes(struct request_queue *q,
> +				      unsigned int bytes)
> +{
> +	q->limits.atomic_write_hw_max_sectors = bytes >> SECTOR_SHIFT;
> +	blk_atomic_writes_update_limits(q);
> +}
> +EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);
> +
> +/**
> + * blk_queue_atomic_write_boundary_bytes - Device's logical block address space
> + * which an atomic write should not cross.
> + * @q:  the request queue for the device
> + * @bytes: must be a power-of-two.
> + */
> +void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
> +					   unsigned int bytes)
> +{
> +	q->limits.atomic_write_hw_boundary_sectors = bytes >> SECTOR_SHIFT;
> +}
> +EXPORT_SYMBOL(blk_queue_atomic_write_boundary_bytes);
> +
> +/**
> + * blk_queue_atomic_write_unit_min_sectors - smallest unit that can be written
> + * atomically to the device.
> + * @q:  the request queue for the device
> + * @sectors: must be a power-of-two.
> + */
> +void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
> +					     unsigned int sectors)
> +{
> +
> +	q->limits.atomic_write_hw_unit_min_sectors = sectors;
> +	blk_atomic_writes_update_limits(q);
> +}
> +EXPORT_SYMBOL(blk_queue_atomic_write_unit_min_sectors);
> +
> +/*
> + * blk_queue_atomic_write_unit_max_sectors - largest unit that can be written
> + * atomically to the device.
> + * @q: the request queue for the device
> + * @sectors: must be a power-of-two.
> + */
> +void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
> +					     unsigned int sectors)
> +{
> +	q->limits.atomic_write_hw_unit_max_sectors = sectors;
> +	blk_atomic_writes_update_limits(q);
> +}
> +EXPORT_SYMBOL(blk_queue_atomic_write_unit_max_sectors);
> +
>  /**
>   * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
>   * @q:  the request queue for the device
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 6b2429cad81a..3978f14f9769 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
>  	return queue_var_show(queue_max_discard_segments(q), page);
>  }
>
> +static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
> +						char *page)
> +{
> +	return queue_var_show(queue_atomic_write_max_bytes(q), page);
> +}
> +
> +static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
> +						char *page)
> +{
> +	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
> +}
> +
> +static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
> +						char *page)
> +{
> +	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
> +}
> +
> +static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
> +						char *page)
> +{
> +	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
> +}
> +
>  static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
>  {
>  	return queue_var_show(q->limits.max_integrity_segments, page);
> @@ -502,6 +526,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
>  QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
>  QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
>
> +QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
> +QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
> +QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
> +QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
> +
>  QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
>  QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
>  QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
> @@ -629,6 +658,10 @@ static struct attribute *queue_attrs[] = {
>  	&queue_discard_max_entry.attr,
>  	&queue_discard_max_hw_entry.attr,
>  	&queue_discard_zeroes_data_entry.attr,
> +	&queue_atomic_write_max_bytes_entry.attr,
> +	&queue_atomic_write_boundary_entry.attr,
> +	&queue_atomic_write_unit_min_entry.attr,
> +	&queue_atomic_write_unit_max_entry.attr,
>  	&queue_write_same_max_entry.attr,
>  	&queue_write_zeroes_max_entry.attr,
>  	&queue_zone_append_max_entry.attr,
> diff --git a/block/blk.h b/block/blk.h
> index 050696131329..6ba8333fcf26 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -178,6 +178,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
>  	if (unlikely(op == REQ_OP_WRITE_ZEROES))
>  		return q->limits.max_write_zeroes_sectors;
>
> +	if (rq->cmd_flags & REQ_ATOMIC)
> +		return q->limits.atomic_write_max_sectors;
> +
>  	return q->limits.max_sectors;
>  }
>
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index f288c94374b3..cd7cceb8565d 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -422,6 +422,7 @@ enum req_flag_bits {
>  	__REQ_DRV,		/* for driver use */
>  	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
>
> +	__REQ_ATOMIC,		/* for atomic write operations */
>  	/*
>  	 * Command specific flags, keep last:
>  	 */
> @@ -448,6 +449,7 @@ enum req_flag_bits {
>  #define REQ_RAHEAD	(__force blk_opf_t)(1ULL << __REQ_RAHEAD)
>  #define REQ_BACKGROUND	(__force blk_opf_t)(1ULL << __REQ_BACKGROUND)
>  #define REQ_NOWAIT	(__force blk_opf_t)(1ULL << __REQ_NOWAIT)
> +#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)

Let's add this in the same order as of __REQ_ATOMIC i.e. after
REQ_FS_PRIVATE macro

>  #define REQ_POLLED	(__force blk_opf_t)(1ULL << __REQ_POLLED)
>  #define REQ_ALLOC_CACHE	(__force blk_opf_t)(1ULL << __REQ_ALLOC_CACHE)
>  #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 99e4f5e72213..40ed56ef4937 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -299,6 +299,14 @@ struct queue_limits {
>  	unsigned int		discard_alignment;
>  	unsigned int		zone_write_granularity;
>
> +	unsigned int		atomic_write_hw_max_sectors;
> +	unsigned int		atomic_write_max_sectors;
> +	unsigned int		atomic_write_hw_boundary_sectors;
> +	unsigned int		atomic_write_hw_unit_min_sectors;
> +	unsigned int		atomic_write_unit_min_sectors;
> +	unsigned int		atomic_write_hw_unit_max_sectors;
> +	unsigned int		atomic_write_unit_max_sectors;
> +

1 liner comment for above members please?

>  	unsigned short		max_segments;
>  	unsigned short		max_integrity_segments;
>  	unsigned short		max_discard_segments;
> @@ -885,6 +893,14 @@ void blk_queue_zone_write_granularity(struct request_queue *q,
>  				      unsigned int size);
>  extern void blk_queue_alignment_offset(struct request_queue *q,
>  				       unsigned int alignment);
> +void blk_queue_atomic_write_max_bytes(struct request_queue *q,
> +				unsigned int bytes);
> +void blk_queue_atomic_write_unit_max_sectors(struct request_queue *q,
> +				unsigned int sectors);
> +void blk_queue_atomic_write_unit_min_sectors(struct request_queue *q,
> +				unsigned int sectors);
> +void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
> +				unsigned int bytes);
>  void disk_update_readahead(struct gendisk *disk);
>  extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
>  extern void blk_queue_io_min(struct request_queue *q, unsigned int min);
> @@ -1291,6 +1307,30 @@ static inline int queue_dma_alignment(const struct request_queue *q)
>  	return q ? q->limits.dma_alignment : 511;
>  }
>
> +static inline unsigned int
> +queue_atomic_write_unit_max_bytes(const struct request_queue *q)
> +{
> +	return q->limits.atomic_write_unit_max_sectors << SECTOR_SHIFT;
> +}
> +
> +static inline unsigned int
> +queue_atomic_write_unit_min_bytes(const struct request_queue *q)
> +{
> +	return q->limits.atomic_write_unit_min_sectors << SECTOR_SHIFT;
> +}
> +
> +static inline unsigned int
> +queue_atomic_write_boundary_bytes(const struct request_queue *q)
> +{
> +	return q->limits.atomic_write_hw_boundary_sectors << SECTOR_SHIFT;
> +}
> +
> +static inline unsigned int
> +queue_atomic_write_max_bytes(const struct request_queue *q)
> +{
> +	return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
> +}
> +
>  static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
>  {
>  	return queue_dma_alignment(bdev_get_queue(bdev));
> @@ -1540,6 +1580,26 @@ struct io_comp_batch {
>  	void (*complete)(struct io_comp_batch *);
>  };
>
> +static inline bool bdev_can_atomic_write(struct block_device *bdev)
> +{
> +	struct request_queue *bd_queue = bdev->bd_queue;
> +	struct queue_limits *limits = &bd_queue->limits;
> +
> +	if (!limits->atomic_write_unit_min_sectors)
> +		return false;
> +
> +	if (bdev_is_partition(bdev)) {
> +		sector_t bd_start_sect = bdev->bd_start_sect;
> +		unsigned int granularity = max(

atomic_align perhaps?

> +				limits->atomic_write_unit_min_sectors,
> +				limits->atomic_write_hw_boundary_sectors);
> +		if (do_div(bd_start_sect, granularity))
> +			return false;
> +	}

Since atomic_align is a power of 2, why not use IS_ALIGNED()
(a bitwise operation instead of a division)?
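
That is, something like this sketch (granularity as computed above is
the max of two powers of 2, hence itself a power of 2):

	if (bdev_is_partition(bdev)) {
		unsigned int granularity = max(
				limits->atomic_write_unit_min_sectors,
				limits->atomic_write_hw_boundary_sectors);

		if (!IS_ALIGNED(bdev->bd_start_sect, granularity))
			return false;
	}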

> +
> +	return true;
> +}
> +
>  #define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
>
>  #endif /* _LINUX_BLKDEV_H */
> --
> 2.31.1

-ritesh

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-25 12:09   ` Ritesh Harjani
@ 2024-02-25 12:21     ` Ritesh Harjani
  2024-02-26  9:23     ` John Garry
  1 sibling, 0 replies; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-25 12:21 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, John Garry

Ritesh Harjani (IBM) <[email protected]> writes:

> John Garry <[email protected]> writes:
>
>> +
>> +	mask = boundary - 1;
>> +
>> +	/* start/end are boundary-aligned, so cannot be crossing */
>> +	if (!(start & mask) || !(end & mask))
>> +		return false;
>> +
>> +	imask = ~mask;
>> +
>> +	/* Top bits are different, so crossed a boundary */
>> +	if ((start & imask) != (end & imask))
>> +		return true;
>
> The last condition looks wrong. Shouldn't it be end - 1?
>
>> +
>> +	return false;
>> +}
>
> Can we do something like this?
>
> static bool rq_straddles_atomic_write_boundary(struct request *rq,
> 					       unsigned int start_adjust,
> 					       unsigned int end_adjust)
> {
> 	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
> 	unsigned long boundary_mask;
> 	unsigned long start_rq_pos, end_rq_pos;
>
> 	if (!boundary)
> 		return false;
>
> 	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
> 	end_rq_pos = start_rq_pos + blk_rq_bytes(rq);

my bad. I meant this...

   end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
>
> 	start_rq_pos -= start_adjust;
> 	end_rq_pos += end_adjust;
>
> 	boundary_mask = boundary - 1;
>
> 	if ((start_rq_pos | boundary_mask) != (end_rq_pos | boundary_mask))
> 		return true;
>
> 	return false;
> }
>
> I was thinking this check should cover all cases? Thoughts?
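
A quick worked example of why the -1 matters (my arithmetic, assuming a
64 KiB boundary): a request covering exactly [64 KiB, 128 KiB) must not
count as straddling.

	start_rq_pos = 0x10000, bytes = 0x10000, boundary_mask = 0xffff
	without -1: end_rq_pos = 0x20000
		(0x10000 | 0xffff) = 0x1ffff, (0x20000 | 0xffff) = 0x2ffff
		-> falsely reported as straddling
	with -1:    end_rq_pos = 0x1ffff (inclusive last byte)
		both sides equal 0x1ffff -> correctly no straddle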
>
>

-ritesh

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 06/11] block: Add atomic write support for statx
  2024-02-19 13:01 ` [PATCH v4 06/11] block: Add atomic write support for statx John Garry
  2024-02-20  8:29   ` Christoph Hellwig
@ 2024-02-25 14:20   ` Ritesh Harjani
  2024-02-26  9:36     ` John Garry
  1 sibling, 1 reply; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-25 14:20 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty, John Garry

John Garry <[email protected]> writes:

> From: Prasad Singamsetty <[email protected]>
>
> Extend statx system call to return additional info for atomic write
> support if the specified file is a block device.
>
> Signed-off-by: Prasad Singamsetty <[email protected]>
> Signed-off-by: John Garry <[email protected]>
> ---
>  block/bdev.c           | 37 +++++++++++++++++++++++++++----------
>  fs/stat.c              | 13 ++++++-------
>  include/linux/blkdev.h |  5 +++--
>  3 files changed, 36 insertions(+), 19 deletions(-)
>
> diff --git a/block/bdev.c b/block/bdev.c
> index e9f1b12bd75c..0dada9902bd4 100644
> --- a/block/bdev.c
> +++ b/block/bdev.c
> @@ -1116,24 +1116,41 @@ void sync_bdevs(bool wait)
>  	iput(old_inode);
>  }
>  
> +#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
> +
>  /*
> - * Handle STATX_DIOALIGN for block devices.
> - *
> - * Note that the inode passed to this is the inode of a block device node file,
> - * not the block device's internal inode.  Therefore it is *not* valid to use
> - * I_BDEV() here; the block device has to be looked up by i_rdev instead.
> + * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
>   */
> -void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
> +void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask)

Why change this to dentry? Why not keep it as the inode itself?

-ritesh

>  {
>  	struct block_device *bdev;
>  
> -	bdev = blkdev_get_no_open(inode->i_rdev);
> +	if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
> +		return;
> +
> +	/*
> +	 * Note that d_backing_inode() returns the inode of a block device node
> +	 * file, not the block device's internal inode.  Therefore it is *not*
> +	 * valid to use I_BDEV() here; the block device has to be looked up by
> +	 * i_rdev instead.
> +	 */
> +	bdev = blkdev_get_no_open(d_backing_inode(dentry)->i_rdev);
>  	if (!bdev)
>  		return;
>  
> -	stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
> -	stat->dio_offset_align = bdev_logical_block_size(bdev);
> -	stat->result_mask |= STATX_DIOALIGN;
> +	if (request_mask & STATX_DIOALIGN) {
> +		stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
> +		stat->dio_offset_align = bdev_logical_block_size(bdev);
> +		stat->result_mask |= STATX_DIOALIGN;
> +	}
> +
> +	if (request_mask & STATX_WRITE_ATOMIC && bdev_can_atomic_write(bdev)) {
> +		struct request_queue *bd_queue = bdev->bd_queue;
> +
> +		generic_fill_statx_atomic_writes(stat,
> +			queue_atomic_write_unit_min_bytes(bd_queue),
> +			queue_atomic_write_unit_max_bytes(bd_queue));
> +	}
>  
>  	blkdev_put_no_open(bdev);
>  }
> diff --git a/fs/stat.c b/fs/stat.c
> index 522787a4ab6a..bd0618477702 100644
> --- a/fs/stat.c
> +++ b/fs/stat.c
> @@ -290,13 +290,12 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
>  		stat->attributes |= STATX_ATTR_MOUNT_ROOT;
>  	stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;
>  
> -	/* Handle STATX_DIOALIGN for block devices. */
> -	if (request_mask & STATX_DIOALIGN) {
> -		struct inode *inode = d_backing_inode(path.dentry);
> -
> -		if (S_ISBLK(inode->i_mode))
> -			bdev_statx_dioalign(inode, stat);
> -	}
> +	/* If this is a block device inode, override the filesystem
> +	 * attributes with the block device specific parameters
> +	 * that need to be obtained from the bdev backing inode
> +	 */
> +	if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
> +		bdev_statx(path.dentry, stat, request_mask);
>  
>  	path_put(&path);
>  	if (retry_estale(error, lookup_flags)) {
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 40ed56ef4937..4f04456f1250 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -1541,7 +1541,7 @@ int sync_blockdev(struct block_device *bdev);
>  int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
>  int sync_blockdev_nowait(struct block_device *bdev);
>  void sync_bdevs(bool wait);
> -void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
> +void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask);
>  void printk_all_partitions(void);
>  int __init early_lookup_bdev(const char *pathname, dev_t *dev);
>  #else
> @@ -1559,7 +1559,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
>  static inline void sync_bdevs(bool wait)
>  {
>  }
> -static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
> +static inline void bdev_statx(struct dentry *dentry, struct kstat *stat,
> +				u32 request_mask)
>  {
>  }
>  static inline void printk_all_partitions(void)
> -- 
> 2.31.1

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 07/11] block: Add fops atomic write support
  2024-02-19 13:01 ` [PATCH v4 07/11] block: Add fops atomic write support John Garry
@ 2024-02-25 14:46   ` Ritesh Harjani
  2024-02-26  9:46     ` John Garry
  0 siblings, 1 reply; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-25 14:46 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, John Garry

John Garry <[email protected]> writes:

> Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.
>
> It must be ensured that the atomic write adheres to its rules, like
> naturally aligned offset, so call blkdev_dio_invalid() ->
> blkdev_atomic_write_valid() [with renaming blkdev_dio_unaligned() to
> blkdev_dio_invalid()] for this purpose.
>
> In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
> produce a single BIO, so error in this case.

BIO_MAX_VECS is 256, so that is roughly a 1MB limit with a 4k pagesize.
Is there any mention of why this limit exists for now? Is it due to code
complexity that we only support a single bio?
As I see it, you have still enabled request merging in the block layer
for atomic requests, so the block layer can essentially submit bio
chains to the device driver. So why not support this case and let the
user submit a request larger than 1MB?

>
> Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
> and the associated file flag is for O_DIRECT.
>
> Signed-off-by: John Garry <[email protected]>
> ---
>  block/fops.c | 31 ++++++++++++++++++++++++++++---
>  1 file changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/block/fops.c b/block/fops.c
> index 28382b4d097a..563189c2fc5a 100644
> --- a/block/fops.c
> +++ b/block/fops.c
> @@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
>  	return opf;
>  }
>  
> -static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
> -			      struct iov_iter *iter)
> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
> +				      struct iov_iter *iter)
>  {
> +	struct request_queue *q = bdev_get_queue(bdev);
> +	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
> +	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
> +
> +	return atomic_write_valid(pos, iter, min_bytes, max_bytes);

generic_atomic_write_valid() would be better for this function. However,
I have already commented about this in a previous reply.

> +}
> +
> +static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
> +				struct iov_iter *iter, bool atomic_write)

bool "is_atomic" or "is_atomic_write" perhaps? 
we anyway know that we only support atomic writes and RWF_ATOMIC
operation is made -EOPNOTSUPP for reads in kiocb_set_rw_flags().
So we may as well make it "is_atomic" for bools.

> +{
> +	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
> +		return true;
> +
>  	return pos & (bdev_logical_block_size(bdev) - 1) ||
>  		!bdev_iter_is_aligned(bdev, iter);
>  }
>  
> +
>  #define DIO_INLINE_BIO_VECS 4
>  
>  static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
> @@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>  	}
>  	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
>  	bio.bi_ioprio = iocb->ki_ioprio;
> +	if (iocb->ki_flags & IOCB_ATOMIC)
> +		bio.bi_opf |= REQ_ATOMIC;
>  
>  	ret = bio_iov_iter_get_pages(&bio, iter);
>  	if (unlikely(ret))
> @@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  		task_io_account_write(bio->bi_iter.bi_size);
>  	}
>  
> +	if (iocb->ki_flags & IOCB_ATOMIC)
> +		bio->bi_opf |= REQ_ATOMIC;
> +
>  	if (iocb->ki_flags & IOCB_NOWAIT)
>  		bio->bi_opf |= REQ_NOWAIT;
>  
> @@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  {
>  	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
> +	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;

ditto, bool is_atomic perhaps?

>  	loff_t pos = iocb->ki_pos;
>  	unsigned int nr_pages;
>  
>  	if (!iov_iter_count(iter))
>  		return 0;
>  
> -	if (blkdev_dio_unaligned(bdev, pos, iter))
> +	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
>  		return -EINVAL;
>  
>  	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
> @@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  		if (is_sync_kiocb(iocb))
>  			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
>  		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
> +	} else if (atomic_write) {
> +		return -EINVAL;
>  	}
>  	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
>  }
> @@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
>  	if (bdev_nowait(handle->bdev))
>  		filp->f_mode |= FMODE_NOWAIT;
>  
> +	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
> +		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
> +
>  	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
>  	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
>  	filp->private_data = handle;
> -- 
> 2.31.1
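
Worth noting for io_uring users: since rw->flags is taken from
sqe->rw_flags and fed to kiocb_set_rw_flags(), the same RWF_ATOMIC flag
applies there too. A hedged liburing sketch (fd/buffer setup as in the
earlier pwritev2 example is assumed):

	struct io_uring ring;
	struct io_uring_sqe *sqe;

	io_uring_queue_init(8, &ring, 0);
	sqe = io_uring_get_sqe(&ring);
	/* one 4 KiB atomic write at a naturally aligned offset */
	io_uring_prep_write(sqe, fd, buf, 4096, 0);
	sqe->rw_flags = RWF_ATOMIC;
	io_uring_submit(&ring);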

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-24 18:16   ` Ritesh Harjani
  2024-02-24 18:20     ` Ritesh Harjani
@ 2024-02-26  8:51     ` John Garry
  1 sibling, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  8:51 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

...

>>
>> Helper function atomic_write_valid() can be used by FSes to verify
>> compliant writes.
>>
>> Signed-off-by: Prasad Singamsetty <[email protected]>
>> #jpg: merge into single patch and much rewrite
> 
> ^^^ this might have been left in by mistake, I guess.

I'm not sure what you mean. Here I am just briefly commenting on the
changes which I made.

> 
>> Signed-off-by: John Garry <[email protected]>
>> ---
>>   fs/aio.c                |  8 ++++----
>>   fs/btrfs/ioctl.c        |  2 +-
>>   fs/read_write.c         |  2 +-
>>   include/linux/fs.h      | 36 +++++++++++++++++++++++++++++++++++-
>>   include/uapi/linux/fs.h |  5 ++++-
>>   io_uring/rw.c           |  4 ++--
>>   6 files changed, 47 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/aio.c b/fs/aio.c
>> index bb2ff48991f3..21bcbc076fd0 100644
>> --- a/fs/aio.c
>> +++ b/fs/aio.c
>> @@ -1502,7 +1502,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
>>   	iocb_put(iocb);
>>   }
>>   
>> -static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
>> +static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int type)
> 
> maybe rw_type?

ok

> 
>>   {
>>   	int ret;
>>   
>> @@ -1528,7 +1528,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
>>   	} else

...

>> +
>>   /* 32bit hashes as llseek() offset (for directories) */
>>   #define FMODE_32BITHASH         ((__force fmode_t)0x200)
>>   /* 64bit hashes as llseek() offset (for directories) */
>> @@ -328,6 +333,7 @@ enum rw_hint {
>>   #define IOCB_SYNC		(__force int) RWF_SYNC
>>   #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
>>   #define IOCB_APPEND		(__force int) RWF_APPEND
>> +#define IOCB_ATOMIC		(__force int) RWF_ATOMIC
>>   
> 
> You might also want to add this definition in here too
> 
> #define TRACE_IOCB_STRINGS \
> <...>
> <...>
> { IOCB_ATOMIC, "ATOMIC" }

ok

I suppose that the new flag RWF_NOAPPEND in linux-next should also have this

>>   
>> +static inline bool atomic_write_valid(loff_t pos, struct iov_iter *iter,
>> +			   unsigned int unit_min, unsigned int unit_max)
>> +{
>> +	size_t len = iov_iter_count(iter);
>> +
>> +	if (!iter_is_ubuf(iter))
>> +		return false;
> 
> The commit message of this patch does not mention this limitation. Maybe
> it would be good to capture in the commit message why only ubuf is
> supported, and/or any plans to lift this restriction in future.

ok, I can mention this in the commit message.

> 
> 
>> +
>> +	if (len == unit_min || len == unit_max) {
>> +		/* ok if exactly min or max */
>> +	} else if (len < unit_min || len > unit_max) {
>> +		return false;
>> +	} else if (!is_power_of_2(len)) {
>> +		return false;
>> +	}
> 
> Checking for len == unit_min || len == unit_max is redundant when
> unit_min and unit_max are already powers of 2.

Sure, but it was an optimization, considering that typically we will be
issuing unit_max in the anticipated FS scenario.

Anyway, I will be changing this according to an earlier comment.

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-24 18:20     ` Ritesh Harjani
@ 2024-02-26  8:58       ` John Garry
  2024-02-26  9:13         ` Ritesh Harjani
  0 siblings, 1 reply; 51+ messages in thread
From: John Garry @ 2024-02-26  8:58 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

On 24/02/2024 18:20, Ritesh Harjani (IBM) wrote:
>>> Helper function atomic_write_valid() can be used by FSes to verify
>>> compliant writes.
> Minor nit.
> maybe generic_atomic_write_valid()?

Having "generic" in the name implies that there are other ways in which 
we can check if an atomic write is valid, but really this function 
should be good to use in scenarios so far considered.

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 04/11] fs: Add initial atomic write support info to statx
  2024-02-24 18:46   ` Ritesh Harjani
@ 2024-02-26  9:07     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  9:07 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

On 24/02/2024 18:46, Ritesh Harjani (IBM) wrote:
> John Garry <[email protected]> writes:
> 
>> From: Prasad Singamsetty <[email protected]>
>>
>> Extend the statx system call to return additional info for atomic write
>> support for a file.
>>
>> Helper function generic_fill_statx_atomic_writes() can be used by FSes to
>> fill in the relevant statx fields.
>>
>> Signed-off-by: Prasad Singamsetty <[email protected]>
>> #jpg: relocate bdev support to another patch
> 
> ^^^ miss maybe?
>> Signed-off-by: John Garry <[email protected]>
>> ---
>>   fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
>>   include/linux/fs.h        |  3 +++
>>   include/linux/stat.h      |  3 +++
>>   include/uapi/linux/stat.h |  9 ++++++++-
>>   4 files changed, 48 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/stat.c b/fs/stat.c
>> index 77cdc69eb422..522787a4ab6a 100644
>> --- a/fs/stat.c
>> +++ b/fs/stat.c
>> @@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
>>   }
>>   EXPORT_SYMBOL(generic_fill_statx_attr);
>>   
>> +/**
>> + * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
>> + * @stat:	Where to fill in the attribute flags
>> + * @unit_min:	Minimum supported atomic write length
> + * @unit_min:	Minimum supported atomic write length in bytes
> 
> 
>> + * @unit_max:	Maximum supported atomic write length
> + * @unit_max:	Maximum supported atomic write length in bytes
> 
> mentioning the unit of the length might be useful here.

Yeah, I have already improved this as suggested.

> 
>> + *
>> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
>> + * atomic write unit_min and unit_max values.
>> + */
>> +void generic_fill_statx_atomic_writes(struct kstat *stat,
>> +				      unsigned int unit_min,
> 
> This (unit_min) can still go above in the same line.

ok

> 
>> +				      unsigned int unit_max)
>> +{
>> +	/* Confirm that the request type is known */
>> +	stat->result_mask |= STATX_WRITE_ATOMIC;
>> +
>> +	/* Confirm that the file attribute type is known */
>> +	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
>> +
>> +	if (unit_min) {
>> +		stat->atomic_write_unit_min = unit_min;
>> +		stat->atomic_write_unit_max = unit_max;
>> +		/* Initially only allow 1x segment */
>> +		stat->atomic_write_segments_max = 1;
> 
> Please add info to the commit message about where this limit came
> from?

ok

> Is it since we only support ubuf (which IIUC, only supports 1
> segment)? Later when we will add support for iovec, this limit can be
> lifted?

It's not that we only support ubuf, but rather we only support one 
segment and that gives a ubuf type iter.

This is all related to how we can guarantee that a unit_max advertised 
to userspace can always be written atomically.

This is further mentioned in the block layer patch.
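
To illustrate: a write covering a single segment is imported as a ubuf
iter, roughly like this sketch (ubuf/len are just placeholder names):

	/* Sketch: one segment/user buffer -> ITER_UBUF iter */
	struct iov_iter iter;

	import_ubuf(ITER_SOURCE, ubuf, len, &iter);
	/* iter_is_ubuf(&iter) now holds, per the check in patch 3 */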

> 
>> +
>> +		/* Confirm atomic writes are actually supported */
>> +		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
>> +	}
>> +}
>> +EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
>> +
>>   /**
>>    * vfs_getattr_nosec - getattr without security checks
>>    * @path: file to get attributes from
>> @@ -658,6 +689,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
>>   	tmp.stx_mnt_id = stat->mnt_id;
>>   	tmp.stx_dio_mem_align = stat->dio_mem_align;
>>   	tmp.stx_dio_offset_align = stat->dio_offset_align;
>> +	tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
>> +	tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
>> +	tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
>>   
>>   	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
>>   }
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index 7271640fd600..531140a7e27a 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -3167,6 +3167,9 @@ extern const struct inode_operations page_symlink_inode_operations;
>>   extern void kfree_link(void *);
>>   void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
>>   void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
>> +void generic_fill_statx_atomic_writes(struct kstat *stat,
>> +				      unsigned int unit_min,
>> +				      unsigned int unit_max);
> 
> We can stay within 80 col. width even with unit_min on the same first line as *stat.

ok, I can check this.

> 
> 
>>   extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
>>   extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
>>   void __inode_add_bytes(struct inode *inode, loff_t bytes);
>> diff --git a/include/linux/stat.h b/include/linux/stat.h
>> index 52150570d37a..2c5e2b8c6559 100644
>> --- a/include/linux/stat.h
>> +++ b/include/linux/stat.h
>> @@ -53,6 +53,9 @@ struct kstat {
>>   	u32		dio_mem_align;
>>   	u32		dio_offset_align;
>>   	u64		change_cookie;
>> +	u32		atomic_write_unit_min;
>> +	u32		atomic_write_unit_max;
>> +	u32		atomic_write_segments_max;
>>   };
>>   
>>   /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
>> diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
>> index 2f2ee82d5517..c0e8e10d1de6 100644
>> --- a/include/uapi/linux/stat.h
>> +++ b/include/uapi/linux/stat.h
>> @@ -127,7 +127,12 @@ struct statx {
>>   	__u32	stx_dio_mem_align;	/* Memory buffer alignment for direct I/O */
>>   	__u32	stx_dio_offset_align;	/* File offset alignment for direct I/O */
>>   	/* 0xa0 */
>> -	__u64	__spare3[12];	/* Spare space for future expansion */
>> +	__u32	stx_atomic_write_unit_min;
>> +	__u32	stx_atomic_write_unit_max;
>> +	__u32   stx_atomic_write_segments_max;
> 
> Let's add a one-liner for each of these fields, similar to how it was
> done for the others?
> 
> /* Minimum supported atomic write length in bytes */
> /* Maximum supported atomic write length in bytes */
> /* Maximum no. of segments (iovecs?) supported for atomic write */

ok

> 
> 
>> +	__u32   __spare1;
>> +	/* 0xb0 */
>> +	__u64	__spare3[10];	/* Spare space for future expansion */
>>   	/* 0x100 */
>>   };
>>   
>> @@ -155,6 +160,7 @@ struct statx {
>>   #define STATX_MNT_ID		0x00001000U	/* Got stx_mnt_id */
>>   #define STATX_DIOALIGN		0x00002000U	/* Want/got direct I/O alignment info */
>>   #define STATX_MNT_ID_UNIQUE	0x00004000U	/* Want/got extended stx_mount_id */
>> +#define STATX_WRITE_ATOMIC	0x00008000U	/* Want/got atomic_write_* fields */
>>   
>>   #define STATX__RESERVED		0x80000000U	/* Reserved for future struct statx expansion */
>>   
>> @@ -190,6 +196,7 @@ struct statx {
>>   #define STATX_ATTR_MOUNT_ROOT		0x00002000 /* Root of a mount */
>>   #define STATX_ATTR_VERITY		0x00100000 /* [I] Verity protected file */
>>   #define STATX_ATTR_DAX			0x00200000 /* File is currently in DAX state */
>> +#define STATX_ATTR_WRITE_ATOMIC		0x00400000 /* File supports atomic write operations */
>>   
>>   
>>   #endif /* _UAPI_LINUX_STAT_H */
>> -- 
>> 2.31.1
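
As an aside, userspace would query these new fields roughly as follows
(illustrative sketch only; the statx field and flag names are as defined
in this patch):

	/* Sketch: read a file's atomic write limits via statx(2) */
	struct statx stx;

	if (statx(AT_FDCWD, path, 0, STATX_WRITE_ATOMIC, &stx) == 0 &&
	    (stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC))
		printf("atomic unit min/max %u/%u, max segments %u\n",
		       stx.stx_atomic_write_unit_min,
		       stx.stx_atomic_write_unit_max,
		       stx.stx_atomic_write_segments_max);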

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-26  8:58       ` John Garry
@ 2024-02-26  9:13         ` Ritesh Harjani
  2024-02-26  9:46           ` John Garry
  0 siblings, 1 reply; 51+ messages in thread
From: Ritesh Harjani @ 2024-02-26  9:13 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

John Garry <[email protected]> writes:

> On 24/02/2024 18:20, Ritesh Harjani (IBM) wrote:
>>>> Helper function atomic_write_valid() can be used by FSes to verify
>>>> compliant writes.
>> Minor nit.
>> maybe generic_atomic_write_valid()?
>
> Having "generic" in the name implies that there are other ways in which 
> we can check if an atomic write is valid, but really this function 
> should be good to use in scenarios so far considered.

It means an individual FS can call into a generic atomic write
validation helper instead of implementing its own. Hence
generic_atomic_write_valid().

So, for example, the blkdev_atomic_write_valid() function, and maybe
iomap (or ext4 or xfs), can use a generic_atomic_write_valid() helper
routine to validate an atomic write request.

-ritesh

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 05/11] block: Add core atomic write support
  2024-02-25 12:09   ` Ritesh Harjani
  2024-02-25 12:21     ` Ritesh Harjani
@ 2024-02-26  9:23     ` John Garry
  1 sibling, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  9:23 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay

On 25/02/2024 12:09, Ritesh Harjani (IBM) wrote:
> John Garry <[email protected]> writes:
> 
>> Add atomic write support as follows:
>> - report request_queue atomic write support limits to sysfs and update Doc
>> - add helper functions to get request_queue atomic write limits
>> - support to safely merge atomic writes
>> - add a per-request atomic write flag
>> - deal with splitting atomic writes
>> - misc helper functions
>>
>> New sysfs files are added to report the following atomic write limits:
>> - atomic_write_boundary_bytes
>> - atomic_write_max_bytes
>> - atomic_write_unit_max_bytes
>> - atomic_write_unit_min_bytes
>>
>> atomic_write_unit_{min,max}_bytes report the min and max atomic write
>> support size, inclusive, and are primarily dictated by HW capability. Both
>> values must be a power-of-2. atomic_write_boundary_bytes, if non-zero,
>> indicates an LBA space boundary; an atomic write which straddles it is no
>> longer executed atomically by the disk. atomic_write_max_bytes is the
>> maximum merged size for an atomic write. Often it will be the same value as
>> atomic_write_unit_max_bytes.
> 
> Instead of explaining sysfs outputs which are derivatives of HW
> and request_queue limits (and also defined in Documentation), maybe we
> could explain how those sysfs values are derived -
> 
> struct queue_limits {
> <...>
> 	unsigned int		atomic_write_hw_max_sectors;
> 	unsigned int		atomic_write_max_sectors;
> 	unsigned int		atomic_write_hw_boundary_sectors;
> 	unsigned int		atomic_write_hw_unit_min_sectors;
> 	unsigned int		atomic_write_unit_min_sectors;
> 	unsigned int		atomic_write_hw_unit_max_sectors;
> 	unsigned int		atomic_write_unit_max_sectors;
> <...>
> 
> 1. atomic_write_hw_max_sectors comes directly from hw and it need
> not be a power of 2.
> 
> 2. atomic_write_hw_unit_min_sectors and atomic_write_hw_unit_max_sectors
> are again defined/derived from hw limits, but rounded down so that
> they are always a power of 2.
> 
> 3. atomic_write_hw_boundary_sectors again comes from HW boundary limit.
> It could either be 0 (which means the device specifies no boundary limit) or a
> multiple of unit_max. It need not be a power of 2; however, the current
> code assumes it to be a power of 2 (check callers of blk_queue_atomic_write_boundary_bytes())
> 
> 4. atomic_write_max_sectors, atomic_write_unit_min_sectors
> and atomic_write_unit_max_sectors are all derived out of above hw limits
> inside function blk_atomic_writes_update_limits() based on request_queue
> limits.
>      a. atomic_write_max_sectors is derived from atomic_write_hw_unit_max_sectors and
>         request_queue's max_hw_sectors limit. It also guarantees max
>         sectors that can be fit in a single bio.
>      b. atomic_write_unit_[min|max]_sectors are derived from atomic_write_hw_unit_[min|max]_sectors,
>         request_queue's max_hw_sectors & blk_queue_max_guaranteed_bio_sectors(). Both of these limits
>         are kept as a power of 2.
> 
> Now coming to sysfs outputs -
> 1. atomic_write_unit_max_bytes: Same as atomic_write_unit_max_sectors in bytes
> 2. atomic_write_unit_min_bytes: Same as atomic_write_unit_min_sectors in bytes
> 3. atomic_write_boundary_bytes: same as atomic_write_hw_boundary_sectors
> in bytes
> 4. atomic_write_max_bytes: Same as atomic_write_max_sectors in bytes
> 

ok, I can look to incorporate the advised formatting changes

>>
>> atomic_write_unit_max_bytes is capped at the maximum data size which we are
>> guaranteed to be able to fit in a BIO, as an atomic write must always be
>> submitted as a single BIO. This BIO max size is dictated by the number of
> 
> Here it says that the atomic write must always be submitted as a single
> bio. From where to where?

submitted to the block layer/core

> I think you meant from FS to block layer.

sure, or also block device file operations (in fops.c) to block core

> Because otherwise we still allow request/bio merging inside block layer
> based on the request queue limits we defined above. i.e. bio can be
> chained to form
>        rq->biotail->bi_next = next_rq->bio
> as long as the merged request is within the queue_limits.
> 
> i.e. atomic write requests can be merged as long as -
>      - both rqs have REQ_ATOMIC set
>      - blk_rq_sectors(final_rq) <= q->limits.atomic_write_max_sectors
>      - final rq formed should not straddle limits->atomic_write_hw_boundary_sectors
> 
> However, splitting of an atomic write request is not allowed. And if it
> happens, we fail the I/O req & return -EINVAL.

...

> 
> IMHO, the commit message can definitely use a re-write. I agree that you
> have put in a lot of information, but I think it can be more organized.

ok, fine. I'll look at this. Thanks.

> 
>>
>> Contains significant contributions from:
>> Himanshu Madhani <[email protected]>
> 
> Maybe it can use a better tag, then.
> "Documentation/process/submitting-patches.rst"

ok

> 
>>
>> Signed-off-by: John Garry <[email protected]>
>> ---
>>   Documentation/ABI/stable/sysfs-block |  52 ++++++++++++++
>>   block/blk-merge.c                    |  91 ++++++++++++++++++++++-
>>   block/blk-settings.c                 | 103 +++++++++++++++++++++++++++
>>   block/blk-sysfs.c                    |  33 +++++++++
>>   block/blk.h                          |   3 +
>>   include/linux/blk_types.h            |   2 +
>>   include/linux/blkdev.h               |  60 ++++++++++++++++
>>   7 files changed, 343 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
>> index 1fe9a553c37b..4c775f4bdefe 100644
>> --- a/Documentation/ABI/stable/sysfs-block
>> +++ b/Documentation/ABI/stable/sysfs-block
>> @@ -21,6 +21,58 @@ Description:
>>   		device is offset from the internal allocation unit's
>>   		natural alignment.

...

>>   
> 
> /* A comment explaining this function and arguments could be helpful */

already addressed according to earlier review

> 
>> +static bool rq_straddles_atomic_write_boundary(struct request *rq,
>> +					unsigned int front,
>> +					unsigned int back)
> 
> A better naming would perhaps be start_adjust, end_adjust?

ok

> 
>> +{
>> +	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>> +	unsigned int mask, imask;
>> +	loff_t start, end;
> 
> start_rq_pos, end_rq_pos maybe?

ok

> 
>> +
>> +	if (!boundary)
>> +		return false;
>> +
>> +	start = rq->__sector << SECTOR_SHIFT;
> 
> blk_rq_pos(rq) perhaps?

ok

> 
>> +	end = start + rq->__data_len;
> 
> blk_rq_bytes(rq) perhaps? It should be..

ok

>> +
>> +	start -= front;
>> +	end += back;
>> +
>> +	/* We're longer than the boundary, so must be crossing it */
>> +	if (end - start > boundary)
>> +		return true;
>> +
>> +	mask = boundary - 1;
>> +
>> +	/* start/end are boundary-aligned, so cannot be crossing */
>> +	if (!(start & mask) || !(end & mask))
>> +		return false;
>> +
>> +	imask = ~mask;
>> +
>> +	/* Top bits are different, so crossed a boundary */
>> +	if ((start & imask) != (end & imask))
>> +		return true;
> 
> The last condition looks wrong. Shouldn't it be end - 1?
> 
>> +
>> +	return false;
>> +}
> 
> Can we do something like this?
> 
> static bool rq_straddles_atomic_write_boundary(struct request *rq,
> 					       unsigned int start_adjust,
> 					       unsigned int end_adjust)
> {
> 	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
> 	unsigned long boundary_mask;
> 	unsigned long start_rq_pos, end_rq_pos;
> 
> 	if (!boundary)
> 		return false;
> 
> 	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
> 	end_rq_pos = start_rq_pos + blk_rq_bytes(rq);
> 
> 	start_rq_pos -= start_adjust;
> 	end_rq_pos += end_adjust;
> 
> 	boundary_mask = boundary - 1;
> 
> 	if ((start_rq_pos | boundary_mask) != (end_rq_pos | boundary_mask))
> 		return true;
> 
> 	return false;
> }
> 
> I was thinking this check should cover all cases? Thoughts?

that looks ok (apart from the issue already detected later). It is quite 
similar to how I coded it in the NVMe driver, apart from the initial 
"length > boundary" check.
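
To spell out the "end - 1" fix raised above, the comparison would become
something like this sketch:

	/* end_rq_pos is one past the last byte, so compare end - 1 */
	if ((start_rq_pos | boundary_mask) !=
	    ((end_rq_pos - 1) | boundary_mask))
		return true;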

>> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
>> index f288c94374b3..cd7cceb8565d 100644
>> --- a/include/linux/blk_types.h
>> +++ b/include/linux/blk_types.h
>> @@ -422,6 +422,7 @@ enum req_flag_bits {
>>   	__REQ_DRV,		/* for driver use */
>>   	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
>>
>> +	__REQ_ATOMIC,		/* for atomic write operations */
>>   	/*
>>   	 * Command specific flags, keep last:
>>   	 */
>> @@ -448,6 +449,7 @@ enum req_flag_bits {
>>   #define REQ_RAHEAD	(__force blk_opf_t)(1ULL << __REQ_RAHEAD)
>>   #define REQ_BACKGROUND	(__force blk_opf_t)(1ULL << __REQ_BACKGROUND)
>>   #define REQ_NOWAIT	(__force blk_opf_t)(1ULL << __REQ_NOWAIT)
>> +#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
> 
> Let's add this in the same order as of __REQ_ATOMIC i.e. after
> REQ_FS_PRIVATE macro

ok, fine

>> @@ -299,6 +299,14 @@ struct queue_limits {
>>   	unsigned int		discard_alignment;
>>   	unsigned int		zone_write_granularity;
>>
>> +	unsigned int		atomic_write_hw_max_sectors;
>> +	unsigned int		atomic_write_max_sectors;
>> +	unsigned int		atomic_write_hw_boundary_sectors;
>> +	unsigned int		atomic_write_hw_unit_min_sectors;
>> +	unsigned int		atomic_write_unit_min_sectors;
>> +	unsigned int		atomic_write_hw_unit_max_sectors;
>> +	unsigned int		atomic_write_unit_max_sectors;
>> +
> A one-liner comment for each of the above members, please?

ok


>> +static inline bool bdev_can_atomic_write(struct block_device *bdev)
>> +{
>> +	struct request_queue *bd_queue = bdev->bd_queue;
>> +	struct queue_limits *limits = &bd_queue->limits;
>> +
>> +	if (!limits->atomic_write_unit_min_sectors)
>> +		return false;
>> +
>> +	if (bdev_is_partition(bdev)) {
>> +		sector_t bd_start_sect = bdev->bd_start_sect;
>> +		unsigned int granularity = max(
> 
> atomic_align perhaps?

or just "align"

> 
>> +				limits->atomic_write_unit_min_sectors,
>> +				limits->atomic_write_hw_boundary_sectors);
>> +		if (do_div(bd_start_sect, granularity))
>> +			return false;
>> +	}
> 
> Since atomic_align is a power of 2, why not use IS_ALIGNED()
> (a bitwise operation instead of a div)?

already changed as advised
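
i.e. something like this sketch of the updated check:

	if (bdev_is_partition(bdev)) {
		unsigned int align = max(
				limits->atomic_write_unit_min_sectors,
				limits->atomic_write_hw_boundary_sectors);

		/* align is a power-of-2, so a bitwise check beats do_div() */
		if (!IS_ALIGNED(bdev->bd_start_sect, align))
			return false;
	}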

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 06/11] block: Add atomic write support for statx
  2024-02-25 14:20   ` Ritesh Harjani
@ 2024-02-26  9:36     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  9:36 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

On 25/02/2024 14:20, Ritesh Harjani (IBM) wrote:
> John Garry <[email protected]> writes:
> 
>> From: Prasad Singamsetty <[email protected]>
>>
>> Extend the statx system call to return additional info for atomic write
>> support if the specified file is a block device.
>>
>> Signed-off-by: Prasad Singamsetty <[email protected]>
>> Signed-off-by: John Garry <[email protected]>
>> ---
>>   block/bdev.c           | 37 +++++++++++++++++++++++++++----------
>>   fs/stat.c              | 13 ++++++-------
>>   include/linux/blkdev.h |  5 +++--
>>   3 files changed, 36 insertions(+), 19 deletions(-)
>>
>> diff --git a/block/bdev.c b/block/bdev.c
>> index e9f1b12bd75c..0dada9902bd4 100644
>> --- a/block/bdev.c
>> +++ b/block/bdev.c
>> @@ -1116,24 +1116,41 @@ void sync_bdevs(bool wait)
>>   	iput(old_inode);
>>   }
>>   
>> +#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
>> +
>>   /*
>> - * Handle STATX_DIOALIGN for block devices.
>> - *
>> - * Note that the inode passed to this is the inode of a block device node file,
>> - * not the block device's internal inode.  Therefore it is *not* valid to use
>> - * I_BDEV() here; the block device has to be looked up by i_rdev instead.
>> + * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
>>    */
>> -void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
>> +void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask)
> 
> why change this to dentry? Why not keep it as inode itself?

I suppose that I could do that.

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 07/11] block: Add fops atomic write support
  2024-02-25 14:46   ` Ritesh Harjani
@ 2024-02-26  9:46     ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  9:46 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay

On 25/02/2024 14:46, Ritesh Harjani (IBM) wrote:
> John Garry <[email protected]> writes:
> 
>> Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.
>>
>> It must be ensured that the atomic write adheres to its rules, like
>> naturally aligned offset, so call blkdev_dio_invalid() ->
>> blkdev_atomic_write_valid() [with renaming blkdev_dio_unaligned() to
>> blkdev_dio_invalid()] for this purpose.
>>
>> In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
>> produce a single BIO, so error in this case.
> 
> BIO_MAX_VECS is 256. So around 1MB limit with 4k pagesize.
> Any mention of why this limit for now? Is it due to code complexity that
> we only support a single bio?

The reason is that lifting this limit adds extra complexity and I don't 
see any HW out there which supports a larger atomic write unit yet. And 
even if there were HW which supports this larger size, is there a 
use case for a larger atomic write unit?


Nilay reports awupf = 63 for his controller:

# lspci
0040:01:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 
0025 (rev 01)

# nvme id-ctrl /dev/nvme0 -H
NVME Identify Controller:
vid       : 0x1e0f
ssvid     : 0x1014
sn        : Z130A00LTGZ8
mn        : 800GB NVMe Gen4 U.2 SSD
fr        : REV.C9S2
[...]
awun      : 65535
awupf     : 63
[...]


And the SCSI device I know of which supports atomic writes can only 
handle a 32KB max.

> As I see it, you have still enabled req merging in the block layer for
> atomic requests. So it can essentially submit bio chains to the device
> driver? So why not support this case, allowing the user to submit a req.
> larger than 1 MB?

Indeed, we could try to lift this limit and submit larger bios or chains 
of bios for a single atomic write from userspace, but do we need it now?

Please also remember that we are always limited by the request queue DMA 
capabilities.
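
For context, an application issues such a write with RWF_ATOMIC roughly
as follows (sketch only; fd/buf/len/pos are placeholders, len must be a
power-of-2 within [unit_min, unit_max], and pos must be naturally
aligned to len):

	/* Sketch: a single atomic write of len bytes at offset pos */
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	ssize_t ret = pwritev2(fd, &iov, 1, pos, RWF_ATOMIC);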

> 
>>
>> Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
>> and the associated file flag is for O_DIRECT.
>>
>> Signed-off-by: John Garry <[email protected]>
>> ---
>>   block/fops.c | 31 ++++++++++++++++++++++++++++---
>>   1 file changed, 28 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/fops.c b/block/fops.c
>> index 28382b4d097a..563189c2fc5a 100644
>> --- a/block/fops.c
>> +++ b/block/fops.c
>> @@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
>>   	return opf;
>>   }
>>   
>> -static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
>> -			      struct iov_iter *iter)
>> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
>> +				      struct iov_iter *iter)
>>   {
>> +	struct request_queue *q = bdev_get_queue(bdev);
>> +	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
>> +	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
>> +
>> +	return atomic_write_valid(pos, iter, min_bytes, max_bytes);
> 
> generic_atomic_write_valid() would be better for this function. However,
> I have already commented about this in a previous mail.

ok

> 
>> +}
>> +
>> +static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
>> +				struct iov_iter *iter, bool atomic_write)
> 
> bool "is_atomic" or "is_atomic_write" perhaps?
> we anyway know that we only support atomic writes and RWF_ATOMIC
> operation is made -EOPNOTSUPP for reads in kiocb_set_rw_flags().
> So we may as well make it "is_atomic" for bools.

ok
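
i.e. with the rename applied, something like this sketch:

static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
			       struct iov_iter *iter, bool is_atomic)
{
	if (is_atomic && !blkdev_atomic_write_valid(bdev, pos, iter))
		return true;

	return pos & (bdev_logical_block_size(bdev) - 1) ||
		!bdev_iter_is_aligned(bdev, iter);
}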

> 
>> +{
>> +	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
>> +		return true;
>> +
>>   	return pos & (bdev_logical_block_size(bdev) - 1) ||
>>   		!bdev_iter_is_aligned(bdev, iter);
>>   }
>>   
>> +
>>   #define DIO_INLINE_BIO_VECS 4
>>   
>>   static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>> @@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>>   	}
>>   	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
>>   	bio.bi_ioprio = iocb->ki_ioprio;
>> +	if (iocb->ki_flags & IOCB_ATOMIC)
>> +		bio.bi_opf |= REQ_ATOMIC;
>>   
>>   	ret = bio_iov_iter_get_pages(&bio, iter);
>>   	if (unlikely(ret))
>> @@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>   		task_io_account_write(bio->bi_iter.bi_size);
>>   	}
>>   
>> +	if (iocb->ki_flags & IOCB_ATOMIC)
>> +		bio->bi_opf |= REQ_ATOMIC;
>> +
>>   	if (iocb->ki_flags & IOCB_NOWAIT)
>>   		bio->bi_opf |= REQ_NOWAIT;
>>   
>> @@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>   static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>>   {
>>   	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
>> +	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;
> 
> ditto, bool is_atomic perhaps?

ok

> 
>>   	loff_t pos = iocb->ki_pos;
>>   	unsigned int nr_pages;
>>   
>>   	if (!iov_iter_count(iter))
>>   		return 0;
>>   
>> -	if (blkdev_dio_unaligned(bdev, pos, iter))
>> +	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
>>   		return -EINVAL;
>>   
>>   	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
>> @@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>>   		if (is_sync_kiocb(iocb))
>>   			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
>>   		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
>> +	} else if (atomic_write) {
>> +		return -EINVAL;
>>   	}
>>   	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
>>   }
>> @@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
>>   	if (bdev_nowait(handle->bdev))
>>   		filp->f_mode |= FMODE_NOWAIT;
>>   
>> +	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
>> +		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
>> +
>>   	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
>>   	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
>>   	filp->private_data = handle;
>> -- 
>> 2.31.1

Thanks,
John


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 03/11] fs: Initial atomic write support
  2024-02-26  9:13         ` Ritesh Harjani
@ 2024-02-26  9:46           ` John Garry
  0 siblings, 0 replies; 51+ messages in thread
From: John Garry @ 2024-02-26  9:46 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, Prasad Singamsetty

On 26/02/2024 09:13, Ritesh Harjani (IBM) wrote:
>>> maybe generic_atomic_write_valid()?
>> Having "generic" in the name implies that there are other ways in which
>> we can check if an atomic write is valid, but really this function
>> should be good to use in scenarios so far considered.
> It means individual FS can call in a generic atomic write validation
> helper instead of implementing of their own. Hence generic_atomic_write_valid().
> 
> So for e.g. blkdev_atomic_write_valid() function and maybe iomap (or
> ext4 or xfs) can use a generic_atomic_write_valid() helper routine to
> validate an atomic write request.

ok, fine, I can change the name as suggested.
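
With the rename, the block device caller would then look something like
this sketch (limits helpers as used in the fops patch):

static bool blkdev_atomic_write_valid(struct block_device *bdev,
				      loff_t pos, struct iov_iter *iter)
{
	struct request_queue *q = bdev_get_queue(bdev);

	return generic_atomic_write_valid(pos, iter,
			queue_atomic_write_unit_min_bytes(q),
			queue_atomic_write_unit_max_bytes(q));
}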

Thanks,
John

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2024-02-26  9:47 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-19 13:00 [PATCH v4 00/11] block atomic writes John Garry
2024-02-19 13:00 ` [PATCH v4 01/11] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
2024-02-19 18:57   ` Keith Busch
2024-02-19 13:01 ` [PATCH v4 02/11] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
2024-02-19 18:57   ` Keith Busch
2024-02-20  8:31     ` John Garry
2024-02-20  6:54   ` Christoph Hellwig
2024-02-19 13:01 ` [PATCH v4 03/11] fs: Initial atomic write support John Garry
2024-02-19 19:16   ` David Sterba
2024-02-20  8:13     ` John Garry
2024-02-19 22:44   ` Dave Chinner
2024-02-20  9:52     ` John Garry
2024-02-24 18:16   ` Ritesh Harjani
2024-02-24 18:20     ` Ritesh Harjani
2024-02-26  8:58       ` John Garry
2024-02-26  9:13         ` Ritesh Harjani
2024-02-26  9:46           ` John Garry
2024-02-26  8:51     ` John Garry
2024-02-19 13:01 ` [PATCH v4 04/11] fs: Add initial atomic write support info to statx John Garry
2024-02-19 22:28   ` Dave Chinner
2024-02-20  9:40     ` John Garry
2024-02-20  8:20   ` Christoph Hellwig
2024-02-20  9:01     ` John Garry
2024-02-24 18:46   ` Ritesh Harjani
2024-02-26  9:07     ` John Garry
2024-02-19 13:01 ` [PATCH v4 05/11] block: Add core atomic write support John Garry
2024-02-19 22:58   ` Dave Chinner
2024-02-20  8:22     ` Christoph Hellwig
2024-02-20 10:01     ` John Garry
2024-02-25 12:09   ` Ritesh Harjani
2024-02-25 12:21     ` Ritesh Harjani
2024-02-26  9:23     ` John Garry
2024-02-19 13:01 ` [PATCH v4 06/11] block: Add atomic write support for statx John Garry
2024-02-20  8:29   ` Christoph Hellwig
2024-02-20  9:35     ` John Garry
2024-02-25 14:20   ` Ritesh Harjani
2024-02-26  9:36     ` John Garry
2024-02-19 13:01 ` [PATCH v4 07/11] block: Add fops atomic write support John Garry
2024-02-25 14:46   ` Ritesh Harjani
2024-02-26  9:46     ` John Garry
2024-02-19 13:01 ` [PATCH v4 08/11] scsi: sd: Atomic " John Garry
2024-02-19 13:01 ` [PATCH v4 09/11] scsi: scsi_debug: " John Garry
2024-02-20  7:12   ` Ojaswin Mujoo
2024-02-20  9:01     ` John Garry
2024-02-19 13:01 ` [PATCH v4 10/11] nvme: " John Garry
2024-02-19 19:21   ` Keith Busch
2024-02-20  6:55     ` Christoph Hellwig
2024-02-20  8:19       ` John Garry
2024-02-20  8:31   ` Christoph Hellwig
2024-02-20  8:50     ` John Garry
2024-02-19 13:01 ` [PATCH v4 11/11] nvme: Ensure atomic writes will be executed atomically John Garry

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox