public inbox for [email protected]
* [PATCH v6 00/10] block atomic writes
@ 2024-03-26 13:38 John Garry
  2024-03-26 13:38 ` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
                   ` (10 more replies)
  0 siblings, 11 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

This series introduces a proposal for implementing atomic writes in the
kernel, to provide torn-write protection.

This series takes the approach of adding a new "atomic" flag to each of
pwritev2() and iocb->ki_flags - RWF_ATOMIC and IOCB_ATOMIC, respectively.
When set, these indicate that we want the write issued "atomically".

Only direct IO to block devices is supported here. For this, atomic write
HW support is required, such as SCSI WRITE ATOMIC (16).

XFS FS support has previously been posted at:
https://lore.kernel.org/linux-xfs/[email protected]/

I am working on a new version of that series, which I hope to post soon.

Updated man pages have been posted at:
https://lore.kernel.org/lkml/[email protected]/T/#m520dca97a9748de352b5a723d3155a4bb1e46456

The goal here is to provide an interface that allows applications to use
application-specific block sizes larger than the logical block size
reported by the storage device or larger than the filesystem block size as
reported by stat().

With this new interface, application blocks will never be torn or
fractured when written. On a power failure, for each individual application
block, either all or none of the data will be written. A read racing with
an atomic write will see either all the old data or all the new data, but
never a mix of old and new.

Three new fields are added to struct statx - atomic_write_unit_min,
atomic_write_unit_max, and atomic_write_segments_max. For each individual
atomic write, the total length must be a power-of-2 between
atomic_write_unit_min and atomic_write_unit_max, inclusive. The write must
also be at a naturally-aligned offset in the file with respect to the
write length. For pwritev2(), iovcnt is limited by
atomic_write_segments_max. A usage sketch follows.
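
As an illustration (not part of this series), a userspace caller might use
the interface as below. This sketch assumes headers carrying the RWF_ATOMIC
and STATX_WRITE_ATOMIC definitions from this series, and omits error
handling:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdlib.h>
  #include <sys/stat.h>
  #include <sys/uio.h>

  int main(void)
  {
          struct statx stx;
          struct iovec iov;
          void *buf;
          int fd;

          fd = open("/dev/sda", O_WRONLY | O_DIRECT);
          statx(AT_FDCWD, "/dev/sda", 0, STATX_WRITE_ATOMIC, &stx);

          /* direct IO alignment rules apply to the buffer */
          posix_memalign(&buf, 4096, 8192);
          iov.iov_base = buf;
          iov.iov_len = 8192;     /* a power-of-2 */

          /* offset 16384 is naturally aligned to the 8192 length */
          if (stx.stx_atomic_write_unit_min <= 8192 &&
              stx.stx_atomic_write_unit_max >= 8192)
                  pwritev2(fd, &iov, 1, 16384, RWF_ATOMIC);
          return 0;
  }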

There has been some discussion on supporting buffered IO and whether the
API is suitable, like:
https://lore.kernel.org/linux-nvme/[email protected]/

Specifically, the concern is that supporting a range of atomic IO sizes in
the pagecache is complex. For this, my idea is that FSes can fix
atomic_write_unit_min and atomic_write_unit_max at the same size, the
extent alignment size, which should be easier to support. We may need to
implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
have no proposed solution for atomic write buffered IO for bdev file
operations, but I know of no requirement for it.

Kernel support is added for SCSI (sd.c and scsi_debug) and NVMe.

This series is based on v6.9-rc1.

Patches can be found at:
https://github.com/johnpgarry/linux/commits/atomic-writes-v6.9-v6

Changes since v5:
- Rebase and update NVMe support for new request_queue limits API
  - Keith, please check since I still have your RB tag
- Change request_queue limits to byte-based sizes to suit new queue limits
  API
- Pass rw_type to io_uring io_rw_init_file() (Jens)
- Add BLK_STS_INVAL
- Don't check size in generic_atomic_write_valid()

Alan Adamson (1):
  nvme: Atomic write support

John Garry (6):
  block: Pass blk_queue_get_max_sectors() a request pointer
  block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  block: Add core atomic write support
  block: Add fops atomic write support
  scsi: sd: Atomic write support
  scsi: scsi_debug: Atomic write support

Prasad Singamsetty (3):
  fs: Initial atomic write support
  fs: Add initial atomic write support info to statx
  block: Add atomic write support for statx

 Documentation/ABI/stable/sysfs-block |  52 +++
 block/bdev.c                         |  36 +-
 block/blk-core.c                     |  19 +
 block/blk-merge.c                    |  98 ++++-
 block/blk-mq.c                       |   2 +-
 block/blk-settings.c                 | 109 +++++
 block/blk-sysfs.c                    |  33 ++
 block/blk.h                          |   9 +-
 block/fops.c                         |  47 ++-
 drivers/nvme/host/core.c             |  49 +++
 drivers/scsi/scsi_debug.c            | 588 +++++++++++++++++++++------
 drivers/scsi/scsi_trace.c            |  22 +
 drivers/scsi/sd.c                    |  93 ++++-
 drivers/scsi/sd.h                    |   8 +
 fs/aio.c                             |   8 +-
 fs/btrfs/ioctl.c                     |   2 +-
 fs/read_write.c                      |   2 +-
 fs/stat.c                            |  50 ++-
 include/linux/blk_types.h            |   8 +-
 include/linux/blkdev.h               |  67 ++-
 include/linux/fs.h                   |  36 +-
 include/linux/stat.h                 |   3 +
 include/scsi/scsi_proto.h            |   1 +
 include/trace/events/scsi.h          |   1 +
 include/uapi/linux/fs.h              |   5 +-
 include/uapi/linux/stat.h            |   9 +-
 io_uring/rw.c                        |   8 +-
 27 files changed, 1173 insertions(+), 192 deletions(-)

-- 
2.31.1



* [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-04-10 22:58   ` Luis Chamberlain
  2024-03-26 13:38 ` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

Currently blk_queue_get_max_sectors() is passed an enum req_op. In future,
the value returned from blk_queue_get_max_sectors() may depend on certain
request flags, so pass a request pointer instead.

Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
 block/blk-merge.c | 3 ++-
 block/blk-mq.c    | 2 +-
 block/blk.h       | 6 ++++--
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2a06fd33039d..6f9d9ca7922b 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -592,7 +592,8 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	if (blk_rq_is_passthrough(rq))
 		return q->limits.max_hw_sectors;
 
-	max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	max_sectors = blk_queue_get_max_sectors(rq);
+
 	if (!q->limits.chunk_sectors ||
 	    req_op(rq) == REQ_OP_DISCARD ||
 	    req_op(rq) == REQ_OP_SECURE_ERASE)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 555ada922cf0..5428ca17add8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3042,7 +3042,7 @@ void blk_mq_submit_bio(struct bio *bio)
 blk_status_t blk_insert_cloned_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	unsigned int max_sectors = blk_queue_get_max_sectors(rq);
 	unsigned int max_segments = blk_rq_get_max_segments(rq);
 	blk_status_t ret;
 
diff --git a/block/blk.h b/block/blk.h
index 5cac4e29ae17..dc2fa6f88adc 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -180,9 +180,11 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
 	return queue_max_segments(rq->q);
 }
 
-static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-						     enum req_op op)
+static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 {
+	struct request_queue *q = rq->q;
+	enum req_op op = req_op(rq);
+
 	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
-- 
2.31.1



* [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
  2024-03-26 13:38 ` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-04-10 22:53   ` Luis Chamberlain
  2024-03-26 13:38 ` [PATCH v6 03/10] fs: Initial atomic write support John Garry
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

blkdev_dio_unaligned() is called from __blkdev_direct_IO(),
__blkdev_direct_IO_simple(), and __blkdev_direct_IO_async(), and all these
are only called from blkdev_direct_IO().

Move the blkdev_dio_unaligned() call to the common callsite,
blkdev_direct_IO().

Pass those functions the bdev pointer from blkdev_direct_IO() as it is non-
trivial to calculate.

Reviewed-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
 block/fops.c | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/block/fops.c b/block/fops.c
index 679d9b752fe8..c091ea43bca3 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -44,18 +44,15 @@ static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
 #define DIO_INLINE_BIO_VECS 4
 
 static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
-		struct iov_iter *iter, unsigned int nr_pages)
+		struct iov_iter *iter, struct block_device *bdev,
+		unsigned int nr_pages)
 {
-	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *vecs;
 	loff_t pos = iocb->ki_pos;
 	bool should_dirty = false;
 	struct bio bio;
 	ssize_t ret;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (nr_pages <= DIO_INLINE_BIO_VECS)
 		vecs = inline_vecs;
 	else {
@@ -161,9 +158,8 @@ static void blkdev_bio_end_io(struct bio *bio)
 }
 
 static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
-		unsigned int nr_pages)
+		struct block_device *bdev, unsigned int nr_pages)
 {
-	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	struct blk_plug plug;
 	struct blkdev_dio *dio;
 	struct bio *bio;
@@ -172,9 +168,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
 		opf |= REQ_ALLOC_CACHE;
 	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
@@ -302,9 +295,9 @@ static void blkdev_bio_end_io_async(struct bio *bio)
 
 static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 					struct iov_iter *iter,
+					struct block_device *bdev,
 					unsigned int nr_pages)
 {
-	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	bool is_read = iov_iter_rw(iter) == READ;
 	blk_opf_t opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
 	struct blkdev_dio *dio;
@@ -312,9 +305,6 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
-		return -EINVAL;
-
 	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
 		opf |= REQ_ALLOC_CACHE;
 	bio = bio_alloc_bioset(bdev, nr_pages, opf, GFP_KERNEL,
@@ -368,18 +358,23 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 
 static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	unsigned int nr_pages;
 
 	if (!iov_iter_count(iter))
 		return 0;
 
+	if (blkdev_dio_unaligned(bdev, iocb->ki_pos, iter))
+		return -EINVAL;
+
 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
 	if (likely(nr_pages <= BIO_MAX_VECS)) {
 		if (is_sync_kiocb(iocb))
-			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
-		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
+			return __blkdev_direct_IO_simple(iocb, iter, bdev,
+							nr_pages);
+		return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
 	}
-	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
+	return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
 }
 
 static int blkdev_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
-- 
2.31.1



* [PATCH v6 03/10] fs: Initial atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
  2024-03-26 13:38 ` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
  2024-03-26 13:38 ` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 04/10] fs: Add initial atomic write support info to statx John Garry
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

An atomic write is a write issued with torn-write protection, meaning
that on a power failure or any other hardware failure, either all or none
of the data from the write will be stored, but never a mix of old and new
data.

Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
write is to be issued with torn-write prevention, according to special
alignment and length rules.

For any syscall interface utilizing struct iocb, add IOCB_ATOMIC for
iocb->ki_flags field to indicate the same.

A call to statx will give the relevant atomic write info for a file:
- atomic_write_unit_min
- atomic_write_unit_max
- atomic_write_segments_max

Both min and max values must be a power-of-2.

Applications can avail of the atomic write feature by ensuring that the
total length of a write is a power-of-2 in size, between
atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
must also ensure that the write is at a naturally-aligned offset in the
file with respect to the total write length. The value in
atomic_write_segments_max indicates the upper limit for the iov_iter
iovcnt.

Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
flag set will have RWF_ATOMIC rejected and not just ignored.

Add a type argument to kiocb_set_rw_flags() to allow reads which have
RWF_ATOMIC set to be rejected.

Helper function generic_atomic_write_valid() can be used by FSes to verify
compliant writes. There we check that the iov_iter type is ubuf, which
implies iovcnt == 1 for pwritev2(); this is an initial restriction which
matches atomic_write_segments_max == 1. Initially the only user will be the
bdev file operations write handler. We will rely on the block layer BIO
submission path to ensure write sizes are compliant for the bdev, so we
don't need to check atomic write sizes yet. A usage sketch follows.
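
For illustration, a hypothetical ->write_iter() handler supporting atomic
writes might use the helper as below; foo_write_iter() is made up for this
sketch, and the bdev write handler added later in this series is the first
real user:

  static ssize_t foo_write_iter(struct kiocb *iocb, struct iov_iter *from)
  {
          /* IOCB_ATOMIC is only set if FMODE_CAN_ATOMIC_WRITE is set */
          if ((iocb->ki_flags & IOCB_ATOMIC) &&
              !generic_atomic_write_valid(iocb->ki_pos, from))
                  return -EINVAL;

          /* ... issue the write as normal ... */
          return 0;
  }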

Signed-off-by: Prasad Singamsetty <[email protected]>
jpg: merge into single patch and much rewrite
Signed-off-by: John Garry <[email protected]>
---
 fs/aio.c                |  8 ++++----
 fs/btrfs/ioctl.c        |  2 +-
 fs/read_write.c         |  2 +-
 include/linux/fs.h      | 33 ++++++++++++++++++++++++++++++++-
 include/uapi/linux/fs.h |  5 ++++-
 io_uring/rw.c           |  8 ++++----
 6 files changed, 46 insertions(+), 12 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 9cdaa2faa536..631e9aa34421 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1513,7 +1513,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
 	iocb_put(iocb);
 }
 
-static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int rw_type)
 {
 	int ret;
 
@@ -1539,7 +1539,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
 	} else
 		req->ki_ioprio = get_current_ioprio();
 
-	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, rw_type);
 	if (unlikely(ret))
 		return ret;
 
@@ -1591,7 +1591,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb,
 	struct file *file;
 	int ret;
 
-	ret = aio_prep_rw(req, iocb);
+	ret = aio_prep_rw(req, iocb, READ);
 	if (ret)
 		return ret;
 	file = req->ki_filp;
@@ -1618,7 +1618,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
 	struct file *file;
 	int ret;
 
-	ret = aio_prep_rw(req, iocb);
+	ret = aio_prep_rw(req, iocb, WRITE);
 	if (ret)
 		return ret;
 	file = req->ki_filp;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 294e31edec9d..058a27a30d21 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4581,7 +4581,7 @@ static int btrfs_ioctl_encoded_write(struct file *file, void __user *argp, bool
 		goto out_iov;
 
 	init_sync_kiocb(&kiocb, file);
-	ret = kiocb_set_rw_flags(&kiocb, 0);
+	ret = kiocb_set_rw_flags(&kiocb, 0, WRITE);
 	if (ret)
 		goto out_iov;
 	kiocb.ki_pos = pos;
diff --git a/fs/read_write.c b/fs/read_write.c
index d4c036e82b6c..a7dc1819192d 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -730,7 +730,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter,
 	ssize_t ret;
 
 	init_sync_kiocb(&kiocb, filp);
-	ret = kiocb_set_rw_flags(&kiocb, flags);
+	ret = kiocb_set_rw_flags(&kiocb, flags, type);
 	if (ret)
 		return ret;
 	kiocb.ki_pos = (ppos ? *ppos : 0);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 00fc429b0af0..c0a7083a62c6 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -45,6 +45,7 @@
 #include <linux/slab.h>
 #include <linux/maple_tree.h>
 #include <linux/rw_hint.h>
+#include <linux/uio.h>
 
 #include <asm/byteorder.h>
 #include <uapi/linux/fs.h>
@@ -121,6 +122,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
 #define FMODE_PWRITE		((__force fmode_t)0x10)
 /* File is opened for execution with sys_execve / sys_uselib */
 #define FMODE_EXEC		((__force fmode_t)0x20)
+
+/* File supports atomic writes */
+#define FMODE_CAN_ATOMIC_WRITE	((__force fmode_t)0x40)
+
 /* 32bit hashes as llseek() offset (for directories) */
 #define FMODE_32BITHASH         ((__force fmode_t)0x200)
 /* 64bit hashes as llseek() offset (for directories) */
@@ -317,6 +322,7 @@ struct readahead_control;
 #define IOCB_SYNC		(__force int) RWF_SYNC
 #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
 #define IOCB_APPEND		(__force int) RWF_APPEND
+#define IOCB_ATOMIC		(__force int) RWF_ATOMIC
 
 /* non-RWF related bits - start at 16 */
 #define IOCB_EVENTFD		(1 << 16)
@@ -351,6 +357,7 @@ struct readahead_control;
 	{ IOCB_SYNC,		"SYNC" }, \
 	{ IOCB_NOWAIT,		"NOWAIT" }, \
 	{ IOCB_APPEND,		"APPEND" }, \
+	{ IOCB_ATOMIC,		"ATOMIC"}, \
 	{ IOCB_EVENTFD,		"EVENTFD"}, \
 	{ IOCB_DIRECT,		"DIRECT" }, \
 	{ IOCB_WRITE,		"WRITE" }, \
@@ -3404,7 +3411,8 @@ static inline int iocb_flags(struct file *file)
 	return res;
 }
 
-static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
+static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags,
+				     int rw_type)
 {
 	int kiocb_flags = 0;
 
@@ -3423,6 +3431,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
 			return -EOPNOTSUPP;
 		kiocb_flags |= IOCB_NOIO;
 	}
+	if (flags & RWF_ATOMIC) {
+		if (rw_type != WRITE)
+			return -EOPNOTSUPP;
+		if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
+			return -EOPNOTSUPP;
+	}
 	kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
 	if (flags & RWF_SYNC)
 		kiocb_flags |= IOCB_DSYNC;
@@ -3614,4 +3628,21 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
 extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
 			   int advice);
 
+static inline
+bool generic_atomic_write_valid(loff_t pos, struct iov_iter *iter)
+{
+	size_t len = iov_iter_count(iter);
+
+	if (!iter_is_ubuf(iter))
+		return false;
+
+	if (!is_power_of_2(len))
+		return false;
+
+	if (!IS_ALIGNED(pos, len))
+		return false;
+
+	return true;
+}
+
 #endif /* _LINUX_FS_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 45e4e64fd664..191a7e88a8ab 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -329,9 +329,12 @@ typedef int __bitwise __kernel_rwf_t;
 /* per-IO negation of O_APPEND */
 #define RWF_NOAPPEND	((__force __kernel_rwf_t)0x00000020)
 
+/* Atomic Write */
+#define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
+
 /* mask of flags supported by the kernel */
 #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
-			 RWF_APPEND | RWF_NOAPPEND)
+			 RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC)
 
 /* Pagemap ioctl */
 #define PAGEMAP_SCAN	_IOWR('f', 16, struct pm_scan_arg)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 0585ebcc9773..2ad2256d4acf 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -714,7 +714,7 @@ static bool need_complete_io(struct io_kiocb *req)
 		S_ISBLK(file_inode(req->file)->i_mode);
 }
 
-static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
+static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
 {
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
 	struct kiocb *kiocb = &rw->kiocb;
@@ -729,7 +729,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
 		req->flags |= io_file_get_flags(file);
 
 	kiocb->ki_flags = file->f_iocb_flags;
-	ret = kiocb_set_rw_flags(kiocb, rw->flags);
+	ret = kiocb_set_rw_flags(kiocb, rw->flags, rw_type);
 	if (unlikely(ret))
 		return ret;
 	kiocb->ki_flags |= IOCB_ALLOC_CACHE;
@@ -797,7 +797,7 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
 		iov_iter_restore(&s->iter, &s->iter_state);
 		iovec = NULL;
 	}
-	ret = io_rw_init_file(req, FMODE_READ);
+	ret = io_rw_init_file(req, FMODE_READ, READ);
 	if (unlikely(ret)) {
 		kfree(iovec);
 		return ret;
@@ -1010,7 +1010,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 		iov_iter_restore(&s->iter, &s->iter_state);
 		iovec = NULL;
 	}
-	ret = io_rw_init_file(req, FMODE_WRITE);
+	ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
 	if (unlikely(ret)) {
 		kfree(iovec);
 		return ret;
-- 
2.31.1



* [PATCH v6 04/10] fs: Add initial atomic write support info to statx
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (2 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 03/10] fs: Initial atomic write support John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 05/10] block: Add core atomic write support John Garry
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

Extend the statx system call to return additional info about atomic write
support for a file.

Helper function generic_fill_statx_atomic_writes() can be used by FSes to
fill in the relevant statx fields. For now, atomic_write_segments_max will
always be 1; otherwise some rules would need to be imposed on iovec length
and alignment, which we don't want yet. A usage sketch follows.
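
For illustration, a hypothetical FS getattr handler reporting fixed 4KiB
atomic write units might look as below; foo_getattr() is made up for this
sketch, and the bdev statx support added later in this series is the first
real caller:

  static int foo_getattr(struct mnt_idmap *idmap, const struct path *path,
                         struct kstat *stat, u32 request_mask,
                         unsigned int query_flags)
  {
          struct inode *inode = d_inode(path->dentry);

          generic_fillattr(idmap, request_mask, inode, stat);
          if (request_mask & STATX_WRITE_ATOMIC)
                  generic_fill_statx_atomic_writes(stat, SZ_4K, SZ_4K);
          return 0;
  }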

Signed-off-by: Prasad Singamsetty <[email protected]>
jpg: relocate bdev support to another patch
Signed-off-by: John Garry <[email protected]>
---
 fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
 include/linux/fs.h        |  3 +++
 include/linux/stat.h      |  3 +++
 include/uapi/linux/stat.h |  9 ++++++++-
 4 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/fs/stat.c b/fs/stat.c
index 77cdc69eb422..83aaa555711d 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
 }
 EXPORT_SYMBOL(generic_fill_statx_attr);
 
+/**
+ * generic_fill_statx_atomic_writes - Fill in atomic writes statx attributes
+ * @stat:	Where to fill in the attribute flags
+ * @unit_min:	Minimum supported atomic write length in bytes
+ * @unit_max:	Maximum supported atomic write length in bytes
+ *
+ * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
+ * atomic write unit_min and unit_max values.
+ */
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max)
+{
+	/* Confirm that the request type is known */
+	stat->result_mask |= STATX_WRITE_ATOMIC;
+
+	/* Confirm that the file attribute type is known */
+	stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
+
+	if (unit_min) {
+		stat->atomic_write_unit_min = unit_min;
+		stat->atomic_write_unit_max = unit_max;
+		/* Initially only allow 1x segment */
+		stat->atomic_write_segments_max = 1;
+
+		/* Confirm atomic writes are actually supported */
+		stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
+	}
+}
+EXPORT_SYMBOL_GPL(generic_fill_statx_atomic_writes);
+
 /**
  * vfs_getattr_nosec - getattr without security checks
  * @path: file to get attributes from
@@ -658,6 +689,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
 	tmp.stx_mnt_id = stat->mnt_id;
 	tmp.stx_dio_mem_align = stat->dio_mem_align;
 	tmp.stx_dio_offset_align = stat->dio_offset_align;
+	tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
+	tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
+	tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
 
 	return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c0a7083a62c6..6ebefb079740 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3241,6 +3241,9 @@ extern const struct inode_operations page_symlink_inode_operations;
 extern void kfree_link(void *);
 void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
 void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+				      unsigned int unit_min,
+				      unsigned int unit_max);
 extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
 extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
 void __inode_add_bytes(struct inode *inode, loff_t bytes);
diff --git a/include/linux/stat.h b/include/linux/stat.h
index 52150570d37a..2c5e2b8c6559 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -53,6 +53,9 @@ struct kstat {
 	u32		dio_mem_align;
 	u32		dio_offset_align;
 	u64		change_cookie;
+	u32		atomic_write_unit_min;
+	u32		atomic_write_unit_max;
+	u32		atomic_write_segments_max;
 };
 
 /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
index 2f2ee82d5517..319ef4afb89e 100644
--- a/include/uapi/linux/stat.h
+++ b/include/uapi/linux/stat.h
@@ -127,7 +127,12 @@ struct statx {
 	__u32	stx_dio_mem_align;	/* Memory buffer alignment for direct I/O */
 	__u32	stx_dio_offset_align;	/* File offset alignment for direct I/O */
 	/* 0xa0 */
-	__u64	__spare3[12];	/* Spare space for future expansion */
+	__u32	stx_atomic_write_unit_min; /* Min atomic write unit in bytes */
+	__u32	stx_atomic_write_unit_max; /* Max atomic write unit in bytes */
+	__u32   stx_atomic_write_segments_max; /* Max atomic write segment count */
+	__u32   __spare2[1];
+	/* 0xb0 */
+	__u64	__spare3[10];	/* Spare space for future expansion */
 	/* 0x100 */
 };
 
@@ -155,6 +160,7 @@ struct statx {
 #define STATX_MNT_ID		0x00001000U	/* Got stx_mnt_id */
 #define STATX_DIOALIGN		0x00002000U	/* Want/got direct I/O alignment info */
 #define STATX_MNT_ID_UNIQUE	0x00004000U	/* Want/got extended stx_mount_id */
+#define STATX_WRITE_ATOMIC	0x00008000U	/* Want/got atomic_write_* fields */
 
 #define STATX__RESERVED		0x80000000U	/* Reserved for future struct statx expansion */
 
@@ -190,6 +196,7 @@ struct statx {
 #define STATX_ATTR_MOUNT_ROOT		0x00002000 /* Root of a mount */
 #define STATX_ATTR_VERITY		0x00100000 /* [I] Verity protected file */
 #define STATX_ATTR_DAX			0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC		0x00400000 /* File supports atomic write operations */
 
 
 #endif /* _UAPI_LINUX_STAT_H */
-- 
2.31.1



* [PATCH v6 05/10] block: Add core atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (3 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 04/10] fs: Add initial atomic write support info to statx John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 17:11   ` Randy Dunlap
  2024-03-26 13:38 ` [PATCH v6 06/10] block: Add atomic write support for statx John Garry
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry, Himanshu Madhani

Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag

New request_queue limits are added, as follows (a driver configuration
sketch follows this list):
- atomic_write_hw_max is set by the block driver and is the maximum length
  of an atomic write which the device may support. It is not
  necessarily a power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max and
  max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
  and atomic_write_max_sectors would be the limit on a merged atomic write
  request size. This value is not capped at max_sectors, as the value in
  max_sectors can be controlled from userspace, and it would only cause
  trouble if userspace could limit atomic_write_unit_max_bytes and the
  other atomic write limits.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
  min/max length of an atomic write unit which the device may support. They
  both must be a power-of-2. Typically atomic_write_hw_unit_max will hold
  the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
  atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
  Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
  indicates an LBA space boundary; an atomic write which straddles this
  boundary is no longer executed atomically by the disk. The value must
  be a power-of-2. Note that it would be acceptable to enforce a rule
  that atomic_write_hw_boundary is a multiple of atomic_write_hw_unit_max,
  but the resultant code would be more complicated.
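
For illustration (not part of this patch), a driver advertising a 64KiB
atomic write capability, 4KiB minimum units, and a 64KiB boundary would
call the new helpers as below; the derived atomic_write_unit_{min, max}
limits are then capped by the block core limits described above:

  blk_queue_atomic_write_unit_min_bytes(q, SZ_4K);
  blk_queue_atomic_write_unit_max_bytes(q, SZ_64K);
  blk_queue_atomic_write_max_bytes(q, SZ_64K);
  blk_queue_atomic_write_boundary_bytes(q, SZ_64K);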

All atomic write limits are set to 0 by default, indicating no atomic
write support. Even though Linux assumes that a logical block can always
be written atomically, we ignore this as it is not of particular interest.
Stacked devices are not supported for now either.

An atomic write must always be submitted to the block driver as part of a
single request. As such, only a single BIO must be submitted to the block
layer for an atomic write. When a single atomic write BIO is submitted, it
cannot be split. As such, atomic_write_unit_{min, max}_bytes are limited
by the maximum guaranteed BIO size which will never be required to be
split. This max size is calculated from the request_queue max segments and
the number of bvecs a BIO can fit, BIO_MAX_VECS. Currently we rely on
userspace issuing a write with iovcnt=1 for pwritev2() - as such, we can
rely on each segment containing PAGE_SIZE of data, apart from the first
and last segments, which can each only be guaranteed to fit a logical
block size of data. The first and last segments will be LBS length and
aligned, as we also rely on direct IO alignment rules. A worked example
follows.
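
As a worked example (not from this patch): with max_segments = 128, a 4K
PAGE_SIZE, and a 512B logical block size, the guaranteed fit is
2 * 512 + 126 * 4096 = 517120 bytes, which rounddown_pow_of_two() reduces
to a 256KiB cap on atomic_write_unit_{min, max}.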

New sysfs files are added to report the following atomic write limits:
- atomic_write_unit_max_bytes - the atomic_write_unit_max limit in bytes
- atomic_write_unit_min_bytes - the atomic_write_unit_min limit in bytes
- atomic_write_boundary_bytes - the atomic_write_hw_boundary limit in
				bytes
- atomic_write_max_bytes      - same as atomic_write_max_sectors in bytes

Atomic writes may only be merged with other atomic writes and only under
the following conditions:
- total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary (see the worked example
  below)
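
As a worked example (not from this patch): with a 64KiB boundary, merging
a 16KiB atomic write at offset 48KiB with a 16KiB atomic write at offset
64KiB would produce a 32KiB request covering offsets 48KiB to 80KiB; that
request straddles the boundary at 64KiB, so the merge is rejected.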

Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. If a bdev is a partition, the
partition start must be aligned with both atomic_write_unit_min and
atomic_write_hw_boundary.

FSes will rely on the block layer to validate that a submitted atomic
write BIO is of valid size, so add blk_validate_atomic_write_op_size() for
this purpose. Userspace expects an atomic write of invalid size to be
rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
BLK_STS_INVAL when a BIO needs to be split, as this should mean an
invalid size BIO.

Flag REQ_ATOMIC is used for indicating an atomic write.

Co-developed-by: Himanshu Madhani <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
 Documentation/ABI/stable/sysfs-block |  52 +++++++++++++
 block/blk-core.c                     |  19 +++++
 block/blk-merge.c                    |  95 ++++++++++++++++++++++-
 block/blk-settings.c                 | 109 +++++++++++++++++++++++++++
 block/blk-sysfs.c                    |  33 ++++++++
 block/blk.h                          |   3 +
 include/linux/blk_types.h            |   8 +-
 include/linux/blkdev.h               |  61 +++++++++++++++
 8 files changed, 378 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 1fe9a553c37b..4c775f4bdefe 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -21,6 +21,58 @@ Description:
 		device is offset from the internal allocation unit's
 		natural alignment.
 
+What:		/sys/block/<disk>/atomic_write_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the maximum atomic write
+		size reported by the device. This parameter is relevant
+		for merging of writes, where a merged atomic write
+		operation must not exceed this number of bytes.
+		This parameter may be greater than the value in
+		atomic_write_unit_max_bytes as
+		atomic_write_unit_max_bytes will be rounded down to a
+		power-of-two and atomic_write_unit_max_bytes may also be
+		limited by some other queue limits, such as max_segments.
+		This parameter - along with atomic_write_unit_min_bytes
+		and atomic_write_unit_max_bytes - will not be larger than
+		max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_min_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the smallest block which can
+		be written atomically with an atomic write operation. All
+		atomic write operations must begin at an
+		atomic_write_unit_min boundary and must be multiples of
+		atomic_write_unit_min. This value must be a power-of-two.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter defines the largest block which can be
+		written atomically with an atomic write operation. This
+		value must be a multiple of atomic_write_unit_min and must
+		be a power-of-two. This value will not be larger than
+		atomic_write_max_bytes.
+
+
+What:		/sys/block/<disk>/atomic_write_boundary_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] A device may need to internally split I/Os which
+		straddle a given logical block address boundary. In that
+		case a single atomic write operation will be processed as
+		one or more sub-operations which each complete atomically.
+		This parameter specifies the size in bytes of the atomic
+		boundary if one is reported by the device. This value must
+		be a power-of-two.
+
 
 What:		/sys/block/<disk>/diskseq
 Date:		February 2021
diff --git a/block/blk-core.c b/block/blk-core.c
index a16b5abdbbf5..de868c91a295 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -174,6 +174,8 @@ static const struct {
 	/* Command duration limit device-side timeout */
 	[BLK_STS_DURATION_LIMIT]	= { -ETIME, "duration limit exceeded" },
 
+	[BLK_STS_INVAL]		= { -EINVAL,	"invalid" },
+
 	/* everything else not covered above: */
 	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
 };
@@ -729,6 +731,18 @@ void submit_bio_noacct_nocheck(struct bio *bio)
 		__submit_bio_noacct(bio);
 }
 
+static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
+						 struct bio *bio)
+{
+	if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q))
+		return BLK_STS_INVAL;
+
+	if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q))
+		return BLK_STS_INVAL;
+
+	return BLK_STS_OK;
+}
+
 /**
  * submit_bio_noacct - re-submit a bio to the block device layer for I/O
  * @bio:  The bio describing the location in memory and on the device.
@@ -786,6 +800,11 @@ void submit_bio_noacct(struct bio *bio)
 	switch (bio_op(bio)) {
 	case REQ_OP_READ:
 	case REQ_OP_WRITE:
+		if (bio->bi_opf & REQ_ATOMIC) {
+			status = blk_validate_atomic_write_op_size(q, bio);
+			if (status != BLK_STS_OK)
+				goto end_io;
+		}
 		break;
 	case REQ_OP_FLUSH:
 		/*
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 6f9d9ca7922b..34a68e131168 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -18,6 +18,46 @@
 #include "blk-rq-qos.h"
 #include "blk-throttle.h"
 
+/*
+ * rq_straddles_atomic_write_boundary - check for boundary violation
+ * @rq: request to check
+ * @front_adjust: data size to be appended to the front
+ * @back_adjust: data size to be appended to the back
+ *
+ * Determine whether merging a request or bio into another request will result
+ * in a merged request which straddles an atomic write boundary.
+ *
+ * The value @front_adjust is the data which would be appended to the front of
+ * @rq, while the value @back_adjust is the data which would be appended to the
+ * back of @rq. Callers will typically only have either @front_adjust or
+ * @back_adjust as non-zero.
+ *
+ */
+static bool rq_straddles_atomic_write_boundary(struct request *rq,
+					unsigned int front_adjust,
+					unsigned int back_adjust)
+{
+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
+	u64 mask, start_rq_pos, end_rq_pos;
+
+	if (!boundary)
+		return false;
+
+	start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT;
+	end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1;
+
+	start_rq_pos -= front_adjust;
+	end_rq_pos += back_adjust;
+
+	mask = ~(boundary - 1);
+
+	/* Top bits are different, so crossed a boundary */
+	if ((start_rq_pos & mask) != (end_rq_pos & mask))
+		return true;
+
+	return false;
+}
+
 static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
 {
 	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
@@ -167,7 +207,16 @@ static inline unsigned get_max_io_size(struct bio *bio,
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
-	unsigned max_sectors = lim->max_sectors, start, end;
+	unsigned max_sectors, start, end;
+
+	/*
+	 * We ignore lim->max_sectors for atomic writes simply because
+	 * it may be less than the bio size, which we cannot tolerate.
+	 */
+	if (bio->bi_opf & REQ_ATOMIC)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
 
 	if (lim->chunk_sectors) {
 		max_sectors = min(max_sectors,
@@ -305,6 +354,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	*segs = nsegs;
 	return NULL;
 split:
+	if (bio->bi_opf & REQ_ATOMIC) {
+		bio->bi_status = BLK_STS_INVAL;
+		bio_endio(bio);
+		return ERR_PTR(-EINVAL);
+	}
 	/*
 	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
 	 * with EAGAIN if splitting is required and return an error pointer.
@@ -645,6 +699,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, bio->bi_iter.bi_size)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -664,6 +725,13 @@ static int ll_front_merge_fn(struct request *req, struct bio *bio,
 		return 0;
 	}
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				bio->bi_iter.bi_size, 0)) {
+			return 0;
+		}
+	}
+
 	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
@@ -700,6 +768,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
 		return 0;
 
+	if (req->cmd_flags & REQ_ATOMIC) {
+		if (rq_straddles_atomic_write_boundary(req,
+				0, blk_rq_bytes(next))) {
+			return 0;
+		}
+	}
+
 	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
 	if (total_phys_segments > blk_rq_get_max_segments(req))
 		return 0;
@@ -795,6 +870,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+					      struct bio *bio)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+					   struct request *next)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.
@@ -818,6 +905,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!blk_atomic_write_mergeable_rqs(req, next))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -949,6 +1039,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+		return false;
+
 	return true;
 }
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 3c7d8d638ab5..98d6c2f59ccf 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -97,6 +97,41 @@ static int blk_validate_zoned_limits(struct queue_limits *lim)
 	return 0;
 }
 
+/*
+ * Returns max guaranteed bytes which we can fit in a bio.
+ *
+ * We always assume that we can fit at least PAGE_SIZE in a segment, apart
+ * from first and last segments.
+ */
+static
+unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *limits)
+{
+	unsigned int max_segments = min(BIO_MAX_VECS, limits->max_segments);
+	unsigned int length;
+
+	length = min(max_segments, 2) * limits->logical_block_size;
+	if (max_segments > 2)
+		length += (max_segments - 2) * PAGE_SIZE;
+
+	return length;
+}
+
+static void blk_atomic_writes_update_limits(struct queue_limits *limits)
+{
+	unsigned int unit_limit = min(limits->max_hw_sectors << SECTOR_SHIFT,
+					blk_queue_max_guaranteed_bio(limits));
+
+	unit_limit = rounddown_pow_of_two(unit_limit);
+
+	limits->atomic_write_max_sectors =
+		min(limits->atomic_write_hw_max >> SECTOR_SHIFT,
+			limits->max_hw_sectors);
+	limits->atomic_write_unit_min =
+		min(limits->atomic_write_hw_unit_min, unit_limit);
+	limits->atomic_write_unit_max =
+		min(limits->atomic_write_hw_unit_max, unit_limit);
+}
+
 /*
  * Check that the limits in lim are valid, initialize defaults for unset
  * values, and cap values based on others where needed.
@@ -221,6 +256,23 @@ static int blk_validate_limits(struct queue_limits *lim)
 		lim->misaligned = 0;
 	}
 
+	/*
+	 * The atomic write boundary size just needs to be a multiple of
+	 * unit_max (and not necessarily a power-of-2), so this following check
+	 * could be relaxed in future.
+	 * Furthermore, if needed, unit_max could be reduced so that the
+	 * boundary size was compliant (with a !power-of-2 boundary).
+	 */
+	if (lim->atomic_write_hw_boundary &&
+	    !is_power_of_2(lim->atomic_write_hw_boundary)) {
+
+		lim->atomic_write_hw_max = 0;
+		lim->atomic_write_hw_boundary = 0;
+		lim->atomic_write_hw_unit_min = 0;
+		lim->atomic_write_hw_unit_max = 0;
+	}
+	blk_atomic_writes_update_limits(lim);
+
 	return blk_validate_zoned_limits(lim);
 }
 
@@ -344,6 +396,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 				 limits->logical_block_size >> SECTOR_SHIFT);
 	limits->max_sectors = max_sectors;
 
+	blk_atomic_writes_update_limits(limits);
+
 	if (!q->disk)
 		return;
 	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
@@ -384,6 +438,61 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
+/**
+ * blk_queue_atomic_write_max_bytes - set max bytes supported by
+ * the device for atomic write operations.
+ * @q:  the request queue for the device
+ * @bytes: maximum bytes supported
+ */
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				      unsigned int bytes)
+{
+	q->limits.atomic_write_hw_max = bytes;
+	blk_atomic_writes_update_limits(&q->limits);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes);
+
+/**
+ * blk_queue_atomic_write_boundary_bytes - Device's logical block address space
+ * which an atomic write should not cross.
+ * @q:  the request queue for the device
+ * @bytes: must be a power-of-two.
+ */
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+					   unsigned int bytes)
+{
+	q->limits.atomic_write_hw_boundary = bytes;
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_boundary_bytes);
+
+/**
+ * blk_queue_atomic_write_unit_min_bytes - smallest unit that can be written
+ * atomically to the device.
+ * @q:  the request queue for the device
+ * @bytes: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_min_bytes(struct request_queue *q,
+					     unsigned int bytes)
+{
+	q->limits.atomic_write_hw_unit_min = bytes;
+	blk_atomic_writes_update_limits(&q->limits);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_min_bytes);
+
+/**
+ * blk_queue_atomic_write_unit_max_bytes - largest unit that can be written
+ * atomically to the device.
+ * @q: the request queue for the device
+ * @bytes: must be a power-of-two.
+ */
+void blk_queue_atomic_write_unit_max_bytes(struct request_queue *q,
+					     unsigned int bytes)
+{
+	q->limits.atomic_write_hw_unit_max = bytes;
+	blk_atomic_writes_update_limits(&q->limits);
+}
+EXPORT_SYMBOL(blk_queue_atomic_write_unit_max_bytes);
+
 /**
  * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
  * @q:  the request queue for the device
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 8c8f69d8ba48..e2ff824ce02f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);
@@ -495,6 +519,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -622,6 +651,10 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_atomic_write_max_bytes_entry.attr,
+	&queue_atomic_write_boundary_entry.attr,
+	&queue_atomic_write_unit_min_entry.attr,
+	&queue_atomic_write_unit_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/blk.h b/block/blk.h
index dc2fa6f88adc..5e49c14525df 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -192,6 +192,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (rq->cmd_flags & REQ_ATOMIC)
+		return q->limits.atomic_write_max_sectors;
+
 	return q->limits.max_sectors;
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index cb1526ec44b5..b7d35ead4d1b 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -179,6 +179,11 @@ typedef u16 blk_short_t;
  */
 #define BLK_STS_DURATION_LIMIT	((__force blk_status_t)18)
 
+/*
+ * Invalid size or alignment.
+ */
+#define BLK_STS_INVAL	((__force blk_status_t)19)
+
 /**
  * blk_path_error - returns true if error may be path related
  * @error: status the request was completed with
@@ -381,7 +386,7 @@ enum req_flag_bits {
 	__REQ_SWAP,		/* swap I/O */
 	__REQ_DRV,		/* for driver use */
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
-
+	__REQ_ATOMIC,		/* for atomic write operations */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -413,6 +418,7 @@ enum req_flag_bits {
 #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
 #define REQ_DRV		(__force blk_opf_t)(1ULL << __REQ_DRV)
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
+#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c3e8f7cf96be..07145b0acbc8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -300,6 +300,15 @@ struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	/* atomic write limits */
+	unsigned int		atomic_write_hw_max;
+	unsigned int		atomic_write_max_sectors;
+	unsigned int		atomic_write_hw_boundary;
+	unsigned int		atomic_write_hw_unit_min;
+	unsigned int		atomic_write_unit_min;
+	unsigned int		atomic_write_hw_unit_max;
+	unsigned int		atomic_write_unit_max;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -916,6 +925,14 @@ void blk_queue_zone_write_granularity(struct request_queue *q,
 				      unsigned int size);
 extern void blk_queue_alignment_offset(struct request_queue *q,
 				       unsigned int alignment);
+void blk_queue_atomic_write_max_bytes(struct request_queue *q,
+				unsigned int bytes);
+void blk_queue_atomic_write_boundary_bytes(struct request_queue *q,
+				unsigned int bytes);
+void blk_queue_atomic_write_unit_max_bytes(struct request_queue *q,
+				unsigned int bytes);
+void blk_queue_atomic_write_unit_min_bytes(struct request_queue *q,
+				unsigned int bytes);
 void disk_update_readahead(struct gendisk *disk);
 extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
 extern void blk_queue_io_min(struct request_queue *q, unsigned int min);
@@ -1339,6 +1356,30 @@ static inline int queue_dma_alignment(const struct request_queue *q)
 	return q ? q->limits.dma_alignment : 511;
 }
 
+static inline unsigned int
+queue_atomic_write_unit_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_max;
+}
+
+static inline unsigned int
+queue_atomic_write_unit_min_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_unit_min;
+}
+
+static inline unsigned int
+queue_atomic_write_boundary_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_hw_boundary;
+}
+
+static inline unsigned int
+queue_atomic_write_max_bytes(const struct request_queue *q)
+{
+	return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
+}
+
 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
 {
 	return queue_dma_alignment(bdev_get_queue(bdev));
@@ -1592,6 +1633,26 @@ struct io_comp_batch {
 	void (*complete)(struct io_comp_batch *);
 };
 
+static inline bool bdev_can_atomic_write(struct block_device *bdev)
+{
+	struct request_queue *bd_queue = bdev->bd_queue;
+	struct queue_limits *limits = &bd_queue->limits;
+
+	if (!limits->atomic_write_unit_min)
+		return false;
+
+	if (bdev_is_partition(bdev)) {
+		sector_t bd_start_sect = bdev->bd_start_sect;
+		unsigned int alignment =
+			max(limits->atomic_write_unit_min,
+			    limits->atomic_write_hw_boundary);
+		if (!IS_ALIGNED(bd_start_sect, alignment >> SECTOR_SHIFT))
+			return false;
+	}
+
+	return true;
+}
+
 #define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
 
 #endif /* _LINUX_BLKDEV_H */
-- 
2.31.1



* [PATCH v6 06/10] block: Add atomic write support for statx
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (4 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 05/10] block: Add core atomic write support John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 07/10] block: Add fops atomic write support John Garry
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, Prasad Singamsetty, John Garry

From: Prasad Singamsetty <[email protected]>

Extend the statx system call to return additional info about atomic write
support if the specified file is a block device. A usage sketch follows.
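
For illustration (not part of this patch), userspace could probe a bdev for
the capability as below (a fragment, assuming updated uapi headers):

  struct statx stx;

  statx(AT_FDCWD, "/dev/sda", 0, STATX_WRITE_ATOMIC, &stx);
  if ((stx.stx_attributes_mask & STATX_ATTR_WRITE_ATOMIC) &&
      (stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC))
          printf("unit_min=%u unit_max=%u segments_max=%u\n",
                 stx.stx_atomic_write_unit_min,
                 stx.stx_atomic_write_unit_max,
                 stx.stx_atomic_write_segments_max);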

Signed-off-by: Prasad Singamsetty <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
 block/bdev.c           | 36 ++++++++++++++++++++++++++----------
 fs/stat.c              | 16 +++++++++-------
 include/linux/blkdev.h |  6 ++++--
 3 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/block/bdev.c b/block/bdev.c
index 7a5f611c3d2e..f3dd9f3c8838 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1182,23 +1182,39 @@ void sync_bdevs(bool wait)
 }
 
 /*
- * Handle STATX_DIOALIGN for block devices.
- *
- * Note that the inode passed to this is the inode of a block device node file,
- * not the block device's internal inode.  Therefore it is *not* valid to use
- * I_BDEV() here; the block device has to be looked up by i_rdev instead.
+ * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
  */
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+		u32 request_mask)
 {
 	struct block_device *bdev;
 
-	bdev = blkdev_get_no_open(inode->i_rdev);
+	if (!(request_mask & (STATX_DIOALIGN | STATX_WRITE_ATOMIC)))
+		return;
+
+	/*
+	 * Note that backing_inode is the inode of a block device node file,
+	 * not the block device's internal inode.  Therefore it is *not* valid
+	 * to use I_BDEV() here; the block device has to be looked up by i_rdev
+	 * instead.
+	 */
+	bdev = blkdev_get_no_open(backing_inode->i_rdev);
 	if (!bdev)
 		return;
 
-	stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
-	stat->dio_offset_align = bdev_logical_block_size(bdev);
-	stat->result_mask |= STATX_DIOALIGN;
+	if (request_mask & STATX_DIOALIGN) {
+		stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
+		stat->dio_offset_align = bdev_logical_block_size(bdev);
+		stat->result_mask |= STATX_DIOALIGN;
+	}
+
+	if (request_mask & STATX_WRITE_ATOMIC && bdev_can_atomic_write(bdev)) {
+		struct request_queue *bd_queue = bdev->bd_queue;
+
+		generic_fill_statx_atomic_writes(stat,
+			queue_atomic_write_unit_min_bytes(bd_queue),
+			queue_atomic_write_unit_max_bytes(bd_queue));
+	}
 
 	blkdev_put_no_open(bdev);
 }
diff --git a/fs/stat.c b/fs/stat.c
index 83aaa555711d..0e296925a56b 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -265,6 +265,7 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
 {
 	struct path path;
 	unsigned int lookup_flags = getname_statx_lookup_flags(flags);
+	struct inode *backing_inode;
 	int error;
 
 	if (flags & ~(AT_SYMLINK_NOFOLLOW | AT_NO_AUTOMOUNT | AT_EMPTY_PATH |
@@ -290,13 +291,14 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
 		stat->attributes |= STATX_ATTR_MOUNT_ROOT;
 	stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;
 
-	/* Handle STATX_DIOALIGN for block devices. */
-	if (request_mask & STATX_DIOALIGN) {
-		struct inode *inode = d_backing_inode(path.dentry);
-
-		if (S_ISBLK(inode->i_mode))
-			bdev_statx_dioalign(inode, stat);
-	}
+	/*
+	 * If this is a block device inode, override the filesystem
+	 * attributes with the block device specific parameters that need to be
+	 * obtained from the bdev backing inode.
+	 */
+	backing_inode = d_backing_inode(path.dentry);
+	if (S_ISBLK(backing_inode->i_mode))
+		bdev_statx(backing_inode, stat, request_mask);
 
 	path_put(&path);
 	if (retry_estale(error, lookup_flags)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 07145b0acbc8..d16b0c451b27 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1594,7 +1594,8 @@ int sync_blockdev(struct block_device *bdev);
 int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
 int sync_blockdev_nowait(struct block_device *bdev);
 void sync_bdevs(bool wait);
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
+void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+		u32 request_mask);
 void printk_all_partitions(void);
 int __init early_lookup_bdev(const char *pathname, dev_t *dev);
 #else
@@ -1612,7 +1613,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
 static inline void sync_bdevs(bool wait)
 {
 }
-static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+static inline void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+				u32 request_mask)
 {
 }
 static inline void printk_all_partitions(void)
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v6 07/10] block: Add fops atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (5 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 06/10] block: Add atomic write support for statx John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 08/10] scsi: sd: Atomic " John Garry
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

Support atomic writes by submitting a single BIO with the REQ_ATOMIC flag
set.

It must be ensured that the atomic write adheres to its rules, like a
naturally aligned offset, so call blkdev_dio_invalid() ->
generic_atomic_write_valid() [with blkdev_dio_unaligned() renamed to
blkdev_dio_invalid()] for this purpose. The BIO submission path already
checks for atomic writes which are too large, so there is no need to
check that here.

In blkdev_direct_IO(), if nr_pages exceeds BIO_MAX_VECS, then we cannot
produce a single BIO, so return an error in this case.

Finally, set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic
writes and the file is opened with O_DIRECT.
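
For reference, issuing such an atomic write from userspace could then
look roughly like the sketch below. This is illustrative only - it
assumes RWF_ATOMIC from this series and a 4KiB atomic write unit; real
code must size and align the write per the statx-reported limits:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>

int main(void)
{
	struct iovec iov;
	void *buf;
	int fd;

	fd = open("/dev/sda", O_WRONLY | O_DIRECT);
	if (fd < 0)
		return 1;
	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 4096;	/* within [unit_min, unit_max], power-of-2 */
	/* offset 0 is naturally aligned wrt the 4KiB length */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) != 4096)
		return 1;
	return 0;
}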

Signed-off-by: John Garry <[email protected]>
---
 block/fops.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/block/fops.c b/block/fops.c
index c091ea43bca3..34f788348352 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -34,9 +34,12 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
 	return opf;
 }
 
-static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
-			      struct iov_iter *iter)
+static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+				struct iov_iter *iter, bool is_atomic)
 {
+	if (is_atomic && !generic_atomic_write_valid(pos, iter))
+		return true;
+
 	return pos & (bdev_logical_block_size(bdev) - 1) ||
 		!bdev_iter_is_aligned(bdev, iter);
 }
@@ -72,6 +75,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
 	bio.bi_write_hint = file_inode(iocb->ki_filp)->i_write_hint;
 	bio.bi_ioprio = iocb->ki_ioprio;
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio.bi_opf |= REQ_ATOMIC;
 
 	ret = bio_iov_iter_get_pages(&bio, iter);
 	if (unlikely(ret))
@@ -343,6 +348,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 		task_io_account_write(bio->bi_iter.bi_size);
 	}
 
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio->bi_opf |= REQ_ATOMIC;
+
 	if (iocb->ki_flags & IOCB_NOWAIT)
 		bio->bi_opf |= REQ_NOWAIT;
 
@@ -359,12 +367,13 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+	bool is_atomic = iocb->ki_flags & IOCB_ATOMIC;
 	unsigned int nr_pages;
 
 	if (!iov_iter_count(iter))
 		return 0;
 
-	if (blkdev_dio_unaligned(bdev, iocb->ki_pos, iter))
+	if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic))
 		return -EINVAL;
 
 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
@@ -373,6 +382,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 			return __blkdev_direct_IO_simple(iocb, iter, bdev,
 							nr_pages);
 		return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
+	} else if (is_atomic) {
+		return -EINVAL;
 	}
 	return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
 }
@@ -612,6 +623,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
 	if (!bdev)
 		return -ENXIO;
 
+	if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT)
+		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
 	ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
 	if (ret)
 		blkdev_put_no_open(bdev);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v6 08/10] scsi: sd: Atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (6 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 07/10] block: Add fops atomic write support John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 09/10] scsi: scsi_debug: " John Garry
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

Support is divided into two main areas:
- reading VPD pages and setting sdev request_queue limits
- supporting the WRITE ATOMIC (16) command and tracing

The relevant block limits VPD page needs to be read to allow the block
layer request_queue atomic write limits to be set. These VPD page limits
are described in sbc4r22 section 6.6.4 - Block limits VPD page.

There are five limits of interest:
- MAXIMUM ATOMIC TRANSFER LENGTH
- ATOMIC ALIGNMENT
- ATOMIC TRANSFER LENGTH GRANULARITY
- MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY
- MAXIMUM ATOMIC BOUNDARY SIZE

MAXIMUM ATOMIC TRANSFER LENGTH is the maximum length for a WRITE ATOMIC
(16) command. It will not be greater than the device MAXIMUM TRANSFER
LENGTH.

ATOMIC ALIGNMENT and ATOMIC TRANSFER LENGTH GRANULARITY are the minimum
alignment and length values for an atomic write in terms of logical blocks.

Unlike NVMe, SCSI does not specify an LBA space boundary, but does specify
a per-IO boundary granularity. The maximum boundary size is specified in
MAXIMUM ATOMIC BOUNDARY SIZE. When used, this boundary value is set in the
WRITE ATOMIC (16) ATOMIC BOUNDARY field - layout for the WRITE_ATOMIC_16
command can be found in sbc4r22 section 5.48. This boundary value is the
granularity size at which the device may atomically write the data. A value
of zero in WRITE ATOMIC (16) ATOMIC BOUNDARY field means that all data must
be atomically written together.

MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY is the maximum atomic write
length if a non-zero boundary value is set.

For atomic write support, the WRITE ATOMIC (16) boundary is not of much
interest, as the block layer expects each request submitted to be executed
atomically. However, the SCSI spec does leave itself open to a quirky
scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero, yet MAXIMUM ATOMIC
TRANSFER LENGTH WITH BOUNDARY and MAXIMUM ATOMIC BOUNDARY SIZE are both
non-zero. This case will be supported.

To set the block layer request_queue atomic write capabilities, sanitize
the VPD page limits and set limits as follows (a worked example is given
after this list):
- atomic_write_unit_min is derived from the granularity and alignment
  values. If no granularity value is set, use the physical block size
- atomic_write_unit_max is derived from MAXIMUM ATOMIC TRANSFER LENGTH. In
  the scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero and boundary
  limits are non-zero, use MAXIMUM ATOMIC BOUNDARY SIZE for
  atomic_write_unit_max. New flag scsi_disk.use_atomic_write_boundary is
  set for this scenario.
- atomic_write_boundary_bytes is set to zero always
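
As a worked example (illustrative values only): for a 512B logical block
size, a 4KiB physical block size, ATOMIC TRANSFER LENGTH GRANULARITY = 0
and MAXIMUM ATOMIC TRANSFER LENGTH = 128 blocks, we get:
- unit_min = rounddown_pow_of_two(4096 / 512) = 8 blocks = 4KiB
- unit_max = rounddown_pow_of_two(128) = 128 blocks = 64KiB
- atomic_write_max_bytes = 128 * 512 = 64KiB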

SCSI also supports a WRITE ATOMIC (32) command, which is for type 2
protection enabled. This is not going to be supported now, so check for
T10_PI_TYPE2_PROTECTION when setting any request_queue limits.

To handle an atomic write request, add support for WRITE ATOMIC (16)
command in handler sd_setup_atomic_cmnd(). Flag use_atomic_write_boundary
is checked here for encoding ATOMIC BOUNDARY field.
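
For orientation, the CDB encoding in sd_setup_atomic_cmnd() is
(summarized from the code below; see sbc4r22 section 5.48 for the
authoritative layout):
- byte 0:      opcode (WRITE_ATOMIC_16, 0x9c)
- byte 1:      protection / DPO / FUA flags
- bytes 2-9:   LBA (big endian)
- bytes 10-11: ATOMIC BOUNDARY (zero means all data must be written
               atomically together)
- bytes 12-13: TRANSFER LENGTH in logical blocks
- bytes 14-15: zeroed here (group number / CONTROL)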

Trace info is also added for WRITE_ATOMIC_16 command.

Signed-off-by: John Garry <[email protected]>
---
 drivers/scsi/scsi_trace.c   | 22 +++++++++
 drivers/scsi/sd.c           | 93 ++++++++++++++++++++++++++++++++++++-
 drivers/scsi/sd.h           |  8 ++++
 include/scsi/scsi_proto.h   |  1 +
 include/trace/events/scsi.h |  1 +
 5 files changed, 124 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 41a950075913..3e47c4472a80 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -325,6 +325,26 @@ scsi_trace_zbc_out(struct trace_seq *p, unsigned char *cdb, int len)
 	return ret;
 }
 
+static const char *
+scsi_trace_atomic_write16_out(struct trace_seq *p, unsigned char *cdb, int len)
+{
+	const char *ret = trace_seq_buffer_ptr(p);
+	unsigned int boundary_size;
+	unsigned int nr_blocks;
+	sector_t lba;
+
+	lba = get_unaligned_be64(&cdb[2]);
+	boundary_size = get_unaligned_be16(&cdb[10]);
+	nr_blocks = get_unaligned_be16(&cdb[12]);
+
+	trace_seq_printf(p, "lba=%llu txlen=%u boundary_size=%u",
+			  lba, nr_blocks, boundary_size);
+
+	trace_seq_putc(p, 0);
+
+	return ret;
+}
+
 static const char *
 scsi_trace_varlen(struct trace_seq *p, unsigned char *cdb, int len)
 {
@@ -385,6 +405,8 @@ scsi_trace_parse_cdb(struct trace_seq *p, unsigned char *cdb, int len)
 		return scsi_trace_zbc_in(p, cdb, len);
 	case ZBC_OUT:
 		return scsi_trace_zbc_out(p, cdb, len);
+	case WRITE_ATOMIC_16:
+		return scsi_trace_atomic_write16_out(p, cdb, len);
 	default:
 		return scsi_trace_misc(p, cdb, len);
 	}
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index ccff8f2e2e75..60046299844f 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -917,6 +917,65 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd)
 	return scsi_alloc_sgtables(cmd);
 }
 
+static void sd_config_atomic(struct scsi_disk *sdkp)
+{
+	unsigned int logical_block_size = sdkp->device->sector_size,
+		physical_block_size_sectors, max_atomic, unit_min, unit_max;
+	struct request_queue *q = sdkp->disk->queue;
+
+	if ((!sdkp->max_atomic && !sdkp->max_atomic_with_boundary) ||
+	    sdkp->protection_type == T10_PI_TYPE2_PROTECTION)
+		return;
+
+	physical_block_size_sectors = sdkp->physical_block_size /
+					sdkp->device->sector_size;
+
+	unit_min = rounddown_pow_of_two(sdkp->atomic_granularity ?
+					sdkp->atomic_granularity :
+					physical_block_size_sectors);
+
+	/*
+	 * Only use atomic boundary when we have the odd scenario of
+	 * sdkp->max_atomic == 0, which the spec does permit.
+	 */
+	if (sdkp->max_atomic) {
+		max_atomic = sdkp->max_atomic;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic);
+		sdkp->use_atomic_write_boundary = 0;
+	} else {
+		max_atomic = sdkp->max_atomic_with_boundary;
+		unit_max = rounddown_pow_of_two(sdkp->max_atomic_boundary);
+		sdkp->use_atomic_write_boundary = 1;
+	}
+
+	/*
+	 * Ensure compliance with granularity and alignment. For now, keep it
+	 * simple and just don't support atomic writes for values mismatched
+	 * with max_{boundary}atomic, physical block size, and
+	 * atomic_granularity itself.
+	 *
+	 * We're really being distrustful by checking unit_max also...
+	 */
+	if (sdkp->atomic_granularity > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_granularity)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_granularity)
+			return;
+	}
+
+	if (sdkp->atomic_alignment > 1) {
+		if (unit_min > 1 && unit_min % sdkp->atomic_alignment)
+			return;
+		if (unit_max > 1 && unit_max % sdkp->atomic_alignment)
+			return;
+	}
+
+	blk_queue_atomic_write_max_bytes(q, max_atomic * logical_block_size);
+	blk_queue_atomic_write_unit_min_bytes(q, unit_min * logical_block_size);
+	blk_queue_atomic_write_unit_max_bytes(q, unit_max * logical_block_size);
+	blk_queue_atomic_write_boundary_bytes(q, 0);
+}
+
 static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
 		bool unmap)
 {
@@ -1208,6 +1267,26 @@ static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd)
 	return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1;
 }
 
+static blk_status_t sd_setup_atomic_cmnd(struct scsi_cmnd *cmd,
+					sector_t lba, unsigned int nr_blocks,
+					bool boundary, unsigned char flags)
+{
+	cmd->cmd_len  = 16;
+	cmd->cmnd[0]  = WRITE_ATOMIC_16;
+	cmd->cmnd[1]  = flags;
+	put_unaligned_be64(lba, &cmd->cmnd[2]);
+	put_unaligned_be16(nr_blocks, &cmd->cmnd[12]);
+	if (boundary)
+		put_unaligned_be16(nr_blocks, &cmd->cmnd[10]);
+	else
+		put_unaligned_be16(0, &cmd->cmnd[10]);
+	cmd->cmnd[14] = 0;
+	cmd->cmnd[15] = 0;
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 {
 	struct request *rq = scsi_cmd_to_rq(cmd);
@@ -1279,6 +1358,10 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
 		ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
+	} else if (rq->cmd_flags & REQ_ATOMIC && write) {
+		ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+				sdkp->use_atomic_write_boundary,
+				protect | fua);
 	} else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) {
 		ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks,
 					 protect | fua, dld);
@@ -3220,7 +3303,7 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 		sdkp->max_ws_blocks = (u32)get_unaligned_be64(&vpd->data[36]);
 
 		if (!sdkp->lbpme)
-			goto out;
+			goto read_atomics;
 
 		lba_count = get_unaligned_be32(&vpd->data[20]);
 		desc_count = get_unaligned_be32(&vpd->data[24]);
@@ -3251,6 +3334,14 @@ static void sd_read_block_limits(struct scsi_disk *sdkp)
 			else
 				sd_config_discard(sdkp, SD_LBP_DISABLE);
 		}
+read_atomics:
+		sdkp->max_atomic = get_unaligned_be32(&vpd->data[44]);
+		sdkp->atomic_alignment = get_unaligned_be32(&vpd->data[48]);
+		sdkp->atomic_granularity = get_unaligned_be32(&vpd->data[52]);
+		sdkp->max_atomic_with_boundary = get_unaligned_be32(&vpd->data[56]);
+		sdkp->max_atomic_boundary = get_unaligned_be32(&vpd->data[60]);
+
+		sd_config_atomic(sdkp);
 	}
 
  out:
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 5c4285a582b2..bc376ebb37ac 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -121,6 +121,13 @@ struct scsi_disk {
 	u32		max_unmap_blocks;
 	u32		unmap_granularity;
 	u32		unmap_alignment;
+
+	u32		max_atomic;
+	u32		atomic_alignment;
+	u32		atomic_granularity;
+	u32		max_atomic_with_boundary;
+	u32		max_atomic_boundary;
+
 	u32		index;
 	unsigned int	physical_block_size;
 	unsigned int	max_medium_access_timeouts;
@@ -154,6 +161,7 @@ struct scsi_disk {
 	unsigned	security : 1;
 	unsigned	ignore_medium_access_errors : 1;
 	unsigned	rscs : 1; /* reduced stream control support */
+	unsigned	use_atomic_write_boundary : 1;
 };
 #define to_scsi_disk(obj) container_of(obj, struct scsi_disk, disk_dev)
 
diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
index 843106e1109f..70e1262b2e20 100644
--- a/include/scsi/scsi_proto.h
+++ b/include/scsi/scsi_proto.h
@@ -120,6 +120,7 @@
 #define WRITE_SAME_16	      0x93
 #define ZBC_OUT		      0x94
 #define ZBC_IN		      0x95
+#define WRITE_ATOMIC_16	0x9c
 #define SERVICE_ACTION_BIDIRECTIONAL 0x9d
 #define SERVICE_ACTION_IN_16  0x9e
 #define SERVICE_ACTION_OUT_16 0x9f
diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
index 8e2d9b1b0e77..05f1945ed204 100644
--- a/include/trace/events/scsi.h
+++ b/include/trace/events/scsi.h
@@ -102,6 +102,7 @@
 		scsi_opcode_name(WRITE_32),			\
 		scsi_opcode_name(WRITE_SAME_32),		\
 		scsi_opcode_name(ATA_16),			\
+		scsi_opcode_name(WRITE_ATOMIC_16),		\
 		scsi_opcode_name(ATA_12))
 
 #define scsi_hostbyte_name(result)	{ result, #result }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v6 09/10] scsi: scsi_debug: Atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (7 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 08/10] scsi: sd: Atomic " John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-03-26 13:38 ` [PATCH v6 10/10] nvme: " John Garry
  2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
  10 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, John Garry

Add initial support for atomic writes.

As is the standard method, feed device properties via module params,
those being:
- atomic_wr (enable atomic writes)
- atomic_wr_max_length
- atomic_wr_align
- atomic_wr_gran
- atomic_wr_max_length_bndry
- atomic_wr_max_bndry

These just match sbc4r22 section 6.6.4 - Block limits VPD page.
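
For example, to enable atomic write support with limits equivalent to
the defaults (values shown are illustrative):

modprobe scsi_debug atomic_wr=1 atomic_wr_max_length=8192 atomic_wr_gran=2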

We just support WRITE ATOMIC (16).

The major change in the driver is how we lock the device for RW accesses.

Currently the driver uses a per-device lock for accessing device metadata
and "media" data (calls to do_device_access()) atomically for the duration
of the whole read/write command.

This does not suit verifying atomic writes: currently all reads/writes
are atomic anyway, so using atomic writes would prove nothing.

Change the device access model so that regular writes are atomic only on
a per-sector basis, while reads and atomic writes are fully atomic.

As mentioned, since accessing metadata and device media is atomic,
continue to have regular writes involving metadata - like discard or PI -
as atomic. We can improve this later.

Currently we only support the model where overlapping ongoing reads or
writes wait for the current access to complete before commencing an
atomic write. This is described in section 4.29.3.2 of the SBC. However,
we simplify things and wait for all accesses to complete (when issuing
an atomic write).
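
In outline, the mapping from access type to lock usage is (a sketch of
this patch's helpers, not verbatim code):

atomic write:       sdeb_data_lock(sip, true)   /* macc_data_lck, writer */
regular read/write: sdeb_data_lock(sip, false)  /* macc_data_lck, reader */
per-sector copy:    sdeb_data_sector_lock(sip, do_write)
metadata access:    sdeb_meta_read_lock(sip) / sdeb_meta_write_lock(sip)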

Signed-off-by: John Garry <[email protected]>
---
 drivers/scsi/scsi_debug.c | 588 +++++++++++++++++++++++++++++---------
 1 file changed, 454 insertions(+), 134 deletions(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 7f340a59fdc5..fcc9640fa18a 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -69,6 +69,8 @@ static const char *sdebug_version_date = "20210520";
 
 /* Additional Sense Code (ASC) */
 #define NO_ADDITIONAL_SENSE 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASC 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23
 #define LOGICAL_UNIT_NOT_READY 0x4
 #define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8
 #define UNRECOVERED_READ_ERR 0x11
@@ -103,6 +105,7 @@ static const char *sdebug_version_date = "20210520";
 #define READ_BOUNDARY_ASCQ 0x7
 #define ATTEMPT_ACCESS_GAP 0x9
 #define INSUFF_ZONE_ASCQ 0xe
+/* see drivers/scsi/sense_codes.h */
 
 /* Additional Sense Code Qualifier (ASCQ) */
 #define ACK_NAK_TO 0x3
@@ -152,6 +155,12 @@ static const char *sdebug_version_date = "20210520";
 #define DEF_VIRTUAL_GB   0
 #define DEF_VPD_USE_HOSTNO 1
 #define DEF_WRITESAME_LENGTH 0xFFFF
+#define DEF_ATOMIC_WR 0
+#define DEF_ATOMIC_WR_MAX_LENGTH 8192
+#define DEF_ATOMIC_WR_ALIGN 2
+#define DEF_ATOMIC_WR_GRAN 2
+#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
+#define DEF_ATOMIC_WR_MAX_BNDRY 128
 #define DEF_STRICT 0
 #define DEF_STATISTICS false
 #define DEF_SUBMIT_QUEUES 1
@@ -374,7 +383,9 @@ struct sdebug_host_info {
 
 /* There is an xarray of pointers to this struct's objects, one per host */
 struct sdeb_store_info {
-	rwlock_t macc_lck;	/* for atomic media access on this store */
+	rwlock_t macc_data_lck;	/* for media data access on this store */
+	rwlock_t macc_meta_lck;	/* for atomic media meta access on this store */
+	rwlock_t macc_sector_lck;	/* per-sector media data access on this store */
 	u8 *storep;		/* user data storage (ram) */
 	struct t10_pi_tuple *dif_storep; /* protection info */
 	void *map_storep;	/* provisioning map */
@@ -398,12 +409,20 @@ struct sdebug_defer {
 	enum sdeb_defer_type defer_t;
 };
 
+struct sdebug_device_access_info {
+	bool atomic_write;
+	u64 lba;
+	u32 num;
+	struct scsi_cmnd *self;
+};
+
 struct sdebug_queued_cmd {
 	/* corresponding bit set in in_use_bm[] in owning struct sdebug_queue
 	 * instance indicates this slot is in use.
 	 */
 	struct sdebug_defer sd_dp;
 	struct scsi_cmnd *scmd;
+	struct sdebug_device_access_info *i;
 };
 
 struct sdebug_scsi_cmd {
@@ -463,7 +482,8 @@ enum sdeb_opcode_index {
 	SDEB_I_PRE_FETCH = 29,		/* 10, 16 */
 	SDEB_I_ZONE_OUT = 30,		/* 0x94+SA; includes no data xfer */
 	SDEB_I_ZONE_IN = 31,		/* 0x95+SA; all have data-in */
-	SDEB_I_LAST_ELEM_P1 = 32,	/* keep this last (previous + 1) */
+	SDEB_I_ATOMIC_WRITE_16 = 32,
+	SDEB_I_LAST_ELEM_P1 = 33,	/* keep this last (previous + 1) */
 };
 
 
@@ -497,7 +517,8 @@ static const unsigned char opcode_ind_arr[256] = {
 	0, 0, 0, SDEB_I_VERIFY,
 	SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, SDEB_I_WRITE_SAME,
 	SDEB_I_ZONE_OUT, SDEB_I_ZONE_IN, 0, 0,
-	0, 0, 0, 0, 0, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
+	0, 0, 0, 0,
+	SDEB_I_ATOMIC_WRITE_16, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
 /* 0xa0; 0xa0->0xbf: 12 byte cdbs */
 	SDEB_I_REPORT_LUNS, SDEB_I_ATA_PT, 0, SDEB_I_MAINT_IN,
 	     SDEB_I_MAINT_OUT, 0, 0, 0,
@@ -547,6 +568,7 @@ static int resp_write_buffer(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_sync_cache(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_pre_fetch(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_report_zones(struct scsi_cmnd *, struct sdebug_dev_info *);
+static int resp_atomic_write(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_open_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
 static int resp_finish_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
@@ -788,6 +810,11 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
 	    resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
 		{16,  0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
 		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
+/* 31 */
+	{0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO,
+	{0, WRITE_ATOMIC_16, 0x0, F_D_OUT | FF_MEDIA_IO,
+		{16,  0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
 /* sentinel */
 	{0xff, 0, 0, 0, NULL, NULL,		/* terminating element */
 	    {0,  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
@@ -835,6 +862,13 @@ static unsigned int sdebug_unmap_granularity = DEF_UNMAP_GRANULARITY;
 static unsigned int sdebug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS;
 static unsigned int sdebug_unmap_max_desc = DEF_UNMAP_MAX_DESC;
 static unsigned int sdebug_write_same_length = DEF_WRITESAME_LENGTH;
+static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;
+static unsigned int sdebug_atomic_wr_max_length = DEF_ATOMIC_WR_MAX_LENGTH;
+static unsigned int sdebug_atomic_wr_align = DEF_ATOMIC_WR_ALIGN;
+static unsigned int sdebug_atomic_wr_gran = DEF_ATOMIC_WR_GRAN;
+static unsigned int sdebug_atomic_wr_max_length_bndry =
+			DEF_ATOMIC_WR_MAX_LENGTH_BNDRY;
+static unsigned int sdebug_atomic_wr_max_bndry = DEF_ATOMIC_WR_MAX_BNDRY;
 static int sdebug_uuid_ctl = DEF_UUID_CTL;
 static bool sdebug_random = DEF_RANDOM;
 static bool sdebug_per_host_store = DEF_PER_HOST_STORE;
@@ -1188,6 +1222,11 @@ static inline bool scsi_debug_lbp(void)
 		(sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);
 }
 
+static inline bool scsi_debug_atomic_write(void)
+{
+	return sdebug_fake_rw == 0 && sdebug_atomic_wr;
+}
+
 static void *lba2fake_store(struct sdeb_store_info *sip,
 			    unsigned long long lba)
 {
@@ -1815,6 +1854,14 @@ static int inquiry_vpd_b0(unsigned char *arr)
 	/* Maximum WRITE SAME Length */
 	put_unaligned_be64(sdebug_write_same_length, &arr[32]);
 
+	if (sdebug_atomic_wr) {
+		put_unaligned_be32(sdebug_atomic_wr_max_length, &arr[40]);
+		put_unaligned_be32(sdebug_atomic_wr_align, &arr[44]);
+		put_unaligned_be32(sdebug_atomic_wr_gran, &arr[48]);
+		put_unaligned_be32(sdebug_atomic_wr_max_length_bndry, &arr[52]);
+		put_unaligned_be32(sdebug_atomic_wr_max_bndry, &arr[56]);
+	}
+
 	return 0x3c; /* Mandatory page length for Logical Block Provisioning */
 }
 
@@ -3377,16 +3424,238 @@ static inline struct sdeb_store_info *devip2sip(struct sdebug_dev_info *devip,
 	return xa_load(per_store_ap, devip->sdbg_host->si_idx);
 }
 
+static inline void
+sdeb_read_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		read_lock(lock);
+}
+
+static inline void
+sdeb_read_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		read_unlock(lock);
+}
+
+static inline void
+sdeb_write_lock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__acquire(lock);
+	else
+		write_lock(lock);
+}
+
+static inline void
+sdeb_write_unlock(rwlock_t *lock)
+{
+	if (sdebug_no_rwlock)
+		__release(lock);
+	else
+		write_unlock(lock);
+}
+
+static inline void
+sdeb_data_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_sector_read_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_read_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_read_unlock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_lock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_unlock(struct sdeb_store_info *sip)
+{
+	BUG_ON(!sip);
+
+	sdeb_write_unlock(&sip->macc_sector_lck);
+}
+
+/*
+ * Atomic locking:
+ * We simplify the atomic model to allow only 1x atomic write and many non-
+ * atomic reads or writes for all LBAs.
+ *
+ * A RW lock has a similar behaviour:
+ * Only 1x writer and many readers.
+ *
+ * So use a RW lock for per-device read and write locking:
+ * An atomic access grabs the lock as a writer and non-atomic grabs the lock
+ * as a reader.
+ */
+
+static inline void
+sdeb_data_lock(struct sdeb_store_info *sip, bool atomic)
+{
+	if (atomic)
+		sdeb_data_write_lock(sip);
+	else
+		sdeb_data_read_lock(sip);
+}
+
+static inline void
+sdeb_data_unlock(struct sdeb_store_info *sip, bool atomic)
+{
+	if (atomic)
+		sdeb_data_write_unlock(sip);
+	else
+		sdeb_data_read_unlock(sip);
+}
+
+/* Allow many reads but only 1x write per sector */
+static inline void
+sdeb_data_sector_lock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_lock(sip);
+	else
+		sdeb_data_sector_read_lock(sip);
+}
+
+static inline void
+sdeb_data_sector_unlock(struct sdeb_store_info *sip, bool do_write)
+{
+	if (do_write)
+		sdeb_data_sector_write_unlock(sip);
+	else
+		sdeb_data_sector_read_unlock(sip);
+}
+
+static inline void
+sdeb_meta_read_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_lock(&sip->macc_meta_lck);
+		else
+			read_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_read_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			read_unlock(&sip->macc_meta_lck);
+		else
+			read_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_lock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__acquire(&sip->macc_meta_lck);
+		else
+			__acquire(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_lock(&sip->macc_meta_lck);
+		else
+			write_lock(&sdeb_fake_rw_lck);
+	}
+}
+
+static inline void
+sdeb_meta_write_unlock(struct sdeb_store_info *sip)
+{
+	if (sdebug_no_rwlock) {
+		if (sip)
+			__release(&sip->macc_meta_lck);
+		else
+			__release(&sdeb_fake_rw_lck);
+	} else {
+		if (sip)
+			write_unlock(&sip->macc_meta_lck);
+		else
+			write_unlock(&sdeb_fake_rw_lck);
+	}
+}
+
 /* Returns number of bytes copied or -1 if error. */
 static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
-			    u32 sg_skip, u64 lba, u32 num, bool do_write,
-			    u8 group_number)
+			    u32 sg_skip, u64 lba, u32 num, u8 group_number,
+			    bool do_write, bool atomic)
 {
 	int ret;
-	u64 block, rest = 0;
+	u64 block;
 	enum dma_data_direction dir;
 	struct scsi_data_buffer *sdb = &scp->sdb;
 	u8 *fsp;
+	int i;
+
+	/*
+	 * Even though reads are inherently atomic (in this driver), we expect
+	 * the atomic flag only for writes.
+	 */
+	if (!do_write && atomic)
+		return -1;
 
 	if (do_write) {
 		dir = DMA_TO_DEVICE;
@@ -3406,21 +3675,26 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
 	fsp = sip->storep;
 
 	block = do_div(lba, sdebug_store_sectors);
-	if (block + num > sdebug_store_sectors)
-		rest = block + num - sdebug_store_sectors;
 
-	ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
+	/* Only allow 1x atomic write or multiple non-atomic writes at any given time */
+	sdeb_data_lock(sip, atomic);
+	for (i = 0; i < num; i++) {
+		/* We shouldn't need to lock for atomic writes, but do it anyway */
+		sdeb_data_sector_lock(sip, do_write);
+		ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
 		   fsp + (block * sdebug_sector_size),
-		   (num - rest) * sdebug_sector_size, sg_skip, do_write);
-	if (ret != (num - rest) * sdebug_sector_size)
-		return ret;
-
-	if (rest) {
-		ret += sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
-			    fsp, rest * sdebug_sector_size,
-			    sg_skip + ((num - rest) * sdebug_sector_size),
-			    do_write);
+		   sdebug_sector_size, sg_skip, do_write);
+		sdeb_data_sector_unlock(sip, do_write);
+		if (ret != sdebug_sector_size) {
+			ret += (i * sdebug_sector_size);
+			break;
+		}
+		sg_skip += sdebug_sector_size;
+		if (++block >= sdebug_store_sectors)
+			block = 0;
 	}
+	/* Don't let the full-size assignment mask a short copy above */
+	if (i == num)
+		ret = num * sdebug_sector_size;
+	sdeb_data_unlock(sip, atomic);
 
 	return ret;
 }
@@ -3596,70 +3870,6 @@ static int prot_verify_read(struct scsi_cmnd *scp, sector_t start_sec,
 	return ret;
 }
 
-static inline void
-sdeb_read_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_lock(&sip->macc_lck);
-		else
-			read_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_read_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			read_unlock(&sip->macc_lck);
-		else
-			read_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_lock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__acquire(&sip->macc_lck);
-		else
-			__acquire(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_lock(&sip->macc_lck);
-		else
-			write_lock(&sdeb_fake_rw_lck);
-	}
-}
-
-static inline void
-sdeb_write_unlock(struct sdeb_store_info *sip)
-{
-	if (sdebug_no_rwlock) {
-		if (sip)
-			__release(&sip->macc_lck);
-		else
-			__release(&sdeb_fake_rw_lck);
-	} else {
-		if (sip)
-			write_unlock(&sip->macc_lck);
-		else
-			write_unlock(&sdeb_fake_rw_lck);
-	}
-}
-
 static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 {
 	bool check_prot;
@@ -3669,6 +3879,7 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case READ_16:
@@ -3727,6 +3938,10 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		atomic_set(&sdeb_inject_pending, 0);
 	}
 
+	/*
+	 * When checking device access params, for reads we only check data
+	 * versus what is set at init time, so no need to lock.
+	 */
 	ret = check_device_access_params(scp, lba, num, false);
 	if (ret)
 		return ret;
@@ -3746,29 +3961,33 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)))  {
+		sdeb_meta_read_lock(sip);
+		meta_data_locked = true;
+	}
 
 	/* DIX + T10 DIF */
 	if (unlikely(sdebug_dix && scsi_prot_sg_count(scp))) {
 		switch (prot_verify_read(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			} else if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_read_unlock(sip);
+				sdeb_meta_read_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			}
@@ -3776,8 +3995,9 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, false, 0);
-	sdeb_read_unlock(sip);
+	ret = do_device_access(sip, scp, 0, lba, num, 0, false, false);
+	if (meta_data_locked)
+		sdeb_meta_read_unlock(sip);
 	if (unlikely(ret == -1))
 		return DID_ERROR << 16;
 
@@ -3967,6 +4187,7 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	u64 lba;
 	struct sdeb_store_info *sip = devip2sip(devip, true);
 	u8 *cmd = scp->cmnd;
+	bool meta_data_locked = false;
 
 	switch (cmd[0]) {
 	case WRITE_16:
@@ -4025,10 +4246,17 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 				    "to DIF device\n");
 	}
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) ||
+	    (sdebug_dix && scsi_prot_sg_count(scp)) ||
+	    scsi_debug_lbp())  {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
+
 	ret = check_device_access_params(scp, lba, num, true);
 	if (ret) {
-		sdeb_write_unlock(sip);
+		if (meta_data_locked)
+			sdeb_meta_write_unlock(sip);
 		return ret;
 	}
 
@@ -4037,22 +4265,22 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		switch (prot_verify_write(scp, lba, num, ei_lba)) {
 		case 1: /* Guard tag error */
 			if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
 				return check_condition_result;
 			}
 			break;
 		case 3: /* Reference tag error */
 			if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
 				return illegal_condition_result;
 			} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
-				sdeb_write_unlock(sip);
+				sdeb_meta_write_unlock(sip);
 				mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
 				return check_condition_result;
 			}
@@ -4060,13 +4288,16 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		}
 	}
 
-	ret = do_device_access(sip, scp, 0, lba, num, true, group);
+	ret = do_device_access(sip, scp, 0, lba, num, group, true, false);
 	if (unlikely(scsi_debug_lbp()))
 		map_region(sip, lba, num);
+
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
-	sdeb_write_unlock(sip);
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+
 	if (unlikely(-1 == ret))
 		return DID_ERROR << 16;
 	else if (unlikely(sdebug_verbose &&
@@ -4176,7 +4407,8 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 		goto err_out;
 	}
 
-	sdeb_write_lock(sip);
+	/* Just keep it simple and always lock for now */
+	sdeb_meta_write_lock(sip);
 	sg_off = lbdof_blen;
 	/* Spec says Buffer xfer Length field in number of LBs in dout */
 	cum_lb = 0;
@@ -4219,7 +4451,11 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 			}
 		}
 
-		ret = do_device_access(sip, scp, sg_off, lba, num, true, group);
+		/*
+		 * Write ranges atomically to keep as close to pre-atomic
+		 * writes behaviour as possible.
+		 */
+		ret = do_device_access(sip, scp, sg_off, lba, num, group, true, true);
 		/* If ZBC zone then bump its write pointer */
 		if (sdebug_dev_is_zoned(devip))
 			zbc_inc_wp(devip, lba, num);
@@ -4258,7 +4494,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
 	}
 	ret = 0;
 err_out_unlock:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 err_out:
 	kfree(lrdp);
 	return ret;
@@ -4277,14 +4513,16 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 						scp->device->hostdata, true);
 	u8 *fs1p;
 	u8 *fsp;
+	bool meta_data_locked = false;
 
-	sdeb_write_lock(sip);
+	if (sdebug_dev_is_zoned(devip) || scsi_debug_lbp()) {
+		sdeb_meta_write_lock(sip);
+		meta_data_locked = true;
+	}
 
 	ret = check_device_access_params(scp, lba, num, true);
-	if (ret) {
-		sdeb_write_unlock(sip);
-		return ret;
-	}
+	if (ret)
+		goto out;
 
 	if (unmap && scsi_debug_lbp()) {
 		unmap_region(sip, lba, num);
@@ -4295,6 +4533,7 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* if ndob then zero 1 logical block, else fetch 1 logical block */
 	fsp = sip->storep;
 	fs1p = fsp + (block * lb_size);
+	sdeb_data_write_lock(sip);
 	if (ndob) {
 		memset(fs1p, 0, lb_size);
 		ret = 0;
@@ -4302,8 +4541,8 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 		ret = fetch_to_dev_buffer(scp, fs1p, lb_size);
 
 	if (-1 == ret) {
-		sdeb_write_unlock(sip);
-		return DID_ERROR << 16;
+		sdeb_data_write_unlock(sip);
+		ret = DID_ERROR << 16;
+		goto out;
 	} else if (sdebug_verbose && !ndob && (ret < lb_size))
 		sdev_printk(KERN_INFO, scp->device,
 			    "%s: %s: lb size=%u, IO sent=%d bytes\n",
@@ -4320,10 +4559,12 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
 	/* If ZBC zone then bump its write pointer */
 	if (sdebug_dev_is_zoned(devip))
 		zbc_inc_wp(devip, lba, num);
+	sdeb_data_write_unlock(sip);
+	ret = 0;
 out:
-	sdeb_write_unlock(sip);
-
-	return 0;
+	if (meta_data_locked)
+		sdeb_meta_write_unlock(sip);
+	return ret;
 }
 
 static int resp_write_same_10(struct scsi_cmnd *scp,
@@ -4466,25 +4707,30 @@ static int resp_comp_write(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
-
 	ret = do_dout_fetch(scp, dnum, arr);
 	if (ret == -1) {
 		retval = DID_ERROR << 16;
-		goto cleanup;
+		goto cleanup_free;
 	} else if (sdebug_verbose && (ret < (dnum * lb_size)))
 		sdev_printk(KERN_INFO, scp->device, "%s: compare_write: cdb "
 			    "indicated=%u, IO sent=%d bytes\n", my_name,
 			    dnum * lb_size, ret);
+
+	sdeb_data_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 	if (!comp_write_worker(sip, lba, num, arr, false)) {
 		mk_sense_buffer(scp, MISCOMPARE, MISCOMPARE_VERIFY_ASC, 0);
 		retval = check_condition_result;
-		goto cleanup;
+		goto cleanup_unlock;
 	}
+
+	/* Cover sip->map_storep (which map_region() sets) with the data lock */
 	if (scsi_debug_lbp())
 		map_region(sip, lba, num);
-cleanup:
-	sdeb_write_unlock(sip);
+cleanup_unlock:
+	sdeb_meta_write_unlock(sip);
+	sdeb_data_write_unlock(sip);
+cleanup_free:
 	kfree(arr);
 	return retval;
 }
@@ -4528,7 +4774,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	desc = (void *)&buf[8];
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	for (i = 0 ; i < descriptors ; i++) {
 		unsigned long long lba = get_unaligned_be64(&desc[i].lba);
@@ -4544,7 +4790,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 	ret = 0;
 
 out:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	kfree(buf);
 
 	return ret;
@@ -4702,12 +4948,13 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
 		rest = block + nblks - sdebug_store_sectors;
 
 	/* Try to bring the PRE-FETCH range into CPU's cache */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 	prefetch_range(fsp + (sdebug_sector_size * block),
 		       (nblks - rest) * sdebug_sector_size);
 	if (rest)
 		prefetch_range(fsp, rest * sdebug_sector_size);
-	sdeb_read_unlock(sip);
+
+	sdeb_data_read_unlock(sip);
 fini:
 	if (cmd[1] & 0x2)
 		res = SDEG_RES_IMMED_MASK;
@@ -4866,7 +5113,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 	/* Not changing store, so only need read access */
-	sdeb_read_lock(sip);
+	sdeb_data_read_lock(sip);
 
 	ret = do_dout_fetch(scp, a_num, arr);
 	if (ret == -1) {
@@ -4888,7 +5135,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		goto cleanup;
 	}
 cleanup:
-	sdeb_read_unlock(sip);
+	sdeb_data_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
@@ -4934,7 +5181,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_read_lock(sip);
+	sdeb_meta_read_lock(sip);
 
 	desc = arr + 64;
 	for (lba = zs_lba; lba < sdebug_capacity;
@@ -5032,11 +5279,70 @@ static int resp_report_zones(struct scsi_cmnd *scp,
 	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
 
 fini:
-	sdeb_read_unlock(sip);
+	sdeb_meta_read_unlock(sip);
 	kfree(arr);
 	return ret;
 }
 
+static int resp_atomic_write(struct scsi_cmnd *scp,
+			     struct sdebug_dev_info *devip)
+{
+	struct sdeb_store_info *sip;
+	u8 *cmd = scp->cmnd;
+	u16 boundary, len;
+	u64 lba, lba_tmp;
+	int ret;
+
+	if (!scsi_debug_atomic_write()) {
+		mk_sense_invalid_opcode(scp);
+		return check_condition_result;
+	}
+
+	sip = devip2sip(devip, true);
+
+	lba = get_unaligned_be64(cmd + 2);
+	boundary = get_unaligned_be16(cmd + 10);
+	len = get_unaligned_be16(cmd + 12);
+
+	lba_tmp = lba;
+	if (sdebug_atomic_wr_align &&
+	    do_div(lba_tmp, sdebug_atomic_wr_align)) {
+		/* Does not meet alignment requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (sdebug_atomic_wr_gran && len % sdebug_atomic_wr_gran) {
+		/* Does not meet granularity requirement */
+		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+		return check_condition_result;
+	}
+
+	if (boundary > 0) {
+		if (boundary > sdebug_atomic_wr_max_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+
+		if (len > sdebug_atomic_wr_max_length_bndry) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	} else {
+		if (len > sdebug_atomic_wr_max_length) {
+			mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+			return check_condition_result;
+		}
+	}
+
+	ret = do_device_access(sip, scp, 0, lba, len, 0, true, true);
+	if (unlikely(ret == -1))
+		return DID_ERROR << 16;
+	if (unlikely(ret != len * sdebug_sector_size))
+		return DID_ERROR << 16;
+	return 0;
+}
+
 /* Logic transplanted from tcmu-runner, file_zbc.c */
 static void zbc_open_all(struct sdebug_dev_info *devip)
 {
@@ -5063,8 +5369,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		mk_sense_invalid_opcode(scp);
 		return check_condition_result;
 	}
-
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		/* Check if all closed zones can be open */
@@ -5113,7 +5418,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_open_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5140,7 +5445,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_close_all(devip);
@@ -5169,7 +5474,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
 
 	zbc_close_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5212,7 +5517,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_finish_all(devip);
@@ -5241,7 +5546,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
 
 	zbc_finish_zone(devip, zsp, true);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -5292,7 +5597,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 		return check_condition_result;
 	}
 
-	sdeb_write_lock(sip);
+	sdeb_meta_write_lock(sip);
 
 	if (all) {
 		zbc_rwp_all(devip);
@@ -5320,7 +5625,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
 
 	zbc_rwp_zone(devip, zsp);
 fini:
-	sdeb_write_unlock(sip);
+	sdeb_meta_write_unlock(sip);
 	return res;
 }
 
@@ -6284,6 +6589,7 @@ module_param_named(lbprz, sdebug_lbprz, int, S_IRUGO);
 module_param_named(lbpu, sdebug_lbpu, int, S_IRUGO);
 module_param_named(lbpws, sdebug_lbpws, int, S_IRUGO);
 module_param_named(lbpws10, sdebug_lbpws10, int, S_IRUGO);
+module_param_named(atomic_wr, sdebug_atomic_wr, int, S_IRUGO);
 module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
@@ -6318,6 +6624,11 @@ module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO);
 module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO);
 module_param_named(unmap_max_blocks, sdebug_unmap_max_blocks, int, S_IRUGO);
 module_param_named(unmap_max_desc, sdebug_unmap_max_desc, int, S_IRUGO);
+module_param_named(atomic_wr_max_length, sdebug_atomic_wr_max_length, int, S_IRUGO);
+module_param_named(atomic_wr_align, sdebug_atomic_wr_align, int, S_IRUGO);
+module_param_named(atomic_wr_gran, sdebug_atomic_wr_gran, int, S_IRUGO);
+module_param_named(atomic_wr_max_length_bndry, sdebug_atomic_wr_max_length_bndry, int, S_IRUGO);
+module_param_named(atomic_wr_max_bndry, sdebug_atomic_wr_max_bndry, int, S_IRUGO);
 module_param_named(uuid_ctl, sdebug_uuid_ctl, int, S_IRUGO);
 module_param_named(virtual_gb, sdebug_virtual_gb, int, S_IRUGO | S_IWUSR);
 module_param_named(vpd_use_hostno, sdebug_vpd_use_hostno, int,
@@ -6361,6 +6672,7 @@ MODULE_PARM_DESC(lbprz,
 MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)");
 MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)");
 MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)");
+MODULE_PARM_DESC(atomic_wr, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)");
 MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
@@ -6392,6 +6704,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
 MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
 MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
 MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
+MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
+MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
+MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
 MODULE_PARM_DESC(uuid_ctl,
 		 "1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
 MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
@@ -7563,6 +7880,7 @@ static int __init scsi_debug_init(void)
 			return -EINVAL;
 		}
 	}
+
 	xa_init_flags(per_store_ap, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
 	if (want_store) {
 		idx = sdebug_add_store();
@@ -7770,7 +8088,9 @@ static int sdebug_add_store(void)
 			map_region(sip, 0, 2);
 	}
 
-	rwlock_init(&sip->macc_lck);
+	rwlock_init(&sip->macc_data_lck);
+	rwlock_init(&sip->macc_meta_lck);
+	rwlock_init(&sip->macc_sector_lck);
 	return (int)n_idx;
 err:
 	sdebug_erase_store((int)n_idx, sip);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v6 10/10] nvme: Atomic write support
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (8 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 09/10] scsi: scsi_debug: " John Garry
@ 2024-03-26 13:38 ` John Garry
  2024-04-11  0:29   ` Luis Chamberlain
  2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
  10 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-03-26 13:38 UTC (permalink / raw)
  To: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, Alan Adamson, John Garry

From: Alan Adamson <[email protected]>

Add support to set block layer request_queue atomic write limits. The
limits will be derived from either the namespace or controller atomic
parameters.

NVMe atomic-related parameters are grouped into "normal" and "power-fail"
(or PF) classes of parameters. For atomic write support, only PF parameters
are of interest. The "normal" parameters are concerned with racing reads
and writes (which also applies to PF). See NVM Command Set Specification
Revision 1.0d section 2.1.4 for reference.

Whether to use per namespace or controller atomic parameters is decided by
NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data
Structure, NVM Command Set.

NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
are provided for a write which straddles this per-lba space boundary. The
block layer merging policy is such that no merges may occur in which the
resultant request would straddle such a boundary.
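
As a worked example (illustrative numbers): with a 128KiB boundary, a
32KiB write at offset 112KiB would straddle the boundary at 128KiB and
so may not be issued as one atomic write, whereas the same 32KiB write
at offset 96KiB ends exactly at 128KiB and is fine.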

Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from
the atomic boundary rule. In addition, again unlike SCSI, there is no
dedicated atomic write command - a write which adheres to the atomic size
limit and boundary is implicitly atomic.

If NSFEAT bit 1 is set, the following parameters are of interest:
- NAWUPF (Namespace Atomic Write Unit Power Fail)
- NABSPF (Namespace Atomic Boundary Size Power Fail)
- NABO (Namespace Atomic Boundary Offset)

and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
- atomic_write_max_bytes = NAWUPF
- atomic_write_boundary = NABSPF

In the unlikely scenario that NABO is non-zero, atomic writes will not be
supported at all, as dealing with this adds extra complexity. This policy
may change in future.

In all cases, atomic_write_unit_min is set to the logical block size.

If NSFEAT bit 1 is unset, the following parameter is of interest:
- AWUPF (Atomic Write Unit Power Fail)

and we set request_queue limits as follows (a worked example is given
after this list):
- atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
- atomic_write_max_bytes = AWUPF
- atomic_write_boundary = 0
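
As a worked example (illustrative values only): for a 4KiB LBA format
and AWUPF = 15 (a 0's based value), atomic_bs = (1 + 15) * 4096 = 64KiB,
giving atomic_write_unit_min = 4KiB, atomic_write_unit_max =
rounddown_pow_of_two(65536) = 64KiB, atomic_write_max_bytes = 64KiB and
no boundary.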

A new function, nvme_valid_atomic_write(), is also called from the
submission path to verify that a request submitted to the driver will
actually be executed atomically. As mentioned, there is no dedicated NVMe
atomic write command (which could otherwise error for a command which
exceeds the controller atomic write limits).

Note on NABSPF:
There seems to be some vagueness in the spec as to whether NABSPF applies
when NSFEAT bit 1 is unset. Figure 97 does not explicitly mention NABSPF
and how it is affected by bit 1. However, Figure 4 does say to check Figure
97 for info about per-namespace parameters, which NABSPF is, so it is
implied. In addition, nvme_update_disk_info() currently checks the
namespace parameter NABO regardless of this bit.

Signed-off-by: Alan Adamson <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
jpg: total rewrite
Signed-off-by: John Garry <[email protected]>
---
 drivers/nvme/host/core.c | 49 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 943d72bdd794..7d3247be5cb9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -943,6 +943,30 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
+static bool nvme_valid_atomic_write(struct request *req)
+{
+	struct request_queue *q = req->q;
+	u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
+
+	if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
+		return false;
+
+	if (boundary_bytes) {
+		u64 mask = boundary_bytes - 1, imask = ~mask;
+		u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
+		u64 end = start + blk_rq_bytes(req) - 1;
+
+		/* If greater then must be crossing a boundary */
+		if (blk_rq_bytes(req) > boundary_bytes)
+			return false;
+
+		if ((start & imask) != (end & imask))
+			return false;
+	}
+
+	return true;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 		struct request *req, struct nvme_command *cmnd,
 		enum nvme_opcode op)
@@ -957,6 +981,12 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 
 	if (req->cmd_flags & REQ_RAHEAD)
 		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
+	/*
+	 * Ensure that nothing has been sent which cannot be executed
+	 * atomically.
+	 */
+	if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
+		return BLK_STS_INVAL;
 
 	cmnd->rw.opcode = op;
 	cmnd->rw.flags = 0;
@@ -1937,6 +1967,23 @@ static void nvme_configure_metadata(struct nvme_ctrl *ctrl,
 	}
 }
 
+
+static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns,
+			struct nvme_id_ns *id, struct queue_limits *lim,
+			u32 bs, u32 atomic_bs)
+{
+	unsigned int boundary = 0;
+
+	if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+		if (le16_to_cpu(id->nabspf))
+			boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+	}
+	lim->atomic_write_hw_max = atomic_bs;
+	lim->atomic_write_hw_boundary = boundary;
+	lim->atomic_write_hw_unit_min = bs;
+	lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs);
+}
+
 static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl)
 {
 	return ctrl->max_hw_sectors / (NVME_CTRL_PAGE_SIZE >> SECTOR_SHIFT) + 1;
@@ -1983,6 +2030,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
 			atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
 		else
 			atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;
+
+		nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs);
 	}
 
 	if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 05/10] block: Add core atomic write support
  2024-03-26 13:38 ` [PATCH v6 05/10] block: Add core atomic write support John Garry
@ 2024-03-26 17:11   ` Randy Dunlap
  2024-04-10 23:34     ` Luis Chamberlain
  0 siblings, 1 reply; 40+ messages in thread
From: Randy Dunlap @ 2024-03-26 17:11 UTC (permalink / raw)
  To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack
  Cc: linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list, willy, Himanshu Madhani

Hi,

On 3/26/24 06:38, John Garry wrote:
> diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
> index 1fe9a553c37b..4c775f4bdefe 100644
> --- a/Documentation/ABI/stable/sysfs-block
> +++ b/Documentation/ABI/stable/sysfs-block
> @@ -21,6 +21,58 @@ Description:
>  		device is offset from the internal allocation unit's
>  		natural alignment.
>  
> +What:		/sys/block/<disk>/atomic_write_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter specifies the maximum atomic write
> +		size reported by the device. This parameter is relevant
> +		for merging of writes, where a merged atomic write
> +		operation must not exceed this number of bytes.
> +		This parameter may be greater to the value in

		                              than

> +		atomic_write_unit_max_bytes as
> +		atomic_write_unit_max_bytes will be rounded down to a
> +		power-of-two and atomic_write_unit_max_bytes may also be
> +		limited by some other queue limits, such as max_segments.
> +		This parameter - along with atomic_write_unit_min_bytes
> +		and atomic_write_unit_max_bytes - will not be larger than
> +		max_hw_sectors_kb, but may be larger than max_sectors_kb.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_min_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter specifies the smallest block which can
> +		be written atomically with an atomic write operation. All
> +		atomic write operations must begin at a
> +		atomic_write_unit_min boundary and must be multiples of
> +		atomic_write_unit_min. This value must be a power-of-two.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_unit_max_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] This parameter defines the largest block which can be
> +		written atomically with an atomic write operation. This
> +		value must be a multiple of atomic_write_unit_min and must
> +		be a power-of-two. This value will not be larger than
> +		atomic_write_max_bytes.
> +
> +
> +What:		/sys/block/<disk>/atomic_write_boundary_bytes
> +Date:		February 2024
> +Contact:	Himanshu Madhani <[email protected]>
> +Description:
> +		[RO] A device may need to internally split I/Os which
> +		straddle a given logical block address boundary. In that
> +		case a single atomic write operation will be processed as
> +		one of more sub-operations which each complete atomically.

		    or

> +		This parameter specifies the size in bytes of the atomic
> +		boundary if one is reported by the device. This value must
> +		be a power-of-two.

-- 
#Randy

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
                   ` (9 preceding siblings ...)
  2024-03-26 13:38 ` [PATCH v6 10/10] nvme: " John Garry
@ 2024-03-27  3:50 ` Matthew Wilcox
  2024-03-27 13:37   ` John Garry
                     ` (2 more replies)
  10 siblings, 3 replies; 40+ messages in thread
From: Matthew Wilcox @ 2024-03-27  3:50 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
> The goal here is to provide an interface that allows applications use
> application-specific block sizes larger than logical block size
> reported by the storage device or larger than filesystem block size as
> reported by stat().
> 
> With this new interface, application blocks will never be torn or
> fractured when written. For a power fail, for each individual application
> block, all or none of the data to be written. A racing atomic write and
> read will mean that the read sees all the old data or all the new data,
> but never a mix of old and new.
> 
> Three new fields are added to struct statx - atomic_write_unit_min,
> atomic_write_unit_max, and atomic_write_segments_max. For each atomic
> individual write, the total length of a write must be a between
> atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
> power-of-2. The write must also be at a natural offset in the file
> wrt the write length. For pwritev2, iovcnt is limited by
> atomic_write_segments_max.
> 
> There has been some discussion on supporting buffered IO and whether the
> API is suitable, like:
> https://lore.kernel.org/linux-nvme/[email protected]/
> 
> Specifically the concern is that supporting a range of sizes of atomic IO
> in the pagecache is complex to support. For this, my idea is that FSes can
> fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
> extent alignment size, which should be easier to support. We may need to
> implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
> have no proposed solution for atomic write buffered IO for bdev file
> operations, but I know of no requirement for this.

The thing is that there's no requirement for an interface as complex as
the one you're proposing here.  I've talked to a few database people
and all they want is to increase the untorn write boundary from "one
disc block" to one database block, typically 8kB or 16kB.

So they would be quite happy with a much simpler interface where they
set the inode block size at inode creation time, and then all writes to
that inode were guaranteed to be untorn.  This would also be simpler to
implement for buffered writes.

Who's asking for this more complex interface?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
@ 2024-03-27 13:37   ` John Garry
  2024-04-04 16:48     ` Matthew Wilcox
  2024-03-27 20:31   ` Dave Chinner
  2024-04-05  6:14   ` Kent Overstreet
  2 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-03-27 13:37 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 27/03/2024 03:50, Matthew Wilcox wrote:
> On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
>> The goal here is to provide an interface that allows applications use
>> application-specific block sizes larger than logical block size
>> reported by the storage device or larger than filesystem block size as
>> reported by stat().
>>
>> With this new interface, application blocks will never be torn or
>> fractured when written. For a power fail, for each individual application
>> block, all or none of the data to be written. A racing atomic write and
>> read will mean that the read sees all the old data or all the new data,
>> but never a mix of old and new.
>>
>> Three new fields are added to struct statx - atomic_write_unit_min,
>> atomic_write_unit_max, and atomic_write_segments_max. For each atomic
>> individual write, the total length of a write must be a between
>> atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
>> power-of-2. The write must also be at a natural offset in the file
>> wrt the write length. For pwritev2, iovcnt is limited by
>> atomic_write_segments_max.
>>
>> There has been some discussion on supporting buffered IO and whether the
>> API is suitable, like:
>> https://lore.kernel.org/linux-nvme/[email protected]/
>>
>> Specifically the concern is that supporting a range of sizes of atomic IO
>> in the pagecache is complex to support. For this, my idea is that FSes can
>> fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
>> extent alignment size, which should be easier to support. We may need to
>> implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
>> have no proposed solution for atomic write buffered IO for bdev file
>> operations, but I know of no requirement for this.
> 
> The thing is that there's no requirement for an interface as complex as
> the one you're proposing here.  I've talked to a few database people
> and all they want is to increase the untorn write boundary from "one
> disc block" to one database block, typically 8kB or 16kB.
> 
> So they would be quite happy with a much simpler interface where they
> set the inode block size at inode creation time,

We want to support untorn writes for bdev file operations - how can we 
set the inode block size there? Currently it is based on logical block size.

> and then all writes to
> that inode were guaranteed to be untorn.  This would also be simpler to
> implement for buffered writes.

We did consider that. Won't that lead to the possibility of breaking 
existing applications which want to do regular unaligned writes to these 
files? We do know that mysql/innodb has a "compressed" mode of operation, 
which involves regular writes to the same file that also wants untorn 
writes.

Furthermore, untorn writes in HW are expensive - for SCSI anyway. Do we 
always want these for such a file?

We saw untorn writes as not being a property of the file or even the 
inode itself, but rather an attribute of the specific IO being issued 
from the userspace application.

> 
> Who's asking for this more complex interface?

It's not a case of someone specifically asking for this interface. This 
is just a proposal to satisfy the userspace requirement to do untorn 
writes in a generic way.

From a user point-of-view, untorn writes for a regular file can be 
enabled for up to a specific size* with the FS_IOC_SETFLAGS API. Then 
they need to follow alignment and size rules for issuing untorn writes, 
but they would always need to do this. In addition, the user may still 
issue regular (tearable) writes to the file.

* I think that we could change this to only allow writes for that 
specific size, which was my proposal for buffered IO.
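
As a concrete sketch of issuing one such untorn write with the
RWF_ATOMIC flag proposed in this series (the device path, sizes and
the flag's numeric value are assumptions, not part of a released
uapi):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/uio.h>

	#ifndef RWF_ATOMIC
	#define RWF_ATOMIC 0x00000040 /* assumed value from this series */
	#endif

	int main(void)
	{
		int fd = open("/dev/nvme0n1", O_RDWR | O_DIRECT);
		struct iovec iov = { .iov_len = 16384 };

		/*
		 * 16384 must lie within [atomic_write_unit_min,
		 * atomic_write_unit_max] from statx(), and the file
		 * offset (0 here) must be naturally aligned to it.
		 */
		if (fd < 0 || posix_memalign(&iov.iov_base, 4096, 16384))
			return 1;
		return pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) == 16384 ? 0 : 1;
	}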

Thanks,
John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
  2024-03-27 13:37   ` John Garry
@ 2024-03-27 20:31   ` Dave Chinner
  2024-04-05 10:20     ` Kent Overstreet
  2024-04-05  6:14   ` Kent Overstreet
  2 siblings, 1 reply; 40+ messages in thread
From: Dave Chinner @ 2024-03-27 20:31 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list

On Wed, Mar 27, 2024 at 03:50:07AM +0000, Matthew Wilcox wrote:
> On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
> > The goal here is to provide an interface that allows applications use
> > application-specific block sizes larger than logical block size
> > reported by the storage device or larger than filesystem block size as
> > reported by stat().
> > 
> > With this new interface, application blocks will never be torn or
> > fractured when written. For a power fail, for each individual application
> > block, all or none of the data to be written. A racing atomic write and
> > read will mean that the read sees all the old data or all the new data,
> > but never a mix of old and new.
> > 
> > Three new fields are added to struct statx - atomic_write_unit_min,
> > atomic_write_unit_max, and atomic_write_segments_max. For each atomic
> > individual write, the total length of a write must be a between
> > atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
> > power-of-2. The write must also be at a natural offset in the file
> > wrt the write length. For pwritev2, iovcnt is limited by
> > atomic_write_segments_max.
> > 
> > There has been some discussion on supporting buffered IO and whether the
> > API is suitable, like:
> > https://lore.kernel.org/linux-nvme/[email protected]/
> > 
> > Specifically the concern is that supporting a range of sizes of atomic IO
> > in the pagecache is complex to support. For this, my idea is that FSes can
> > fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
> > extent alignment size, which should be easier to support. We may need to
> > implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
> > have no proposed solution for atomic write buffered IO for bdev file
> > operations, but I know of no requirement for this.
> 
> The thing is that there's no requirement for an interface as complex as
> the one you're proposing here.  I've talked to a few database people
> and all they want is to increase the untorn write boundary from "one
> disc block" to one database block, typically 8kB or 16kB.
> 
> So they would be quite happy with a much simpler interface where they
> set the inode block size at inode creation time, and then all writes to
> that inode were guaranteed to be untorn.  This would also be simpler to
> implement for buffered writes.

You're conflating filesystem functionality that applications will use
with hardware and block-layer enablement that filesystems and
filesystem utilities need to configure the filesystem in ways that
allow users to make use of atomic write capability of the hardware.

The block layer functionality needs to export everything that the
hardware can do and filesystems will make use of. The actual
application usage and setup of atomic writes at the filesystem/page
cache layer is a separate problem.  i.e. The block layer interfaces
need only support direct IO and expose limits for issuing atomic
direct IO, and nothing more. All the more complex stuff to make it
"easy to use" is filesystem level functionality and completely
outside the scope of this patchset....

-Dave.
-- 
Dave Chinner
[email protected]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-27 13:37   ` John Garry
@ 2024-04-04 16:48     ` Matthew Wilcox
  2024-04-05 10:06       ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Matthew Wilcox @ 2024-04-04 16:48 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Wed, Mar 27, 2024 at 01:37:41PM +0000, John Garry wrote:
> On 27/03/2024 03:50, Matthew Wilcox wrote:
> > On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
> > > The goal here is to provide an interface that allows applications use
> > > application-specific block sizes larger than logical block size
> > > reported by the storage device or larger than filesystem block size as
> > > reported by stat().
> > > 
> > > With this new interface, application blocks will never be torn or
> > > fractured when written. For a power fail, for each individual application
> > > block, all or none of the data to be written. A racing atomic write and
> > > read will mean that the read sees all the old data or all the new data,
> > > but never a mix of old and new.
> > > 
> > > Three new fields are added to struct statx - atomic_write_unit_min,
> > > atomic_write_unit_max, and atomic_write_segments_max. For each atomic
> > > individual write, the total length of a write must be a between
> > > atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
> > > power-of-2. The write must also be at a natural offset in the file
> > > wrt the write length. For pwritev2, iovcnt is limited by
> > > atomic_write_segments_max.
> > > 
> > > There has been some discussion on supporting buffered IO and whether the
> > > API is suitable, like:
> > > https://lore.kernel.org/linux-nvme/[email protected]/
> > > 
> > > Specifically the concern is that supporting a range of sizes of atomic IO
> > > in the pagecache is complex to support. For this, my idea is that FSes can
> > > fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
> > > extent alignment size, which should be easier to support. We may need to
> > > implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
> > > have no proposed solution for atomic write buffered IO for bdev file
> > > operations, but I know of no requirement for this.
> > 
> > The thing is that there's no requirement for an interface as complex as
> > the one you're proposing here.  I've talked to a few database people
> > and all they want is to increase the untorn write boundary from "one
> > disc block" to one database block, typically 8kB or 16kB.
> > 
> > So they would be quite happy with a much simpler interface where they
> > set the inode block size at inode creation time,
> 
> We want to support untorn writes for bdev file operations - how can we set
> the inode block size there? Currently it is based on logical block size.

ioctl(BLKBSZSET), I guess?  That is currently limited to PAGE_SIZE, but
I think we can remove that limitation with the bs>PS patches.
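
A minimal sketch of that route, assuming the PAGE_SIZE cap were lifted
(BLKBSZSET itself exists today; the device name and size are made up):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	int main(void)
	{
		int fd = open("/dev/nvme0n1", O_RDWR);
		int bs = 16384; /* desired untorn write unit */

		if (fd < 0 || ioctl(fd, BLKBSZSET, &bs) < 0)
			perror("BLKBSZSET");
		return 0;
	}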

> > and then all writes to
> > that inode were guaranteed to be untorn.  This would also be simpler to
> > implement for buffered writes.
> 
> We did consider that. Won't that lead to the possibility of breaking
> existing applications which want to do regular unaligned writes to these
> files? We do know that mysql/innodb does have some "compressed" mode of
> operation, which involves regular writes to the same file which wants untorn
> writes.

If you're talking about "regular unaligned buffered writes", then that
won't break.  If you cross a folio boundary, the result may be torn,
but if you're crossing a block boundary you expect that.

> Furthermore, untorn writes in HW are expensive - for SCSI anyway. Do we
> always want these for such a file?

Do untorn writes actually exist in SCSI?  I was under the impression
nobody had actually implemented them in SCSI hardware.

> We saw untorn writes as not being a property of the file or even the inode
> itself, but rather an attribute of the specific IO being issued from the
> userspace application.

The problem is that keeping track of that is expensive for buffered
writes.  It's a model that only works for direct IO.  Arguably we
could make it work for O_SYNC buffered IO, but that'll require some
surgery.

> > Who's asking for this more complex interface?
> 
> It's not a case of someone specifically asking for this interface. This is
> just a proposal to satisfy userspace requirement to do untorn writes in a
> generic way.
> 
> From a user point-of-view, untorn writes for a regular file can be enabled
> for up to a specific size* with FS_IOC_SETFLAGS API. Then they need to
> follow alignment and size rules for issuing untorn writes, but they would
> always need to do this. In addition, the user may still issue regular
> (tearable) writes to the file.
> 
> * I think that we could change this to only allow writes for that specific
> size, which was my proposal for buffered IO.
> 
> Thanks,
> John
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
  2024-03-27 13:37   ` John Garry
  2024-03-27 20:31   ` Dave Chinner
@ 2024-04-05  6:14   ` Kent Overstreet
  2 siblings, 0 replies; 40+ messages in thread
From: Kent Overstreet @ 2024-04-05  6:14 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list

On Wed, Mar 27, 2024 at 03:50:07AM +0000, Matthew Wilcox wrote:
> On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
> > The goal here is to provide an interface that allows applications use
> > application-specific block sizes larger than logical block size
> > reported by the storage device or larger than filesystem block size as
> > reported by stat().
> > 
> > With this new interface, application blocks will never be torn or
> > fractured when written. For a power fail, for each individual application
> > block, all or none of the data to be written. A racing atomic write and
> > read will mean that the read sees all the old data or all the new data,
> > but never a mix of old and new.
> > 
> > Three new fields are added to struct statx - atomic_write_unit_min,
> > atomic_write_unit_max, and atomic_write_segments_max. For each atomic
> > individual write, the total length of a write must be a between
> > atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
> > power-of-2. The write must also be at a natural offset in the file
> > wrt the write length. For pwritev2, iovcnt is limited by
> > atomic_write_segments_max.
> > 
> > There has been some discussion on supporting buffered IO and whether the
> > API is suitable, like:
> > https://lore.kernel.org/linux-nvme/[email protected]/
> > 
> > Specifically the concern is that supporting a range of sizes of atomic IO
> > in the pagecache is complex to support. For this, my idea is that FSes can
> > fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
> > extent alignment size, which should be easier to support. We may need to
> > implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
> > have no proposed solution for atomic write buffered IO for bdev file
> > operations, but I know of no requirement for this.
> 
> The thing is that there's no requirement for an interface as complex as
> the one you're proposing here.  I've talked to a few database people
> and all they want is to increase the untorn write boundary from "one
> disc block" to one database block, typically 8kB or 16kB.
> 
> So they would be quite happy with a much simpler interface where they
> set the inode block size at inode creation time, and then all writes to
> that inode were guaranteed to be untorn.  This would also be simpler to
> implement for buffered writes.
> 
> Who's asking for this more complex interface?

I get the impression the atomic writes stuff has suffered from too
/much/ review and too many maintainers asking for and demanding all
their different must-haves.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-04 16:48     ` Matthew Wilcox
@ 2024-04-05 10:06       ` John Garry
  2024-04-08 17:50         ` Luis Chamberlain
  0 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-04-05 10:06 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 04/04/2024 17:48, Matthew Wilcox wrote:
>>> The thing is that there's no requirement for an interface as complex as
>>> the one you're proposing here.  I've talked to a few database people
>>> and all they want is to increase the untorn write boundary from "one
>>> disc block" to one database block, typically 8kB or 16kB.
>>>
>>> So they would be quite happy with a much simpler interface where they
>>> set the inode block size at inode creation time,
>> We want to support untorn writes for bdev file operations - how can we set
>> the inode block size there? Currently it is based on logical block size.
> ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
> think we can remove that limitation with the bs>PS patches.

We want a consistent interface for bdev and regular files, so that would 
need to work for FSes also. FSes (XFS) work based on a homogeneous inode 
blocksize, which is the SB blocksize.

Furthermore, we would seem to be mixing different concepts here. 
Currently in Linux we say that a logical block size write is atomic. In 
the block layer, we split BIOs on LBS boundaries. iomap creates BIOs 
based on LBS boundaries. But writing a FS block is not always guaranteed 
to be atomic, as far as I'm concerned. So just increasing the inode 
block size / FS block size does not really change anything, in itself.

> 
>>> and then all writes to
>>> that inode were guaranteed to be untorn.  This would also be simpler to
>>> implement for buffered writes.
>> We did consider that. Won't that lead to the possibility of breaking
>> existing applications which want to do regular unaligned writes to these
>> files? We do know that mysql/innodb does have some "compressed" mode of
>> operation, which involves regular writes to the same file which wants untorn
>> writes.
> If you're talking about "regular unaligned buffered writes", then that
> won't break.  If you cross a folio boundary, the result may be torn,
> but if you're crossing a block boundary you expect that.
> 
>> Furthermore, untorn writes in HW are expensive - for SCSI anyway. Do we
>> always want these for such a file?
> Do untorn writes actually exist in SCSI?  I was under the impression
> nobody had actually implemented them in SCSI hardware.

I know that some SCSI targets actually atomically write data in chunks > 
LBS. Obviously atomic vs non-atomic performance is a moot point there, 
as data is implicitly always atomically written.

We actually have an mysql/innodb port of this API working on such a SCSI 
target.

However I am not sure about atomic write support for other SCSI targets.

> 
>> We saw untorn writes as not being a property of the file or even the inode
>> itself, but rather an attribute of the specific IO being issued from the
>> userspace application.
> The problem is that keeping track of that is expensive for buffered
> writes.  It's a model that only works for direct IO.  Arguably we
> could make it work for O_SYNC buffered IO, but that'll require some
> surgery.

To me, O_ATOMIC would be required for buffered atomic write IO, as we 
want fixed-size IO, so that would mean no mixing of atomic and 
non-atomic IO.

Thanks,
John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-03-27 20:31   ` Dave Chinner
@ 2024-04-05 10:20     ` Kent Overstreet
  2024-04-05 10:55       ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Kent Overstreet @ 2024-04-05 10:20 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Matthew Wilcox, John Garry, axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack,
	linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list

On Thu, Mar 28, 2024 at 07:31:45AM +1100, Dave Chinner wrote:
> On Wed, Mar 27, 2024 at 03:50:07AM +0000, Matthew Wilcox wrote:
> > On Tue, Mar 26, 2024 at 01:38:03PM +0000, John Garry wrote:
> > > The goal here is to provide an interface that allows applications use
> > > application-specific block sizes larger than logical block size
> > > reported by the storage device or larger than filesystem block size as
> > > reported by stat().
> > > 
> > > With this new interface, application blocks will never be torn or
> > > fractured when written. For a power fail, for each individual application
> > > block, all or none of the data to be written. A racing atomic write and
> > > read will mean that the read sees all the old data or all the new data,
> > > but never a mix of old and new.
> > > 
> > > Three new fields are added to struct statx - atomic_write_unit_min,
> > > atomic_write_unit_max, and atomic_write_segments_max. For each atomic
> > > individual write, the total length of a write must be a between
> > > atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
> > > power-of-2. The write must also be at a natural offset in the file
> > > wrt the write length. For pwritev2, iovcnt is limited by
> > > atomic_write_segments_max.
> > > 
> > > There has been some discussion on supporting buffered IO and whether the
> > > API is suitable, like:
> > > https://lore.kernel.org/linux-nvme/[email protected]/
> > > 
> > > Specifically the concern is that supporting a range of sizes of atomic IO
> > > in the pagecache is complex to support. For this, my idea is that FSes can
> > > fix atomic_write_unit_min and atomic_write_unit_max at the same size, the
> > > extent alignment size, which should be easier to support. We may need to
> > > implement O_ATOMIC to avoid mixing atomic and non-atomic IOs for this. I
> > > have no proposed solution for atomic write buffered IO for bdev file
> > > operations, but I know of no requirement for this.
> > 
> > The thing is that there's no requirement for an interface as complex as
> > the one you're proposing here.  I've talked to a few database people
> > and all they want is to increase the untorn write boundary from "one
> > disc block" to one database block, typically 8kB or 16kB.
> > 
> > So they would be quite happy with a much simpler interface where they
> > set the inode block size at inode creation time, and then all writes to
> > that inode were guaranteed to be untorn.  This would also be simpler to
> > implement for buffered writes.
> 
> You're conflating filesystem functionality that applications will use
> with hardware and block-layer enablement that filesystems and
> filesystem utilities need to configure the filesystem in ways that
> allow users to make use of atomic write capability of the hardware.
> 
> The block layer functionality needs to export everything that the
> hardware can do and filesystems will make use of. The actual
> application usage and setup of atomic writes at the filesystem/page
> cache layer is a separate problem.  i.e. The block layer interfaces
> need only support direct IO and expose limits for issuing atomic
> direct IO, and nothing more. All the more complex stuff to make it
> "easy to use" is filesystem level functionality and completely
> outside the scope of this patchset....

A CoW filesystem can implement atomic writes without any block device
support. It seems to me that might have been the easier place to start -
start by getting the APIs right, then do all the plumbing for efficient
untorn writes on non CoW filesystems...

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-05 10:20     ` Kent Overstreet
@ 2024-04-05 10:55       ` John Garry
  0 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-04-05 10:55 UTC (permalink / raw)
  To: Kent Overstreet, Dave Chinner
  Cc: Matthew Wilcox, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list

On 05/04/2024 11:20, Kent Overstreet wrote:
>>> The thing is that there's no requirement for an interface as complex as
>>> the one you're proposing here.  I've talked to a few database people
>>> and all they want is to increase the untorn write boundary from "one
>>> disc block" to one database block, typically 8kB or 16kB.
>>>
>>> So they would be quite happy with a much simpler interface where they
>>> set the inode block size at inode creation time, and then all writes to
>>> that inode were guaranteed to be untorn.  This would also be simpler to
>>> implement for buffered writes.
>> You're conflating filesystem functionality that applications will use
>> with hardware and block-layer enablement that filesystems and
>> filesystem utilities need to configure the filesystem in ways that
>> allow users to make use of atomic write capability of the hardware.
>>
>> The block layer functionality needs to export everything that the
>> hardware can do and filesystems will make use of. The actual
>> application usage and setup of atomic writes at the filesystem/page
>> cache layer is a separate problem.  i.e. The block layer interfaces
>> need only support direct IO and expose limits for issuing atomic
>> direct IO, and nothing more. All the more complex stuff to make it
>> "easy to use" is filesystem level functionality and completely
>> outside the scope of this patchset....
> A CoW filesystem can implement atomic writes without any block device
> support. It seems to me that might have been the easier place to start -
> start by getting the APIs right, then do all the plumbing for efficient
> untorn writes on non CoW filesystems...

03/10 and 04/10 in this series define the user API, i.e. RWF_ATOMIC and 
statx updates.

Any filesystem-specific changes - like in 
https://lore.kernel.org/linux-xfs/[email protected]/ 
- are just for enabling this API for that filesystem.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-05 10:06       ` John Garry
@ 2024-04-08 17:50         ` Luis Chamberlain
  2024-04-10  4:05           ` Matthew Wilcox
  2024-04-10  8:34           ` John Garry
  0 siblings, 2 replies; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-08 17:50 UTC (permalink / raw)
  To: John Garry
  Cc: Matthew Wilcox, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > The thing is that there's no requirement for an interface as complex as
> > > > the one you're proposing here.  I've talked to a few database people
> > > > and all they want is to increase the untorn write boundary from "one
> > > > disc block" to one database block, typically 8kB or 16kB.
> > > > 
> > > > So they would be quite happy with a much simpler interface where they
> > > > set the inode block size at inode creation time,
> > > We want to support untorn writes for bdev file operations - how can we set
> > > the inode block size there? Currently it is based on logical block size.
> > ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
> > think we can remove that limitation with the bs>PS patches.

I can say a bit more on this, as I explored it. Essentially Matthew,
yes, I got that to work, but it requires a set of different patches. We
have what we tried, and then based on feedback from Chinner we have a
direction on what to try next. The last effort on that front was having
the iomap aops for bdev be used and lifting the PAGE_SIZE limit up to
the page cache limits. The crux there was that we end up requiring
BUFFER_HEAD to be disabled, and that is pretty limiting, so my old
implementation had dynamic aops to let us use the buffer-head aops only
for filesystems which require them and the iomap aops otherwise. But as
Chinner noted, we learned through the DAX experience that that's not a
route we want to try again, so the real solution is to extend the iomap
bdev aops code with buffer-head compatibility.

> We want a consistent interface for bdev and regular files, so that would
> need to work for FSes also. FSes(XFS) work based on a homogeneous inode
> blocksize, which is the SB blocksize.

There are two aspects to this and it is important to differentiate them.

1) LBA formats used
2) When a large atomic is supported and you want to use smaller LBA formats

When the LBA format, and so the logical block size, is say 16k, the LBS
patches combined with the above-mentioned patches enable IOs to the
block device to be atomic at 16k.

But to remain flexible we want to support a world where 512 byte and 4k
LBA formats are still used, and you *optionally* want to leverage say
16k atomics. Today's block device topology enables this only with a knob
to userspace to allow userspace to override the sector size for the
filesystem. In practice today, if you want to use 4k IOs you just format
the drive to use a 4k LBA format. However, an alternative, at least for
NVMe today, is to support say a 16k atomic with a 4k or 512b LBA format.
This essentially *lifts* the physical block size to 16k while keeping
the logical block size at the LBA format, so 4k or 512 bytes. What you
*could* do with this, from the userspace side of things, is opt in at
mkfs time to a larger sector size, up to the physical block size. When
you do this the block device still has a logical block size of the LBA
format, but all IOs the filesystem issues use the larger sector size you
opted in for.
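
As a hypothetical example of that opt-in, on a 4k LBA format drive
advertising a 16k atomic (the mkfs.xfs flags are real, the device name
is made up):

	mkfs.xfs -b size=16k -s size=16k /dev/nvme0n1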

I suspect this is a use case where perhaps the max folio order could be
set for the bdev in the future, the logical block size the min order,
and max order the large atomic.

> Furthermore, we would seem to be mixing different concepts here. Currently
> in Linux we say that a logical block size write is atomic. In the block
> layer, we split BIOs on LBS boundaries. iomap creates BIOs based on LBS
> boundaries. But writing a FS block is not always guaranteed to be atomic, as
> far as I'm concerned.

True. To be clear, the above paragraph refers to LBS as logical block size.

However, when a filesystem sets the min order, it should be respected.
I agree that when you don't set the sector size to 16k you are not forcing
the filesystem to use 16k IOs; the metadata can still be 4k. But when you
use a 16k sector size, the 16k IOs should be respected by the
filesystem.

Do we break BIOs to below a min order if the sector size is also set to
16k?  I haven't seen that, and it's unclear when or how that could happen.

At least for NVMe we don't need to yell to a device to inform it that we
want a 16k IO issued to it to be atomic; if we read that it has the
capability for it, it just does it. The IO verification can be done with
blkalgn [0].

Does SCSI *require* 16k atomic prep work, or can it be done implicitly?
Does it need WRITE_ATOMIC_16?

[0] https://github.com/dagmcr/bcc/tree/blkalgn

> So just increasing the inode block size / FS block size does not
> really change anything, in itself.

If we're breaking up IOs when a min order is set for an inode, that
would need to be looked into, but we're not seeing that.

> > Do untorn writes actually exist in SCSI?  I was under the impression
> > nobody had actually implemented them in SCSI hardware.
> 
> I know that some SCSI targets actually atomically write data in chunks >
> LBS. Obviously atomic vs non-atomic performance is a moot point there, as
> data is implicitly always atomically written.
> 
> We actually have an mysql/innodb port of this API working on such a SCSI
> target.

I suspect IO verification with the above tool would show the same if you
use a filesystem with a larger sector size set too, and you would not
have to make any changes to userspace other than creating the filesystem
with, say, mkfs.xfs params of -b size=16k -s size=16k

> However I am not sure about atomic write support for other SCSI targets.

Good to know!

> > > We saw untorn writes as not being a property of the file or even the inode
> > > itself, but rather an attribute of the specific IO being issued from the
> > > userspace application.
> > The problem is that keeping track of that is expensive for buffered
> > writes.  It's a model that only works for direct IO.  Arguably we
> > could make it work for O_SYNC buffered IO, but that'll require some
> > surgery.
> 
> To me, O_ATOMIC would be required for buffered atomic writes IO, as we want
> a fixed-sized IO, so that would mean no mixing of atomic and non-atomic IO.

Would using the same min and max order for the inode work instead?

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-08 17:50         ` Luis Chamberlain
@ 2024-04-10  4:05           ` Matthew Wilcox
  2024-04-10  6:20             ` Hannes Reinecke
  2024-04-14 20:50             ` Luis Chamberlain
  2024-04-10  8:34           ` John Garry
  1 sibling, 2 replies; 40+ messages in thread
From: Matthew Wilcox @ 2024-04-10  4:05 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
> On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> > On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > > The thing is that there's no requirement for an interface as complex as
> > > > > the one you're proposing here.  I've talked to a few database people
> > > > > and all they want is to increase the untorn write boundary from "one
> > > > > disc block" to one database block, typically 8kB or 16kB.
> > > > > 
> > > > > So they would be quite happy with a much simpler interface where they
> > > > > set the inode block size at inode creation time,
> > > > We want to support untorn writes for bdev file operations - how can we set
> > > > the inode block size there? Currently it is based on logical block size.
> > > ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
> > > think we can remove that limitation with the bs>PS patches.
> 
> I can say a bit more on this, as I explored that. Essentially Matthew,
> yes, I got that to work but it requires a set of different patches. We have
> what we tried and then based on feedback from Chinner we have a
> direction on what to try next. The last effort on that front was having the
> iomap aops for bdev be used and lifting the PAGE_SIZE limit up to the
> page cache limits. The crux on that front was that we end requiring
> disabling BUFFER_HEAD and that is pretty limitting, so my old
> implementation had dynamic aops so to let us use the buffer-head aops
> only when using filesystems which require it and use iomap aops
> otherwise. But as Chinner noted we learned through the DAX experience
> that's not a route we want to again try, so the real solution is to
> extend iomap bdev aops code with buffer-head compatibility.

Have you tried just using the buffer_head code?  I think you heard bad
advice at last LSFMM.  Since then I've landed a bunch of patches which
remove PAGE_SIZE assumptions throughout the buffer_head code, and while
I haven't tried it, it might work.  And it might be easier to make work
than adding more BH hacks to the iomap code.

A quick audit for problems ...

__getblk_slow:
       if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
                        (size < 512 || size > PAGE_SIZE))) {

cont_expand_zero (not used by bdev code)
cont_write_begin (ditto)

That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.

You can't do a lot of buffer_heads per folio, because you'll overrun
        struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
in block_read_full_folio(), but you can certainly do _one_ buffer_head
per folio, and that's all you need for bs>PS.

> I suspect this is a use case where perhaps the max folio order could be
> set for the bdev in the future, the logical block size the min order,
> and max order the large atomic.

No, that's not what we want to do at all!  Minimum writeback size needs
to be the atomic size, otherwise we have to keep track of which writes
are atomic and which ones aren't.  So, just set the logical block size
to the atomic size, and we're done.


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-10  4:05           ` Matthew Wilcox
@ 2024-04-10  6:20             ` Hannes Reinecke
  2024-04-11  0:38               ` Luis Chamberlain
  2024-04-14 20:50             ` Luis Chamberlain
  1 sibling, 1 reply; 40+ messages in thread
From: Hannes Reinecke @ 2024-04-10  6:20 UTC (permalink / raw)
  To: Matthew Wilcox, Luis Chamberlain
  Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 4/10/24 06:05, Matthew Wilcox wrote:
> On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
>> On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
>>> On 04/04/2024 17:48, Matthew Wilcox wrote:
>>>>>> The thing is that there's no requirement for an interface as complex as
>>>>>> the one you're proposing here.  I've talked to a few database people
>>>>>> and all they want is to increase the untorn write boundary from "one
>>>>>> disc block" to one database block, typically 8kB or 16kB.
>>>>>>
>>>>>> So they would be quite happy with a much simpler interface where they
>>>>>> set the inode block size at inode creation time,
>>>>> We want to support untorn writes for bdev file operations - how can we set
>>>>> the inode block size there? Currently it is based on logical block size.
>>>> ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
>>>> think we can remove that limitation with the bs>PS patches.
>>
>> I can say a bit more on this, as I explored that. Essentially Matthew,
>> yes, I got that to work but it requires a set of different patches. We have
>> what we tried and then based on feedback from Chinner we have a
>> direction on what to try next. The last effort on that front was having the
>> iomap aops for bdev be used and lifting the PAGE_SIZE limit up to the
>> page cache limits. The crux on that front was that we end requiring
>> disabling BUFFER_HEAD and that is pretty limitting, so my old
>> implementation had dynamic aops so to let us use the buffer-head aops
>> only when using filesystems which require it and use iomap aops
>> otherwise. But as Chinner noted we learned through the DAX experience
>> that's not a route we want to again try, so the real solution is to
>> extend iomap bdev aops code with buffer-head compatibility.
> 
> Have you tried just using the buffer_head code?  I think you heard bad
> advice at last LSFMM.  Since then I've landed a bunch of patches which
> remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> I haven't tried it, it might work.  And it might be easier to make work
> than adding more BH hacks to the iomap code.
> 
> A quick audit for problems ...
> 
> __getblk_slow:
>         if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
>                          (size < 512 || size > PAGE_SIZE))) {
> 
> cont_expand_zero (not used by bdev code)
> cont_write_begin (ditto)
> 
> That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.
> 
> You can't do a lot of buffer_heads per folio, because you'll overrun
>          struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
> in block_read_full_folio(), but you can certainly do _one_ buffer_head
> per folio, and that's all you need for bs>PS.
> 
Indeed; I got a patch here to just restart the submission loop if one
reaches the end of the array. But maybe submitting one bh at a time and
using plugging should achieve that same thing. Let's see.

>> I suspect this is a use case where perhaps the max folio order could be
>> set for the bdev in the future, the logical block size the min order,
>> and max order the large atomic.
> 
> No, that's not what we want to do at all!  Minimum writeback size needs
> to be the atomic size, otherwise we have to keep track of which writes
> are atomic and which ones aren't.  So, just set the logical block size
> to the atomic size, and we're done.
> 
+1. My thoughts all along.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
[email protected]                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-08 17:50         ` Luis Chamberlain
  2024-04-10  4:05           ` Matthew Wilcox
@ 2024-04-10  8:34           ` John Garry
  2024-04-11 19:07             ` Luis Chamberlain
  1 sibling, 1 reply; 40+ messages in thread
From: John Garry @ 2024-04-10  8:34 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Matthew Wilcox, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 08/04/2024 18:50, Luis Chamberlain wrote:
> I agree that when you don't set the sector size to 16k you are not forcing the
> filesystem to use 16k IOs, the metadata can still be 4k. But when you
> use a 16k sector size, the 16k IOs should be respected by the
> filesystem.
> 
> Do we break BIOs to below a min order if the sector size is also set to
> 16k?  I haven't seen that and its unclear when or how that could happen.

AFAICS, the only guarantee is to not split below LBS.

> 
> At least for NVMe we don't need to yell to a device to inform it we want
> a 16k IO issued to it to be atomic, if we read that it has the
> capability for it, it just does it. The IO verificaiton can be done with
> blkalgn [0].
> 
> Does SCSI*require*  an 16k atomic prep work, or can it be done implicitly?
> Does it need WRITE_ATOMIC_16?

The physical block size is what we can implicitly write atomically. So 
if you have a 4K PBS and a 512B LBS, then WRITE_ATOMIC_16 would be 
required to write 16KB atomically.

> 
> [0] https://github.com/dagmcr/bcc/tree/blkalgn
> 
>> So just increasing the inode block size / FS block size does not
>> really change anything, in itself.
> If we're breaking up IOs when a min order is set for an inode, that
> would need to be looked into, but we're not seeing that.

In practice you won't see it, but I am talking about guarantees not to 
see it.

> 
>>> Do untorn writes actually exist in SCSI?  I was under the impression
>>> nobody had actually implemented them in SCSI hardware.
>> I know that some SCSI targets actually atomically write data in chunks >
>> LBS. Obviously atomic vs non-atomic performance is a moot point there, as
>> data is implicitly always atomically written.
>>
>> We actually have an mysql/innodb port of this API working on such a SCSI
>> target.
> I suspect IO verification with the above tool should prove to show the
> same if you use a filesystem with a larger sector size set too, and you
> just would not have to do any changes to userspace other than the
> filesystem creation with say mkfs.xfs params of -b size=16k -s size=16k

Ok, I see

> 
>> However I am not sure about atomic write support for other SCSI targets.
> Good to know!
> 
>>>> We saw untorn writes as not being a property of the file or even the inode
>>>> itself, but rather an attribute of the specific IO being issued from the
>>>> userspace application.
>>> The problem is that keeping track of that is expensive for buffered
>>> writes.  It's a model that only works for direct IO.  Arguably we
>>> could make it work for O_SYNC buffered IO, but that'll require some
>>> surgery.
>> To me, O_ATOMIC would be required for buffered atomic writes IO, as we want
>> a fixed-sized IO, so that would mean no mixing of atomic and non-atomic IO.
> Would using the same min and max order for the inode work instead?

Maybe, I would need to check further.

Thanks,
John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-03-26 13:38 ` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
@ 2024-04-10 22:53   ` Luis Chamberlain
  2024-04-11  8:06     ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-10 22:53 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy

On Tue, Mar 26, 2024 at 01:38:05PM +0000, John Garry wrote:
> blkdev_dio_unaligned() is called from __blkdev_direct_IO(),
> __blkdev_direct_IO_simple(), and __blkdev_direct_IO_async(), and all these
> are only called from blkdev_direct_IO().
> 
> Move the blkdev_dio_unaligned() call to the common callsite,
> blkdev_direct_IO().
> 
> Pass those functions the bdev pointer from blkdev_direct_IO() as it is non-
> trivial to calculate.
> 
> Reviewed-by: Keith Busch <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: John Garry <[email protected]>

Reviewed-by: Luis Chamberlain <[email protected]>

I think this patch should just be sent separately already and not part
of this series.

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer
  2024-03-26 13:38 ` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-04-10 22:58   ` Luis Chamberlain
  0 siblings, 0 replies; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-10 22:58 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy

On Tue, Mar 26, 2024 at 01:38:04PM +0000, John Garry wrote:
> Currently blk_queue_get_max_sectors() is passed an enum req_op. In future
> the value returned from blk_queue_get_max_sectors() may depend on certain
> request flags, so pass a request pointer.
> 
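i.e. the interface change is roughly this (a sketch of the before/after
prototypes; the exact in-tree declarations may differ slightly):

	/* Before: the returned limit could only depend on the op type. */
	static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
							     enum req_op op);

	/* After: pass the request, so request flags (e.g. an atomic write
	 * flag added later in this series) can influence the limit. */
	static inline unsigned int blk_queue_get_max_sectors(struct request *rq);
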
> Reviewed-by: Christoph Hellwig <[email protected]>
> Reviewed-by: Keith Busch <[email protected]>
> Signed-off-by: John Garry <[email protected]>

Reviewed-by: Luis Chamberlain <[email protected]>

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 05/10] block: Add core atomic write support
  2024-03-26 17:11   ` Randy Dunlap
@ 2024-04-10 23:34     ` Luis Chamberlain
  2024-04-11  8:15       ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-10 23:34 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy,
	Himanshu Madhani

On Tue, Mar 26, 2024 at 10:11:50AM -0700, Randy Dunlap wrote:
> Hi,
> 
> On 3/26/24 06:38, John Garry wrote:
> > diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
> > index 1fe9a553c37b..4c775f4bdefe 100644
> > --- a/Documentation/ABI/stable/sysfs-block
> > +++ b/Documentation/ABI/stable/sysfs-block
> > +What:		/sys/block/<disk>/atomic_write_boundary_bytes
> > +Date:		February 2024
> > +Contact:	Himanshu Madhani <[email protected]>
> > +Description:
> > +		[RO] A device may need to internally split I/Os which
> > +		straddle a given logical block address boundary. In that
> > +		case a single atomic write operation will be processed as
> > +		one of more sub-operations which each complete atomically.
> 
> 		    or

If *or* was meant, wouldn't it be better just to say one or more
operations may be processed as one atomically in this situation?

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 10/10] nvme: Atomic write support
  2024-03-26 13:38 ` [PATCH v6 10/10] nvme: " John Garry
@ 2024-04-11  0:29   ` Luis Chamberlain
  2024-04-11  8:59     ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-11  0:29 UTC (permalink / raw)
  To: John Garry, Dan Helmick
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy, Alan Adamson

On Tue, Mar 26, 2024 at 01:38:13PM +0000, John Garry wrote:
> From: Alan Adamson <[email protected]>
> 
> Add support to set block layer request_queue atomic write limits. The
> limits will be derived from either the namespace or controller atomic
> parameters.
> 
> NVMe atomic-related parameters are grouped into "normal" and "power-fail"
> (or PF) class of parameter. For atomic write support, only PF parameters
> are of interest. The "normal" parameters are concerned with racing reads
> and writes (which also applies to PF). See NVM Command Set Specification
> Revision 1.0d section 2.1.4 for reference.
> 
> Whether to use per namespace or controller atomic parameters is decided by
> NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data
> Structure, NVM Command Set.
> 
> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> are provided for a write which straddles this per-lba space boundary. The
> block layer merging policy is such that no merges may occur in which the
> resultant request would straddle such a boundary.
> 
> Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from
> atomic boundary rule.

A larger IU drives a larger alignment *preference*, and it can be multiples
of the LBA format; it's called Namespace Preferred Write Granularity (NPWG)
and the NVMe driver already parses it. So say you have a 4k LBA format
but a 16k NPWG. I suspect this means we'd want atomic writes to align to
16k, but I can let Dan confirm.

> Note on NABSPF:
> There seems to be some vagueness in the spec as to whether NABSPF applies
> for NSFEAT bit 1 being unset. Figure 97 does not explicitly mention NABSPF
> and how it is affected by bit 1. However Figure 4 does tell to check Figure
> 97 for info about per-namespace parameters, which NABSPF is, so it is
> implied. However currently nvme_update_disk_info() does check namespace
> parameter NABO regardless of this bit.

Yeah, that is quirky.

Also, today we set the physical block size to min(npwg, atomic), and that
means for today's average 4k IU drive, if it gets a 16k atomic limit, the
physical block size would still be 4k. As the physical block size in
practice can also lift the sector size filesystems use, it would seem
odd that only a larger npwg could lift it. So we may want to revisit this
eventually, especially if we have an API to do atomics properly across the
block layer.
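
The derivation being described is roughly this (a simplified sketch;
nawupf_or_awupf and npwg stand in for the parsed identify fields, which
are 0's based, i.e. a value of N means N + 1 logical blocks):

	u32 bs = queue_logical_block_size(q);		/* LBA size */
	u32 atomic_bs = (1 + nawupf_or_awupf) * bs;	/* PF atomic unit */
	u32 phys_bs = (1 + npwg) * bs;			/* preferred granularity */

	/* e.g. a 4k NPWG with a 16k atomic unit: physical size stays 4k */
	blk_queue_physical_block_size(q, min(phys_bs, atomic_bs));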

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-10  6:20             ` Hannes Reinecke
@ 2024-04-11  0:38               ` Luis Chamberlain
  0 siblings, 0 replies; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-11  0:38 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Matthew Wilcox, John Garry, Pankaj Raghav, Daniel Gomez,
	Javier González, axboe, kbusch, hch, sagi, jejb,
	martin.petersen, djwong, viro, brauner, dchinner, jack,
	linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
	jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
	nilay, ritesh.list

On Wed, Apr 10, 2024 at 08:20:37AM +0200, Hannes Reinecke wrote:
> On 4/10/24 06:05, Matthew Wilcox wrote:
> > On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
> > > On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> > > > On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > > > > The thing is that there's no requirement for an interface as complex as
> > > > > > > the one you're proposing here.  I've talked to a few database people
> > > > > > > and all they want is to increase the untorn write boundary from "one
> > > > > > > disc block" to one database block, typically 8kB or 16kB.
> > > > > > > 
> > > > > > > So they would be quite happy with a much simpler interface where they
> > > > > > > set the inode block size at inode creation time,
> > > > > > We want to support untorn writes for bdev file operations - how can we set
> > > > > > the inode block size there? Currently it is based on logical block size.
> > > > > ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
> > > > > think we can remove that limitation with the bs>PS patches.
> > > 
> > > I can say a bit more on this, as I explored that. Essentially Matthew,
> > > yes, I got that to work but it requires a set of different patches. We have
> > > what we tried and then based on feedback from Chinner we have a
> > > direction on what to try next. The last effort on that front was having the
> > > iomap aops for bdev be used and lifting the PAGE_SIZE limit up to the
> > page cache limits. The crux on that front was that we end up requiring
> > disabling BUFFER_HEAD and that is pretty limiting, so my old
> > > implementation had dynamic aops so to let us use the buffer-head aops
> > > only when using filesystems which require it and use iomap aops
> > > otherwise. But as Chinner noted we learned through the DAX experience
> > > that's not a route we want to again try, so the real solution is to
> > > extend iomap bdev aops code with buffer-head compatibility.
> > 
> > Have you tried just using the buffer_head code?  I think you heard bad
> > advice at last LSFMM.  Since then I've landed a bunch of patches which
> > remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> > I haven't tried it, it might work.  And it might be easier to make work
> > than adding more BH hacks to the iomap code.
> > 
> > A quick audit for problems ...
> > 
> > __getblk_slow:
> >         if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
> >                          (size < 512 || size > PAGE_SIZE))) {
> > 
> > cont_expand_zero (not used by bdev code)
> > cont_write_begin (ditto)
> > 
> > That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.
> > 
> > You can't do a lot of buffer_heads per folio, because you'll overrun
> >          struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
> > in block_read_full_folio(), but you can certainly do _one_ buffer_head
> > per folio, and that's all you need for bs>PS.
> > 
> Indeed; I got a patch here to just restart the submission loop if one
> reaches the end of the array. But maybe submitting one bh at a time and
> using plugging should achieve that same thing. Let's see.

That's great to hear, what about a target filesystem? Without a
buffer-head filesystem to test I'm not sure we'd get enough test
coverage.

The block device cache isn't exactly a great filesystem target to test
correctness.

> > > I suspect this is a use case where perhaps the max folio order could be
> > > set for the bdev in the future, the logical block size the min order,
> > > and max order the large atomic.
> > 
> > No, that's not what we want to do at all!  Minimum writeback size needs
> > to be the atomic size, otherwise we have to keep track of which writes
> > are atomic and which ones aren't.  So, just set the logical block size
> > to the atomic size, and we're done.
> > 
> +1. My thoughts all along.

Oh, hrm yes, but let's test it out then...

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
  2024-04-10 22:53   ` Luis Chamberlain
@ 2024-04-11  8:06     ` John Garry
  0 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-04-11  8:06 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy

On 10/04/2024 23:53, Luis Chamberlain wrote:
> On Tue, Mar 26, 2024 at 01:38:05PM +0000, John Garry wrote:
>> blkdev_dio_unaligned() is called from __blkdev_direct_IO(),
>> __blkdev_direct_IO_simple(), and __blkdev_direct_IO_async(), and all these
>> are only called from blkdev_direct_IO().
>>
>> Move the blkdev_dio_unaligned() call to the common callsite,
>> blkdev_direct_IO().
>>
>> Pass those functions the bdev pointer from blkdev_direct_IO() as it is non-
>> trivial to calculate.
>>
>> Reviewed-by: Keith Busch <[email protected]>
>> Reviewed-by: Christoph Hellwig <[email protected]>
>> Signed-off-by: John Garry <[email protected]>
> Reviewed-by: Luis Chamberlain <[email protected]>
> 

cheers

> I think this patch should just be sent separately already and not part
> of this series.

That just creates a merge dependency, since I have later changes which 
depend on this. I suppose that since we're nearly at rc4, I could do that.

John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 05/10] block: Add core atomic write support
  2024-04-10 23:34     ` Luis Chamberlain
@ 2024-04-11  8:15       ` John Garry
  0 siblings, 0 replies; 40+ messages in thread
From: John Garry @ 2024-04-11  8:15 UTC (permalink / raw)
  To: Luis Chamberlain, Randy Dunlap
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy,
	Himanshu Madhani

On 11/04/2024 00:34, Luis Chamberlain wrote:
>> On 3/26/24 06:38, John Garry wrote:
>>> diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
>>> index 1fe9a553c37b..4c775f4bdefe 100644
>>> --- a/Documentation/ABI/stable/sysfs-block
>>> +++ b/Documentation/ABI/stable/sysfs-block
>>> +What:		/sys/block/<disk>/atomic_write_boundary_bytes
>>> +Date:		February 2024
>>> +Contact:	Himanshu Madhani <[email protected]>
>>> +Description:
>>> +		[RO] A device may need to internally split I/Os which
>>> +		straddle a given logical block address boundary. In that
>>> +		case a single atomic write operation will be processed as
>>> +		one of more sub-operations which each complete atomically.
>> 		    or
> If *or* was meant, wouldn't it be better just to say one or more
> operations may be processed as one atomically in this situation?

"Or" was meant (thanks Randy).

I think that we just need to say that the write operation will not 
complete atomically if it straddles the boundary. Whether the separate 
parts of the write operation which straddle the boundary complete 
atomically is undefined and irrelevant.

Thanks,
John

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 10/10] nvme: Atomic write support
  2024-04-11  0:29   ` Luis Chamberlain
@ 2024-04-11  8:59     ` John Garry
  2024-04-11 16:22       ` Luis Chamberlain
  0 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-04-11  8:59 UTC (permalink / raw)
  To: Luis Chamberlain, Dan Helmick
  Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list, willy, Alan Adamson

On 11/04/2024 01:29, Luis Chamberlain wrote:
> On Tue, Mar 26, 2024 at 01:38:13PM +0000, John Garry wrote:
>> From: Alan Adamson <[email protected]>
>>
>> Add support to set block layer request_queue atomic write limits. The
>> limits will be derived from either the namespace or controller atomic
>> parameters.
>>
>> NVMe atomic-related parameters are grouped into "normal" and "power-fail"
>> (or PF) class of parameter. For atomic write support, only PF parameters
>> are of interest. The "normal" parameters are concerned with racing reads
>> and writes (which also applies to PF). See NVM Command Set Specification
>> Revision 1.0d section 2.1.4 for reference.
>>
>> Whether to use per namespace or controller atomic parameters is decided by
>> NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data
>> Structure, NVM Command Set.
>>
>> NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
>> are provided for a write which straddles this per-lba space boundary. The
>> block layer merging policy is such that no merges may occur in which the
>> resultant request would straddle such a boundary.
>>
>> Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from
>> atomic boundary rule.
> 
> Larger IU drives a larger alignment *preference*, and it can be multiples
> of the LBA format, it's called Namespace Preferred Write Granularity (NPWG)
> and the NVMe driver already parses it. So say you have a 4k LBA format
> but a 16k NPWG. I suspect this means we'd want atomic writes to align to 16k
> but I can let Dan confirm.

If we need to be aligned to NPWG, then the min atomic write unit would 
also need to be NPWG. Any NPWG relation to atomic writes is not defined 
in the spec, AFAICS.

We simply use the LBA data size as the min atomic unit in this patch.

> 
>> Note on NABSPF:
>> There seems to be some vagueness in the spec as to whether NABSPF applies
>> for NSFEAT bit 1 being unset. Figure 97 does not explicitly mention NABSPF
>> and how it is affected by bit 1. However Figure 4 does tell to check Figure
>> 97 for info about per-namespace parameters, which NABSPF is, so it is
>> implied. However currently nvme_update_disk_info() does check namespace
>> parameter NABO regardless of this bit.
> 
> Yeah that its quirky.
> 
> Also today we set the physical block size to min(npwg, atomic) and that
> means for a today's average 4k IU drive if they get 16k atomic the
> physical block size would still be 4k. As the physical block size in
> practice can also lift the sector size filesystems used it would seem
> odd only a larger npwg could lift it.
It seems to me that if you want to provide atomic guarantees for this 
large "physical block size", then it needs to be based on (N)AWUPF and NPWG.

> So we may want to revisit this
> eventually, specially if we have an API to do atomics properly across the
> block layer.
>
Thanks,
John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 10/10] nvme: Atomic write support
  2024-04-11  8:59     ` John Garry
@ 2024-04-11 16:22       ` Luis Chamberlain
  2024-04-11 23:32         ` Dan Helmick
  0 siblings, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-11 16:22 UTC (permalink / raw)
  To: John Garry
  Cc: Dan Helmick, axboe, kbusch, hch, sagi, jejb, martin.petersen,
	djwong, viro, brauner, dchinner, jack, linux-block, linux-kernel,
	linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
	linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy,
	Alan Adamson

On Thu, Apr 11, 2024 at 09:59:57AM +0100, John Garry wrote:
> On 11/04/2024 01:29, Luis Chamberlain wrote:
> > On Tue, Mar 26, 2024 at 01:38:13PM +0000, John Garry wrote:
> > > From: Alan Adamson <[email protected]>
> > > 
> > > Add support to set block layer request_queue atomic write limits. The
> > > limits will be derived from either the namespace or controller atomic
> > > parameters.
> > > 
> > > NVMe atomic-related parameters are grouped into "normal" and "power-fail"
> > > (or PF) class of parameter. For atomic write support, only PF parameters
> > > are of interest. The "normal" parameters are concerned with racing reads
> > > and writes (which also applies to PF). See NVM Command Set Specification
> > > Revision 1.0d section 2.1.4 for reference.
> > > 
> > > Whether to use per namespace or controller atomic parameters is decided by
> > > NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data
> > > Structure, NVM Command Set.
> > > 
> > > NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
> > > are provided for a write which straddles this per-lba space boundary. The
> > > block layer merging policy is such that no merges may occur in which the
> > > resultant request would straddle such a boundary.
> > > 
> > > Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from
> > > atomic boundary rule.
> > 
> > Larger IU drives a larger alignment *preference*, and it can be multiples
> > of the LBA format, it's called Namespace Preferred Write Granularity (NPWG)
> > and the NVMe driver already parses it. So say you have a 4k LBA format
> > but a 16k NPWG. I suspect this means we'd want atomic writes to align to 16k
> > but I can let Dan confirm.
> 
> If we need to be aligned to NPWG, then the min atomic write unit would also
> need to be NPWG. Any NPWG relation to atomic writes is not defined in the
> spec, AFAICS.

NPWG is just a preference, not a requirement, so it is different than
logical block size. As far as I can tell we have no block topology
information to represent it. LBS will help users opt in to align to
the NPWG, and a respective NAWUPF will ensure you can also atomically
write the respective sector size.

For atomics, NABSPF is what we want to use.
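
In other words, the mapping would be along these lines (assumed, for
illustration only):

	/*
	 * Assumed mapping of NVMe identify fields to the block layer
	 * atomic write limits:
	 *
	 *   atomic_write_unit_min  <- logical block size (LBA format)
	 *   atomic_write_unit_max  <- ((NAWUPF or AWUPF) + 1) * LBA size
	 *   atomic_write_boundary  <- (NABSPF + 1) * LBA size, when reported
	 *
	 * NPWG only steers alignment preference / performance, not atomicity.
	 */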

The above statement in the commit log just seems a bit misleading then.

> We simply use the LBA data size as the min atomic unit in this patch.

I thought NABSPF is used.

> > > Note on NABSPF:
> > > There seems to be some vagueness in the spec as to whether NABSPF applies
> > > for NSFEAT bit 1 being unset. Figure 97 does not explicitly mention NABSPF
> > > and how it is affected by bit 1. However Figure 4 does tell to check Figure
> > > 97 for info about per-namespace parameters, which NABSPF is, so it is
> > > implied. However currently nvme_update_disk_info() does check namespace
> > > parameter NABO regardless of this bit.
> > 
> > Yeah that its quirky.
> > 
> > Also today we set the physical block size to min(npwg, atomic) and that
> > means for today's average 4k IU drive if they get 16k atomic the
> > physical block size would still be 4k. As the physical block size in
> > practice can also lift the sector size filesystems used it would seem
> > odd only a larger npwg could lift it.
> It seems to me that if you want to provide atomic guarantees for this large
> "physical block size", then it needs to be based on (N)AWUPF and NPWG.

For atomicity, I read it as needing to use NABSPF. Aligning to NPWG will just
help performance.

The NPWG comes from an internal mapping table constructed and kept in
DRAM on a drive in units of an IU size [0], and so not aligning to the
IU just means having to work with multiple entries in the table rather
than just one, and also incurs a read-modify-write. Contrary to the
logical block size, a write below NPWG but respecting the logical block
size is allowed, it's just not optimal.

[0] https://kernelnewbies.org/KernelProjects/large-block-size#Indirection_Unit_size_increases

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-10  8:34           ` John Garry
@ 2024-04-11 19:07             ` Luis Chamberlain
  2024-04-12  8:15               ` John Garry
  0 siblings, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-11 19:07 UTC (permalink / raw)
  To: John Garry
  Cc: Matthew Wilcox, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Wed, Apr 10, 2024 at 09:34:36AM +0100, John Garry wrote:
> On 08/04/2024 18:50, Luis Chamberlain wrote:
> > I agree that when you don't set the sector size to 16k you are not forcing the
> > filesystem to use 16k IOs, the metadata can still be 4k. But when you
> > use a 16k sector size, the 16k IOs should be respected by the
> > filesystem.
> > 
> > Do we break BIOs to below a min order if the sector size is also set to
> > 16k?  I haven't seen that and its unclear when or how that could happen.
> 
> AFAICS, the only guarantee is to not split below LBS.

It would be odd to split a BIO given an inode requirement size spelled
out, but indeed I don't recall verifying this guarantee.

> > At least for NVMe we don't need to yell to a device to inform it we want
> > a 16k IO issued to it to be atomic, if we read that it has the
> > capability for it, it just does it. The IO verification can be done with
> > blkalgn [0].
> > 
> > Does SCSI *require* 16k atomic prep work, or can it be done implicitly?
> > Does it need WRITE_ATOMIC_16?
> 
> physical block size is what we can implicitly write atomically.

Yes, and also on flash to avoid read modify writes.

> So if you
> have a 4K PBS and 512B LBS, then WRITE_ATOMIC_16 would be required to write
> 16KB atomically.

Ugh. Why does SCSI require a special command for this?

Now we know what would be needed to bump the physical block size. It is
certainly a different feature, but I think it would be good to evaluate
that world too. For NVMe we don't have such special write requirements.

I put together this kludge with the last patch series of LBS + the
bdev cache aops stuff (which as I said before needs an alternative
solution) and just the scsi atomics topology + physical block size
change to easily experiment to see what would break:

https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=20240408-lbs-scsi-kludge

Using a larger sector size works but it does not use the special scsi
atomic write.

> > > To me, O_ATOMIC would be required for buffered atomic writes IO, as we want
> > > a fixed-sized IO, so that would mean no mixing of atomic and non-atomic IO.
> > Would using the same min and max order for the inode work instead?
> 
> Maybe, I would need to check further.

I'd be happy to help review too.

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: [PATCH v6 10/10] nvme: Atomic write support
  2024-04-11 16:22       ` Luis Chamberlain
@ 2024-04-11 23:32         ` Dan Helmick
  0 siblings, 0 replies; 40+ messages in thread
From: Dan Helmick @ 2024-04-11 23:32 UTC (permalink / raw)
  To: Luis Chamberlain, John Garry
  Cc: [email protected], [email protected], [email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected], Alan Adamson

On Thu, April 11, 2024 10:23 AM, Luis Chamberlain wrote:
> On Thu, Apr 11, 2024 at 09:59:57AM +0100, John Garry wrote:
> > On 11/04/2024 01:29, Luis Chamberlain wrote:
> > > On Tue, Mar 26, 2024 at 01:38:13PM +0000, John Garry wrote:
> > > > From: Alan Adamson <[email protected]>
> > > >
> > > > Add support to set block layer request_queue atomic write limits.
> > > > The limits will be derived from either the namespace or controller
> > > > atomic parameters.
> > > >
> > > > NVMe atomic-related parameters are grouped into "normal" and "power-
> fail"
> > > > (or PF) class of parameter. For atomic write support, only PF
> > > > parameters are of interest. The "normal" parameters are concerned
> > > > with racing reads and writes (which also applies to PF). See NVM
> > > > Command Set Specification Revision 1.0d section 2.1.4 for reference.
> > > >
> > > > Whether to use per namespace or controller atomic parameters is
> > > > decided by NSFEAT bit 1 - see Figure 97: Identify – Identify
> > > > Namespace Data Structure, NVM Command Set.
> > > >
> > > > NVMe namespaces may define an atomic boundary, whereby no atomic
> > > > guarantees are provided for a write which straddles this per-lba
> > > > space boundary. The block layer merging policy is such that no
> > > > merges may occur in which the resultant request would straddle such a
> boundary.
> > > >
> > > > Unlike SCSI, NVMe specifies no granularity or alignment rules,
> > > > apart from atomic boundary rule.
> > >
> > > Larger IU drives a larger alignment *preference*, and it can be
> > > multiples of the LBA format, it's called Namespace Preferred Write
> > > Granularity (NPWG) and the NVMe driver already parses it. So say you
> > > have a 4k LBA format but a 16k NPWG. I suspect this means we'd want
> > > atomic writes to align to 16k but I can let Dan confirm.

Apologies for my delayed reply.  I confirm.  

FYI: I authored the first draft of the OPTPERF section, and at one point I also tried to help clarify the atomics section.  After my first drafts of these sections, there was a fair amount of translation from English into standards-type language, so some drive specifics got removed to accommodate other media and so forth.  

NPWG is a preference.  It does not dictate the atomic behavior though.  So, don't go assuming you can rely on that behavior.  
 

> >
> > If we need to be aligned to NPWG, then the min atomic write unit would
> > also need to be NPWG. Any NPWG relation to atomic writes is not
> > defined in the spec, AFAICS.
> 
> NPWG is just a preference, not a requirement, so it is different than logical
> block size. As far as I can tell we have no block topology information to
> represent it. LBS will help users opt-in to align to the NPWG, and a respective
> NAWUPF will ensure you can also atomically write the respective sector size.
> 
> For atomics, NABSPF is what we want to use.
> 
> The above statement on the commit log just seems a bit misleading then.
> 
> > We simply use the LBA data size as the min atomic unit in this patch.
> 
> I thought NABSPF is used.

Yes, use NABSPF.  But most SSDs don't actually have boundaries.  This is more of a legacy SSD need.  

> 
> > > > Note on NABSPF:
> > > > There seems to be some vagueness in the spec as to whether NABSPF
> > > > applies for NSFEAT bit 1 being unset. Figure 97 does not
> > > > explicitly mention NABSPF and how it is affected by bit 1. However
> > > > Figure 4 does tell to check Figure
> > > > 97 for info about per-namespace parameters, which NABSPF is, so it
> > > > is implied. However currently nvme_update_disk_info() does check
> > > > namespace parameter NABO regardless of this bit.

NABO is a parameter that was carried forward, and it was already in the spec.  I didn't get a chance to impact that one with my changes.  

The story that was relayed to me says this parameter first existed in SATA and SCSI, and NVMe just pulled over an equivalent parameter even though the problem was resolved in the landscape NVMe SSDs ship into.  I was told that NABO was a parameter from Windows 95-ish days.  Something about the BIOS being written in 512B sectors with an ending that didn't align to 4KB.  But all the HDDs were trying to move over to 4KB for efficiencies of their ECCs.  So, there was this NABO parameter added to get the OS portion of the drive to be aligned nicely with the HDD's ECC.  

Anyways: add in the offset as queried from the drive even though it will most likely always be zero.

> > >
> > > Yeah that its quirky.
> > >
> > > Also today we set the physical block size to min(npwg, atomic) and
> > > that means for today's average 4k IU drive if they get 16k atomic
> > > the physical block size would still be 4k. As the physical block
> > > size in practice can also lift the sector size filesystems used it
> > > would seem odd only a larger npwg could lift it.
> > It seems to me that if you want to provide atomic guarantees for this
> > large "physical block size", then it needs to be based on (N)AWUPF and NPWG.
> 
> For atomicity, I read it as needing to use NABSPF. Aligning to NPWG will just
> help performance.
> 
> The NPWG comes from an internal mapping table constructed and kept on
> DRAM on a drive in units of an IU size [0], and so not aligning to the IU just
> causes having to work with entries in the table rather than just one, and also
> incurs a read-modify-write. Contrary to the logical block size, a write below
> NPWG but respecting the logical block size is allowed, it's just not optimal.
> 
> [0] https://kernelnewbies.org/KernelProjects/large-block-size#Indirection_Unit_size_increases
> 
>   Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-11 19:07             ` Luis Chamberlain
@ 2024-04-12  8:15               ` John Garry
  2024-04-12 18:28                 ` Luis Chamberlain
  0 siblings, 1 reply; 40+ messages in thread
From: John Garry @ 2024-04-12  8:15 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Matthew Wilcox, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On 11/04/2024 20:07, Luis Chamberlain wrote:
>> So if you
>> have a 4K PBS and 512B LBS, then WRITE_ATOMIC_16 would be required to write
>> 16KB atomically.
> Ugh. Why does SCSI require a special command for this?

The actual question from others is why does NVMe not have a dedicated 
command for this, like:
https://lore.kernel.org/linux-nvme/[email protected]/

It's a data integrity feature, and we want to know if it works properly.

> 
> Now we know what would be needed to bump the physical block size, it is
> certainly a different feature, however I think it would be good to
> evaluate that world too. For NVMe we don't have such special write
> requirements.
> 
> I put together this kludge with the last patches series of LBS + the
> bdev cache aops stuff (which as I said before needs an alternative
> solution) and just the scsi atomics topology + physical block size
> change to easily experiment to see what would break:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=20240408-lbs-scsi-kludge
> 
> Using a larger sector size works but it does not use the special scsi
> atomic write.

If you are using the scsi_debug driver, then you can just pass the desired 
physblk_exp and sector_size args - by default these give a 512B logical and 
physical block size, while e.g. sector_size=512 physblk_exp=3 gives a 4K PBS 
with a 512B LBS. Then you don't need to bother with the sd.c atomic stuff, 
which I think is what you want.

> 
>>>> To me, O_ATOMIC would be required for buffered atomic writes IO, as we want
>>>> a fixed-sized IO, so that would mean no mixing of atomic and non-atomic IO.
>>> Would using the same min and max order for the inode work instead?
>> Maybe, I would need to check further.
> I'd be happy to help review too.

Yeah, I'm starting to think that the same min and max order for the inode 
would make life easier, as we don't need to deal with the scenario of an 
atomic write to a folio > atomic write size.

Thanks,
John


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-12  8:15               ` John Garry
@ 2024-04-12 18:28                 ` Luis Chamberlain
  0 siblings, 0 replies; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-12 18:28 UTC (permalink / raw)
  To: John Garry, Dan Helmick
  Cc: Matthew Wilcox, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

+ Dan,

On Fri, Apr 12, 2024 at 09:15:57AM +0100, John Garry wrote:
> On 11/04/2024 20:07, Luis Chamberlain wrote:
> > > So if you
> > > have a 4K PBS and 512B LBS, then WRITE_ATOMIC_16 would be required to write
> > > 16KB atomically.
> > Ugh. Why does SCSI require a special command for this?
> 
> The actual question from others is why does NVMe not have a dedicated
> command for this, like:
> https://lore.kernel.org/linux-nvme/[email protected]/

Because we don't really need it for the hardware that supports it, if the
host does the respective topology checks. For instance, the respective
checks for NVMe are that atomic writes respect AWUN as the cap, since the
drive can already go up to AWUN, and the power-fail limit is implicit in
checking AWUPF / NAWUPF. The alignment constraints can be dealt with by
the host software.
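
A minimal sketch of those host-side checks (the helper is hypothetical;
the 0's based spec fields mean a value of N covers N + 1 logical blocks):

	/* slba = starting LBA, nlb = number of logical blocks to write */
	static bool nvme_write_is_untorn(u64 slba, u32 nlb,
					 u32 nawupf, u32 nabspf)
	{
		u32 boundary = nabspf + 1;

		/* Must fit within the power-fail atomic write unit. */
		if (nlb > nawupf + 1)
			return false;

		/* Must not straddle an atomic boundary, when one is set. */
		if (nabspf && (slba % boundary) + nlb > boundary)
			return false;

		return true;
	}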

> It's a data integrity feature, and we want to know if it works properly.

For drives which already support this, integrity is already ensured for
you. An NVMe-specific atomic write command could be useful for existing
drives for other reasons or future uses, but it's not a requirement for
the existing use cases if the NVMe alignment / atomic limits are respected
by the host.

> > Now we know what would be needed to bump the physical block size, it is
> > certainly a different feature, however I think it would be good to
> > evaluate that world too. For NVMe we don't have such special write
> > requirements.
> > 
> > I put together this kludge with the last patches series of LBS + the
> > bdev cache aops stuff (which as I said before needs an alternative
> > solution) and just the scsi atomics topology + physical block size
> > change to easily experiment to see what would break:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=20240408-lbs-scsi-kludge
> > 
> > Using a larger sector size works but it does not use the special scsi
> > atomic write.
> 
> If you are using scsi_debug driver, then you can just pass the desired
> physblk_exp and sector_size args - they both default to 512B. Then you don't
> need bother with sd.c atomic stuff, which I think is what you want.
> 
> > 
> > > > > To me, O_ATOMIC would be required for buffered atomic writes IO, as we want
> > > > > a fixed-sized IO, so that would mean no mixing of atomic and non-atomic IO.
> > > > Would using the same min and max order for the inode work instead?
> > > Maybe, I would need to check further.
> > I'd be happy to help review too.
> 
> Yeah, I'm starting to think that min and max inode would make life easier,
> as we don't need to deal with the scenario of an atomic write to a folio >
> atomic write size.

And alignment constraints could be dealt with as well.

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-10  4:05           ` Matthew Wilcox
  2024-04-10  6:20             ` Hannes Reinecke
@ 2024-04-14 20:50             ` Luis Chamberlain
  2024-04-15 21:18               ` Matthew Wilcox
  1 sibling, 1 reply; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-14 20:50 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Wed, Apr 10, 2024 at 05:05:20AM +0100, Matthew Wilcox wrote:
> On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
> > On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> > > On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > > > The thing is that there's no requirement for an interface as complex as
> > > > > > the one you're proposing here.  I've talked to a few database people
> > > > > > and all they want is to increase the untorn write boundary from "one
> > > > > > disc block" to one database block, typically 8kB or 16kB.
> > > > > > 
> > > > > > So they would be quite happy with a much simpler interface where they
> > > > > > set the inode block size at inode creation time,
> > > > > We want to support untorn writes for bdev file operations - how can we set
> > > > > the inode block size there? Currently it is based on logical block size.
> > > > ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
> > > > think we can remove that limitation with the bs>PS patches.
> > 
> > I can say a bit more on this, as I explored that. Essentially Matthew,
> > yes, I got that to work but it requires a set of different patches. We have
> > what we tried and then based on feedback from Chinner we have a
> > direction on what to try next. The last effort on that front was having the
> > iomap aops for bdev be used and lifting the PAGE_SIZE limit up to the
> > page cache limits. The crux on that front was that we end up requiring
> > disabling BUFFER_HEAD and that is pretty limiting, so my old
> > implementation had dynamic aops so to let us use the buffer-head aops
> > only when using filesystems which require it and use iomap aops
> > otherwise. But as Chinner noted we learned through the DAX experience
> > that's not a route we want to again try, so the real solution is to
> > extend iomap bdev aops code with buffer-head compatibility.
> 
> Have you tried just using the buffer_head code?  I think you heard bad
> advice at last LSFMM.  Since then I've landed a bunch of patches which
> remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> I haven't tried it, it might work.  And it might be easier to make work
> than adding more BH hacks to the iomap code.

I have considered it but the issue is that *may work* isn't good enough and
without a test plan for buffer-heads on a real filesystem this may never
suffice. Addressing a buffer-head iomap compat for the block device cache
is less error prone here for now.

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-14 20:50             ` Luis Chamberlain
@ 2024-04-15 21:18               ` Matthew Wilcox
  2024-04-16 21:11                 ` Luis Chamberlain
  0 siblings, 1 reply; 40+ messages in thread
From: Matthew Wilcox @ 2024-04-15 21:18 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Sun, Apr 14, 2024 at 01:50:16PM -0700, Luis Chamberlain wrote:
> On Wed, Apr 10, 2024 at 05:05:20AM +0100, Matthew Wilcox wrote:
> > Have you tried just using the buffer_head code?  I think you heard bad
> > advice at last LSFMM.  Since then I've landed a bunch of patches which
> > remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> > I haven't tried it, it might work.  And it might be easier to make work
> > than adding more BH hacks to the iomap code.
> 
> I have considered it but the issue is that *may work* isn't good enough and
> without a test plan for buffer-heads on a real filesystem this may never
> suffice. Addressing a buffer-head iomap compat for the block device cache
> is less error prone here for now.

Is it really your position that testing the code I already wrote is
harder than writing and testing some entirely new code?  Surely the
tests are the same for both.

Besides, we aren't talking about a filesystem on top of the bdev here.
We're talking about accessing the bdev's page cache directly.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v6 00/10] block atomic writes
  2024-04-15 21:18               ` Matthew Wilcox
@ 2024-04-16 21:11                 ` Luis Chamberlain
  0 siblings, 0 replies; 40+ messages in thread
From: Luis Chamberlain @ 2024-04-16 21:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: John Garry, Pankaj Raghav, Daniel Gomez, Javier González,
	axboe, kbusch, hch, sagi, jejb, martin.petersen, djwong, viro,
	brauner, dchinner, jack, linux-block, linux-kernel, linux-nvme,
	linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
	linux-btrfs, io-uring, nilay, ritesh.list

On Mon, Apr 15, 2024 at 10:18:30PM +0100, Matthew Wilcox wrote:
> On Sun, Apr 14, 2024 at 01:50:16PM -0700, Luis Chamberlain wrote:
> > On Wed, Apr 10, 2024 at 05:05:20AM +0100, Matthew Wilcox wrote:
> > > Have you tried just using the buffer_head code?  I think you heard bad
> > > advice at last LSFMM.  Since then I've landed a bunch of patches which
> > > remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> > > I haven't tried it, it might work.  And it might be easier to make work
> > > than adding more BH hacks to the iomap code.
> > 
> > I have considered it but the issue is that *may work* isn't good enough and
> > without a test plan for buffer-heads on a real filesystem this may never
> > suffice. Addressing a buffer-head iomap compat for the block device cache
> > is less error prone here for now.
> 
> Is it really your position that testing the code I already wrote is
> harder than writing and testing some entirely new code?  Surely the
> tests are the same for both.

The compat code would only allow large folios for iomap, and use
buffer-heads for non-large folios, so nothing much would change except
a special wrapper.

> Besides, we aren't talking about a filesystem on top of the bdev here.
> We're talking about accessing the bdev's page cache directly.

Sure, but my concern was the lack of testing for buffer-head large
folios. While for iomap we'd at least have done the ton of work to
stress test large folios while testing XFS with it.

While the block device cache is not a proper full blown filesystem,
the point is that since no filesystem has been tested with buffer-heads
with large folios, it's a possible minefield waiting to explode due to
lack of testing.

Is writing a proper test plan for the block device cache code with
buffer-heads with large folios less work than writing the compat code
for the block device cache? I concede that I'm not sure.

I'm happy to try it out to see what blows up.

  Luis

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2024-04-16 21:11 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-26 13:38 [PATCH v6 00/10] block atomic writes John Garry
2024-03-26 13:38 ` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
2024-04-10 22:58   ` Luis Chamberlain
2024-03-26 13:38 ` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO() John Garry
2024-04-10 22:53   ` Luis Chamberlain
2024-04-11  8:06     ` John Garry
2024-03-26 13:38 ` [PATCH v6 03/10] fs: Initial atomic write support John Garry
2024-03-26 13:38 ` [PATCH v6 04/10] fs: Add initial atomic write support info to statx John Garry
2024-03-26 13:38 ` [PATCH v6 05/10] block: Add core atomic write support John Garry
2024-03-26 17:11   ` Randy Dunlap
2024-04-10 23:34     ` Luis Chamberlain
2024-04-11  8:15       ` John Garry
2024-03-26 13:38 ` [PATCH v6 06/10] block: Add atomic write support for statx John Garry
2024-03-26 13:38 ` [PATCH v6 07/10] block: Add fops atomic write support John Garry
2024-03-26 13:38 ` [PATCH v6 08/10] scsi: sd: Atomic " John Garry
2024-03-26 13:38 ` [PATCH v6 09/10] scsi: scsi_debug: " John Garry
2024-03-26 13:38 ` [PATCH v6 10/10] nvme: " John Garry
2024-04-11  0:29   ` Luis Chamberlain
2024-04-11  8:59     ` John Garry
2024-04-11 16:22       ` Luis Chamberlain
2024-04-11 23:32         ` Dan Helmick
2024-03-27  3:50 ` [PATCH v6 00/10] block atomic writes Matthew Wilcox
2024-03-27 13:37   ` John Garry
2024-04-04 16:48     ` Matthew Wilcox
2024-04-05 10:06       ` John Garry
2024-04-08 17:50         ` Luis Chamberlain
2024-04-10  4:05           ` Matthew Wilcox
2024-04-10  6:20             ` Hannes Reinecke
2024-04-11  0:38               ` Luis Chamberlain
2024-04-14 20:50             ` Luis Chamberlain
2024-04-15 21:18               ` Matthew Wilcox
2024-04-16 21:11                 ` Luis Chamberlain
2024-04-10  8:34           ` John Garry
2024-04-11 19:07             ` Luis Chamberlain
2024-04-12  8:15               ` John Garry
2024-04-12 18:28                 ` Luis Chamberlain
2024-03-27 20:31   ` Dave Chinner
2024-04-05 10:20     ` Kent Overstreet
2024-04-05 10:55       ` John Garry
2024-04-05  6:14   ` Kent Overstreet

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox