* [PATCH v8 01/10] block: Pass blk_queue_get_max_sectors() a request pointer
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 02/10] block: Generalize chunk_sectors support as boundary support John Garry
` (9 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry, Luis Chamberlain
Currently blk_queue_get_max_sectors() is passed an enum req_op. In future
the value returned from blk_queue_get_max_sectors() may depend on certain
request flags, so pass a request pointer instead.
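As an illustrative sketch (not part of this patch) of where this leads: a
request-flag-dependent limit. REQ_ATOMIC and atomic_write_max_sectors are
only introduced later in this series (patch 05/10), so the branch below is
hypothetical at this point:

static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
{
	struct request_queue *q = rq->q;
	enum req_op op = req_op(rq);

	/* hypothetical: limit depends on a request flag, not only the op */
	if (rq->cmd_flags & REQ_ATOMIC)
		return q->limits.atomic_write_max_sectors;

	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
		return min(q->limits.max_discard_sectors,
			   UINT_MAX >> SECTOR_SHIFT);

	/* other per-op cases elided */
	return q->limits.max_sectors;
}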
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
Reviewed-by: Luis Chamberlain <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
block/blk-merge.c | 3 ++-
block/blk-mq.c | 2 +-
block/blk.h | 6 ++++--
3 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8534c35e0497..8957e08e020c 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -593,7 +593,8 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
if (blk_rq_is_passthrough(rq))
return q->limits.max_hw_sectors;
- max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+ max_sectors = blk_queue_get_max_sectors(rq);
+
if (!q->limits.chunk_sectors ||
req_op(rq) == REQ_OP_DISCARD ||
req_op(rq) == REQ_OP_SECURE_ERASE)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3b4df8e5ac9e..e690b9c6afb7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3041,7 +3041,7 @@ void blk_mq_submit_bio(struct bio *bio)
blk_status_t blk_insert_cloned_request(struct request *rq)
{
struct request_queue *q = rq->q;
- unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+ unsigned int max_sectors = blk_queue_get_max_sectors(rq);
unsigned int max_segments = blk_rq_get_max_segments(rq);
blk_status_t ret;
diff --git a/block/blk.h b/block/blk.h
index 189bc25beb50..75c1683fc320 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -181,9 +181,11 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
return queue_max_segments(rq->q);
}
-static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
- enum req_op op)
+static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
{
+ struct request_queue *q = rq->q;
+ enum req_op op = req_op(rq);
+
if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
return min(q->limits.max_discard_sectors,
UINT_MAX >> SECTOR_SHIFT);
--
2.31.1
* [PATCH v8 02/10] block: Generalize chunk_sectors support as boundary support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
2024-06-10 10:43 ` [PATCH v8 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 03/10] fs: Initial atomic write support John Garry
` (8 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry
The purpose of the chunk_sectors limit is to ensure that a mergeable request
fits within the boundary of the chunk_sectors value.
Such a feature will be useful for other request_queue boundary limits, so
generalize the chunk_sectors merge code.
This idea was proposed by Hannes Reinecke.
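As a worked example of the renamed helper (values purely illustrative):
with a 256-sector (128 KiB) boundary, an I/O starting at sector 1000 has
256 - (1000 & 255) = 256 - 232 = 24 sectors left before the next boundary,
so a merge may not grow the request past that point:

/* illustrative use of the renamed helper */
unsigned int left = blk_boundary_sectors_left(1000, 256);	/* == 24 */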
Signed-off-by: John Garry <[email protected]>
---
block/blk-merge.c | 20 ++++++++++++++------
drivers/md/dm.c | 2 +-
include/linux/blkdev.h | 13 +++++++------
3 files changed, 22 insertions(+), 13 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8957e08e020c..68969e27c831 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -154,6 +154,11 @@ static struct bio *bio_split_write_zeroes(struct bio *bio,
return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
}
+static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim)
+{
+ return lim->chunk_sectors;
+}
+
/*
* Return the maximum number of sectors from the start of a bio that may be
* submitted as a single request to a block device. If enough sectors remain,
@@ -167,12 +172,13 @@ static inline unsigned get_max_io_size(struct bio *bio,
{
unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
+ unsigned boundary_sectors = blk_boundary_sectors(lim);
unsigned max_sectors = lim->max_sectors, start, end;
- if (lim->chunk_sectors) {
+ if (boundary_sectors) {
max_sectors = min(max_sectors,
- blk_chunk_sectors_left(bio->bi_iter.bi_sector,
- lim->chunk_sectors));
+ blk_boundary_sectors_left(bio->bi_iter.bi_sector,
+ boundary_sectors));
}
start = bio->bi_iter.bi_sector & (pbs - 1);
@@ -588,19 +594,21 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
sector_t offset)
{
struct request_queue *q = rq->q;
- unsigned int max_sectors;
+ struct queue_limits *lim = &q->limits;
+ unsigned int max_sectors, boundary_sectors;
if (blk_rq_is_passthrough(rq))
return q->limits.max_hw_sectors;
+ boundary_sectors = blk_boundary_sectors(lim);
max_sectors = blk_queue_get_max_sectors(rq);
- if (!q->limits.chunk_sectors ||
+ if (!boundary_sectors ||
req_op(rq) == REQ_OP_DISCARD ||
req_op(rq) == REQ_OP_SECURE_ERASE)
return max_sectors;
return min(max_sectors,
- blk_chunk_sectors_left(offset, q->limits.chunk_sectors));
+ blk_boundary_sectors_left(offset, boundary_sectors));
}
static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 13037d6a6f62..b648253c2300 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1188,7 +1188,7 @@ static sector_t __max_io_len(struct dm_target *ti, sector_t sector,
return len;
return min_t(sector_t, len,
min(max_sectors ? : queue_max_sectors(ti->table->md->queue),
- blk_chunk_sectors_left(target_offset, max_granularity)));
+ blk_boundary_sectors_left(target_offset, max_granularity)));
}
static inline sector_t max_io_len(struct dm_target *ti, sector_t sector)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ac8e0cb2353a..ddff90766f9f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -866,14 +866,15 @@ static inline bool bio_straddles_zones(struct bio *bio)
}
/*
- * Return how much of the chunk is left to be used for I/O at a given offset.
+ * Return how much within the boundary is left to be used for I/O at a given
+ * offset.
*/
-static inline unsigned int blk_chunk_sectors_left(sector_t offset,
- unsigned int chunk_sectors)
+static inline unsigned int blk_boundary_sectors_left(sector_t offset,
+ unsigned int boundary_sectors)
{
- if (unlikely(!is_power_of_2(chunk_sectors)))
- return chunk_sectors - sector_div(offset, chunk_sectors);
- return chunk_sectors - (offset & (chunk_sectors - 1));
+ if (unlikely(!is_power_of_2(boundary_sectors)))
+ return boundary_sectors - sector_div(offset, boundary_sectors);
+ return boundary_sectors - (offset & (boundary_sectors - 1));
}
/**
--
2.31.1
* [PATCH v8 03/10] fs: Initial atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
2024-06-10 10:43 ` [PATCH v8 01/10] block: Pass blk_queue_get_max_sectors() a request pointer John Garry
2024-06-10 10:43 ` [PATCH v8 02/10] block: Generalize chunk_sectors support as boundary support John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-12 20:51 ` Darrick J. Wong
2024-06-10 10:43 ` [PATCH v8 04/10] fs: Add initial atomic write support info to statx John Garry
` (7 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Prasad Singamsetty, John Garry
From: Prasad Singamsetty <[email protected]>
An atomic write is a write issued with torn-write protection, meaning
that in the event of a power failure or any other hardware failure, either
all or none of the data from the write will be stored, never a mix of old
and new data.
Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
write is to be issued with torn-write prevention, according to special
alignment and length rules.
For any syscall interface utilizing struct iocb, add IOCB_ATOMIC to the
iocb->ki_flags field to indicate the same.
A call to statx will give the relevant atomic write info for a file:
- atomic_write_unit_min
- atomic_write_unit_max
- atomic_write_segments_max
Both min and max values must be a power-of-2.
Applications can avail of the atomic write feature by ensuring that the
total length of a write is a power-of-2 in size and also sized between
atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
must ensure that the write is at a naturally-aligned offset in the file
with respect to the total write length. The value in
atomic_write_segments_max indicates the upper limit for IOV_ITER iovcnt.
Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
flag set will have RWF_ATOMIC rejected and not just ignored.
Add a type argument to kiocb_set_rw_flags() to allow reads which have
RWF_ATOMIC set to be rejected.
Helper function generic_atomic_write_valid() can be used by FSes to verify
compliant writes. There we check that the iov_iter type is ubuf, which
implies iovcnt == 1 for pwritev2() - an initial restriction matching
atomic_write_segments_max. Initially the only user will be the bdev file
operations write handler. We will rely on the block BIO submission path to
ensure write sizes are compliant for the bdev, so we don't need to check
atomic write sizes yet.
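A hedged userspace sketch of the interface (assumes a bdev opened with
O_DIRECT, a DIO-aligned buffer, and that 8 KiB lies within the advertised
unit_min/unit_max; RWF_ATOMIC is the uapi value added by this patch;
illustrative only):

#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/uio.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC	0x00000040	/* from the uapi change above */
#endif

/* issue one 8 KiB torn-write-protected write at offset 0 */
static ssize_t demo_atomic_write(int fd)
{
	struct iovec iov;
	ssize_t ret;
	void *buf;

	if (posix_memalign(&buf, 4096, 8192))	/* DIO-aligned buffer */
		return -1;
	iov.iov_base = buf;
	iov.iov_len = 8192;	/* power-of-2 total length */

	/* offset 0 is naturally aligned wrt the 8192-byte length */
	ret = pwritev2(fd, &iov, 1, 0, RWF_ATOMIC);
	free(buf);
	return ret;
}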
Signed-off-by: Prasad Singamsetty <[email protected]>
jpg: merge into single patch and much rewrite
Signed-off-by: John Garry <[email protected]>
---
fs/aio.c | 8 ++++----
fs/btrfs/ioctl.c | 2 +-
fs/read_write.c | 18 +++++++++++++++++-
include/linux/fs.h | 17 +++++++++++++++--
include/uapi/linux/fs.h | 5 ++++-
io_uring/rw.c | 9 ++++-----
6 files changed, 45 insertions(+), 14 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 57c9f7c077e6..93ef59d358b3 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1516,7 +1516,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
iocb_put(iocb);
}
-static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int rw_type)
{
int ret;
@@ -1542,7 +1542,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
} else
req->ki_ioprio = get_current_ioprio();
- ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+ ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, rw_type);
if (unlikely(ret))
return ret;
@@ -1594,7 +1594,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb,
struct file *file;
int ret;
- ret = aio_prep_rw(req, iocb);
+ ret = aio_prep_rw(req, iocb, READ);
if (ret)
return ret;
file = req->ki_filp;
@@ -1621,7 +1621,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
struct file *file;
int ret;
- ret = aio_prep_rw(req, iocb);
+ ret = aio_prep_rw(req, iocb, WRITE);
if (ret)
return ret;
file = req->ki_filp;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index efd5d6e9589e..6ad524b894fc 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4627,7 +4627,7 @@ static int btrfs_ioctl_encoded_write(struct file *file, void __user *argp, bool
goto out_iov;
init_sync_kiocb(&kiocb, file);
- ret = kiocb_set_rw_flags(&kiocb, 0);
+ ret = kiocb_set_rw_flags(&kiocb, 0, WRITE);
if (ret)
goto out_iov;
kiocb.ki_pos = pos;
diff --git a/fs/read_write.c b/fs/read_write.c
index ef6339391351..285b0f5a9a9c 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -730,7 +730,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter,
ssize_t ret;
init_sync_kiocb(&kiocb, filp);
- ret = kiocb_set_rw_flags(&kiocb, flags);
+ ret = kiocb_set_rw_flags(&kiocb, flags, type);
if (ret)
return ret;
kiocb.ki_pos = (ppos ? *ppos : 0);
@@ -1736,3 +1736,19 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
return 0;
}
+
+bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos)
+{
+ size_t len = iov_iter_count(iter);
+
+ if (!iter_is_ubuf(iter))
+ return false;
+
+ if (!is_power_of_2(len))
+ return false;
+
+ if (!IS_ALIGNED(pos, len))
+ return false;
+
+ return true;
+}
\ No newline at end of file
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 0283cf366c2a..e049414bef7d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -125,8 +125,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
#define FMODE_EXEC ((__force fmode_t)(1 << 5))
/* File writes are restricted (block device specific) */
#define FMODE_WRITE_RESTRICTED ((__force fmode_t)(1 << 6))
+/* File supports atomic writes */
+#define FMODE_CAN_ATOMIC_WRITE ((__force fmode_t)(1 << 7))
-/* FMODE_* bits 7 to 8 */
+/* FMODE_* bit 8 */
/* 32bit hashes as llseek() offset (for directories) */
#define FMODE_32BITHASH ((__force fmode_t)(1 << 9))
@@ -317,6 +319,7 @@ struct readahead_control;
#define IOCB_SYNC (__force int) RWF_SYNC
#define IOCB_NOWAIT (__force int) RWF_NOWAIT
#define IOCB_APPEND (__force int) RWF_APPEND
+#define IOCB_ATOMIC (__force int) RWF_ATOMIC
/* non-RWF related bits - start at 16 */
#define IOCB_EVENTFD (1 << 16)
@@ -351,6 +354,7 @@ struct readahead_control;
{ IOCB_SYNC, "SYNC" }, \
{ IOCB_NOWAIT, "NOWAIT" }, \
{ IOCB_APPEND, "APPEND" }, \
+ { IOCB_ATOMIC, "ATOMIC"}, \
{ IOCB_EVENTFD, "EVENTFD"}, \
{ IOCB_DIRECT, "DIRECT" }, \
{ IOCB_WRITE, "WRITE" }, \
@@ -3403,7 +3407,8 @@ static inline int iocb_flags(struct file *file)
return res;
}
-static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
+static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags,
+ int rw_type)
{
int kiocb_flags = 0;
@@ -3422,6 +3427,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
return -EOPNOTSUPP;
kiocb_flags |= IOCB_NOIO;
}
+ if (flags & RWF_ATOMIC) {
+ if (rw_type != WRITE)
+ return -EOPNOTSUPP;
+ if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
+ return -EOPNOTSUPP;
+ }
kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
if (flags & RWF_SYNC)
kiocb_flags |= IOCB_DSYNC;
@@ -3613,4 +3624,6 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
int advice);
+bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
+
#endif /* _LINUX_FS_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 45e4e64fd664..191a7e88a8ab 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -329,9 +329,12 @@ typedef int __bitwise __kernel_rwf_t;
/* per-IO negation of O_APPEND */
#define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020)
+/* Atomic Write */
+#define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040)
+
/* mask of flags supported by the kernel */
#define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
- RWF_APPEND | RWF_NOAPPEND)
+ RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC)
/* Pagemap ioctl */
#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 1a2128459cb4..c004d21e2f12 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -772,7 +772,7 @@ static bool need_complete_io(struct io_kiocb *req)
S_ISBLK(file_inode(req->file)->i_mode);
}
-static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
+static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
{
struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
struct kiocb *kiocb = &rw->kiocb;
@@ -787,7 +787,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
req->flags |= io_file_get_flags(file);
kiocb->ki_flags = file->f_iocb_flags;
- ret = kiocb_set_rw_flags(kiocb, rw->flags);
+ ret = kiocb_set_rw_flags(kiocb, rw->flags, rw_type);
if (unlikely(ret))
return ret;
kiocb->ki_flags |= IOCB_ALLOC_CACHE;
@@ -832,8 +832,7 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(ret < 0))
return ret;
}
-
- ret = io_rw_init_file(req, FMODE_READ);
+ ret = io_rw_init_file(req, FMODE_READ, READ);
if (unlikely(ret))
return ret;
req->cqe.res = iov_iter_count(&io->iter);
@@ -1013,7 +1012,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
ssize_t ret, ret2;
loff_t *ppos;
- ret = io_rw_init_file(req, FMODE_WRITE);
+ ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
if (unlikely(ret))
return ret;
req->cqe.res = iov_iter_count(&io->iter);
--
2.31.1
* Re: [PATCH v8 03/10] fs: Initial atomic write support
2024-06-10 10:43 ` [PATCH v8 03/10] fs: Initial atomic write support John Garry
@ 2024-06-12 20:51 ` Darrick J. Wong
0 siblings, 0 replies; 27+ messages in thread
From: Darrick J. Wong @ 2024-06-12 20:51 UTC (permalink / raw)
To: John Garry
Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare, Prasad Singamsetty
On Mon, Jun 10, 2024 at 10:43:22AM +0000, John Garry wrote:
> From: Prasad Singamsetty <[email protected]>
>
> An atomic write is a write issued with torn-write protection, meaning
> that in the event of a power failure or any other hardware failure, either
> all or none of the data from the write will be stored, never a mix of old
> and new data.
>
> Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the
> write is to be issued with torn-write prevention, according to special
> alignment and length rules.
>
> For any syscall interface utilizing struct iocb, add IOCB_ATOMIC to the
> iocb->ki_flags field to indicate the same.
>
> A call to statx will give the relevant atomic write info for a file:
> - atomic_write_unit_min
> - atomic_write_unit_max
> - atomic_write_segments_max
>
> Both min and max values must be a power-of-2.
>
> Applications can avail of the atomic write feature by ensuring that the
> total length of a write is a power-of-2 in size and also sized between
> atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications
> must ensure that the write is at a naturally-aligned offset in the file
> with respect to the total write length. The value in
> atomic_write_segments_max indicates the upper limit for IOV_ITER iovcnt.
>
> Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the
> flag set will have RWF_ATOMIC rejected and not just ignored.
>
> Add a type argument to kiocb_set_rw_flags() to allow reads which have
> RWF_ATOMIC set to be rejected.
>
> Helper function generic_atomic_write_valid() can be used by FSes to verify
> compliant writes. There we check that the iov_iter type is ubuf, which
> implies iovcnt == 1 for pwritev2() - an initial restriction matching
> atomic_write_segments_max. Initially the only user will be the bdev file
> operations write handler. We will rely on the block BIO submission path to
> ensure write sizes are compliant for the bdev, so we don't need to check
> atomic write sizes yet.
>
> Signed-off-by: Prasad Singamsetty <[email protected]>
> jpg: merge into single patch and much rewrite
> Signed-off-by: John Garry <[email protected]>
Seems fine to me, though clearly others have had much stronger opinions
in the past so:
Acked-by: Darrick J. Wong <[email protected]>
--D
> ---
> fs/aio.c | 8 ++++----
> fs/btrfs/ioctl.c | 2 +-
> fs/read_write.c | 18 +++++++++++++++++-
> include/linux/fs.h | 17 +++++++++++++++--
> include/uapi/linux/fs.h | 5 ++++-
> io_uring/rw.c | 9 ++++-----
> 6 files changed, 45 insertions(+), 14 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index 57c9f7c077e6..93ef59d358b3 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1516,7 +1516,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res)
> iocb_put(iocb);
> }
>
> -static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
> +static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int rw_type)
> {
> int ret;
>
> @@ -1542,7 +1542,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
> } else
> req->ki_ioprio = get_current_ioprio();
>
> - ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
> + ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, rw_type);
> if (unlikely(ret))
> return ret;
>
> @@ -1594,7 +1594,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb,
> struct file *file;
> int ret;
>
> - ret = aio_prep_rw(req, iocb);
> + ret = aio_prep_rw(req, iocb, READ);
> if (ret)
> return ret;
> file = req->ki_filp;
> @@ -1621,7 +1621,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
> struct file *file;
> int ret;
>
> - ret = aio_prep_rw(req, iocb);
> + ret = aio_prep_rw(req, iocb, WRITE);
> if (ret)
> return ret;
> file = req->ki_filp;
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index efd5d6e9589e..6ad524b894fc 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -4627,7 +4627,7 @@ static int btrfs_ioctl_encoded_write(struct file *file, void __user *argp, bool
> goto out_iov;
>
> init_sync_kiocb(&kiocb, file);
> - ret = kiocb_set_rw_flags(&kiocb, 0);
> + ret = kiocb_set_rw_flags(&kiocb, 0, WRITE);
> if (ret)
> goto out_iov;
> kiocb.ki_pos = pos;
> diff --git a/fs/read_write.c b/fs/read_write.c
> index ef6339391351..285b0f5a9a9c 100644
> --- a/fs/read_write.c
> +++ b/fs/read_write.c
> @@ -730,7 +730,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter,
> ssize_t ret;
>
> init_sync_kiocb(&kiocb, filp);
> - ret = kiocb_set_rw_flags(&kiocb, flags);
> + ret = kiocb_set_rw_flags(&kiocb, flags, type);
> if (ret)
> return ret;
> kiocb.ki_pos = (ppos ? *ppos : 0);
> @@ -1736,3 +1736,19 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
>
> return 0;
> }
> +
> +bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos)
> +{
> + size_t len = iov_iter_count(iter);
> +
> + if (!iter_is_ubuf(iter))
> + return false;
> +
> + if (!is_power_of_2(len))
> + return false;
> +
> + if (!IS_ALIGNED(pos, len))
> + return false;
> +
> + return true;
> +}
> \ No newline at end of file
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 0283cf366c2a..e049414bef7d 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -125,8 +125,10 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
> #define FMODE_EXEC ((__force fmode_t)(1 << 5))
> /* File writes are restricted (block device specific) */
> #define FMODE_WRITE_RESTRICTED ((__force fmode_t)(1 << 6))
> +/* File supports atomic writes */
> +#define FMODE_CAN_ATOMIC_WRITE ((__force fmode_t)(1 << 7))
>
> -/* FMODE_* bits 7 to 8 */
> +/* FMODE_* bit 8 */
>
> /* 32bit hashes as llseek() offset (for directories) */
> #define FMODE_32BITHASH ((__force fmode_t)(1 << 9))
> @@ -317,6 +319,7 @@ struct readahead_control;
> #define IOCB_SYNC (__force int) RWF_SYNC
> #define IOCB_NOWAIT (__force int) RWF_NOWAIT
> #define IOCB_APPEND (__force int) RWF_APPEND
> +#define IOCB_ATOMIC (__force int) RWF_ATOMIC
>
> /* non-RWF related bits - start at 16 */
> #define IOCB_EVENTFD (1 << 16)
> @@ -351,6 +354,7 @@ struct readahead_control;
> { IOCB_SYNC, "SYNC" }, \
> { IOCB_NOWAIT, "NOWAIT" }, \
> { IOCB_APPEND, "APPEND" }, \
> + { IOCB_ATOMIC, "ATOMIC"}, \
> { IOCB_EVENTFD, "EVENTFD"}, \
> { IOCB_DIRECT, "DIRECT" }, \
> { IOCB_WRITE, "WRITE" }, \
> @@ -3403,7 +3407,8 @@ static inline int iocb_flags(struct file *file)
> return res;
> }
>
> -static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
> +static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags,
> + int rw_type)
> {
> int kiocb_flags = 0;
>
> @@ -3422,6 +3427,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
> return -EOPNOTSUPP;
> kiocb_flags |= IOCB_NOIO;
> }
> + if (flags & RWF_ATOMIC) {
> + if (rw_type != WRITE)
> + return -EOPNOTSUPP;
> + if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE))
> + return -EOPNOTSUPP;
> + }
> kiocb_flags |= (__force int) (flags & RWF_SUPPORTED);
> if (flags & RWF_SYNC)
> kiocb_flags |= IOCB_DSYNC;
> @@ -3613,4 +3624,6 @@ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
> extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
> int advice);
>
> +bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
> +
> #endif /* _LINUX_FS_H */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 45e4e64fd664..191a7e88a8ab 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -329,9 +329,12 @@ typedef int __bitwise __kernel_rwf_t;
> /* per-IO negation of O_APPEND */
> #define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020)
>
> +/* Atomic Write */
> +#define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040)
> +
> /* mask of flags supported by the kernel */
> #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
> - RWF_APPEND | RWF_NOAPPEND)
> + RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC)
>
> /* Pagemap ioctl */
> #define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index 1a2128459cb4..c004d21e2f12 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -772,7 +772,7 @@ static bool need_complete_io(struct io_kiocb *req)
> S_ISBLK(file_inode(req->file)->i_mode);
> }
>
> -static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
> +static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
> {
> struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
> struct kiocb *kiocb = &rw->kiocb;
> @@ -787,7 +787,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
> req->flags |= io_file_get_flags(file);
>
> kiocb->ki_flags = file->f_iocb_flags;
> - ret = kiocb_set_rw_flags(kiocb, rw->flags);
> + ret = kiocb_set_rw_flags(kiocb, rw->flags, rw_type);
> if (unlikely(ret))
> return ret;
> kiocb->ki_flags |= IOCB_ALLOC_CACHE;
> @@ -832,8 +832,7 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
> if (unlikely(ret < 0))
> return ret;
> }
> -
> - ret = io_rw_init_file(req, FMODE_READ);
> + ret = io_rw_init_file(req, FMODE_READ, READ);
> if (unlikely(ret))
> return ret;
> req->cqe.res = iov_iter_count(&io->iter);
> @@ -1013,7 +1012,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
> ssize_t ret, ret2;
> loff_t *ppos;
>
> - ret = io_rw_init_file(req, FMODE_WRITE);
> + ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
> if (unlikely(ret))
> return ret;
> req->cqe.res = iov_iter_count(&io->iter);
> --
> 2.31.1
>
>
* [PATCH v8 04/10] fs: Add initial atomic write support info to statx
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (2 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 03/10] fs: Initial atomic write support John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-12 20:54 ` Darrick J. Wong
2024-06-10 10:43 ` [PATCH v8 05/10] block: Add core atomic write support John Garry
` (6 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Prasad Singamsetty, John Garry
From: Prasad Singamsetty <[email protected]>
Extend the statx system call to return additional info on atomic write
support for a file.
Helper function generic_fill_statx_atomic_writes() can be used by FSes to
fill in the relevant statx fields. For now atomic_write_segments_max will
always be 1; otherwise some rules would need to be imposed on iovec length
and alignment, which we do not want yet.
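A hedged userspace sketch of querying the new fields (assumes headers
updated with this patch and glibc >= 2.28 for the statx() wrapper; error
handling trimmed):

#define _GNU_SOURCE
#include <fcntl.h>	/* AT_FDCWD */
#include <stdio.h>
#include <sys/stat.h>	/* statx() */

static void show_atomic_write_limits(const char *path)
{
	struct statx stx;

	if (statx(AT_FDCWD, path, 0, STATX_WRITE_ATOMIC, &stx) != 0)
		return;
	if (stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC)
		printf("unit_min=%u unit_max=%u segments_max=%u\n",
		       stx.stx_atomic_write_unit_min,
		       stx.stx_atomic_write_unit_max,
		       stx.stx_atomic_write_segments_max);
}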
Signed-off-by: Prasad Singamsetty <[email protected]>
jpg: relocate bdev support to another patch
Signed-off-by: John Garry <[email protected]>
---
fs/stat.c | 34 ++++++++++++++++++++++++++++++++++
include/linux/fs.h | 3 +++
include/linux/stat.h | 3 +++
include/uapi/linux/stat.h | 12 ++++++++++--
4 files changed, 50 insertions(+), 2 deletions(-)
diff --git a/fs/stat.c b/fs/stat.c
index 70bd3e888cfa..72d0e6357b91 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
}
EXPORT_SYMBOL(generic_fill_statx_attr);
+/**
+ * generic_fill_statx_atomic_writes - Fill in atomic writes statx attributes
+ * @stat: Where to fill in the attribute flags
+ * @unit_min: Minimum supported atomic write length in bytes
+ * @unit_max: Maximum supported atomic write length in bytes
+ *
+ * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
+ * atomic write unit_min and unit_max values.
+ */
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+ unsigned int unit_min,
+ unsigned int unit_max)
+{
+ /* Confirm that the request type is known */
+ stat->result_mask |= STATX_WRITE_ATOMIC;
+
+ /* Confirm that the file attribute type is known */
+ stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
+
+ if (unit_min) {
+ stat->atomic_write_unit_min = unit_min;
+ stat->atomic_write_unit_max = unit_max;
+ /* Initially only allow 1x segment */
+ stat->atomic_write_segments_max = 1;
+
+ /* Confirm atomic writes are actually supported */
+ stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
+ }
+}
+EXPORT_SYMBOL_GPL(generic_fill_statx_atomic_writes);
+
/**
* vfs_getattr_nosec - getattr without security checks
* @path: file to get attributes from
@@ -659,6 +690,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
tmp.stx_dio_mem_align = stat->dio_mem_align;
tmp.stx_dio_offset_align = stat->dio_offset_align;
tmp.stx_subvol = stat->subvol;
+ tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
+ tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
+ tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e049414bef7d..db26b4a70c62 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3235,6 +3235,9 @@ extern const struct inode_operations page_symlink_inode_operations;
extern void kfree_link(void *);
void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+ unsigned int unit_min,
+ unsigned int unit_max);
extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
void __inode_add_bytes(struct inode *inode, loff_t bytes);
diff --git a/include/linux/stat.h b/include/linux/stat.h
index bf92441dbad2..3d900c86981c 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -54,6 +54,9 @@ struct kstat {
u32 dio_offset_align;
u64 change_cookie;
u64 subvol;
+ u32 atomic_write_unit_min;
+ u32 atomic_write_unit_max;
+ u32 atomic_write_segments_max;
};
/* These definitions are internal to the kernel for now. Mainly used by nfsd. */
diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
index 67626d535316..887a25286441 100644
--- a/include/uapi/linux/stat.h
+++ b/include/uapi/linux/stat.h
@@ -126,9 +126,15 @@ struct statx {
__u64 stx_mnt_id;
__u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
__u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
- __u64 stx_subvol; /* Subvolume identifier */
/* 0xa0 */
- __u64 __spare3[11]; /* Spare space for future expansion */
+ __u64 stx_subvol; /* Subvolume identifier */
+ __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */
+ __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */
+ /* 0xb0 */
+ __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */
+ __u32 __spare1[1];
+ /* 0xb8 */
+ __u64 __spare3[9]; /* Spare space for future expansion */
/* 0x100 */
};
@@ -157,6 +163,7 @@ struct statx {
#define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
#define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
#define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */
+#define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */
#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
@@ -192,6 +199,7 @@ struct statx {
#define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
#define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
#define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
#endif /* _UAPI_LINUX_STAT_H */
--
2.31.1
* Re: [PATCH v8 04/10] fs: Add initial atomic write support info to statx
2024-06-10 10:43 ` [PATCH v8 04/10] fs: Add initial atomic write support info to statx John Garry
@ 2024-06-12 20:54 ` Darrick J. Wong
2024-06-13 7:25 ` John Garry
0 siblings, 1 reply; 27+ messages in thread
From: Darrick J. Wong @ 2024-06-12 20:54 UTC (permalink / raw)
To: John Garry
Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare, Prasad Singamsetty
On Mon, Jun 10, 2024 at 10:43:23AM +0000, John Garry wrote:
> From: Prasad Singamsetty <[email protected]>
>
> Extend the statx system call to return additional info on atomic write
> support for a file.
>
> Helper function generic_fill_statx_atomic_writes() can be used by FSes to
> fill in the relevant statx fields. For now atomic_write_segments_max will
> always be 1, otherwise some rules would need to be imposed on iovec length
> and alignment, which we don't want now.
>
> Signed-off-by: Prasad Singamsetty <[email protected]>
> jpg: relocate bdev support to another patch
> Signed-off-by: John Garry <[email protected]>
Looks fine to me, assuming there's a manpage update lurking somewhere?
Reviewed-by: Darrick J. Wong <[email protected]>
--D
> ---
> fs/stat.c | 34 ++++++++++++++++++++++++++++++++++
> include/linux/fs.h | 3 +++
> include/linux/stat.h | 3 +++
> include/uapi/linux/stat.h | 12 ++++++++++--
> 4 files changed, 50 insertions(+), 2 deletions(-)
>
> diff --git a/fs/stat.c b/fs/stat.c
> index 70bd3e888cfa..72d0e6357b91 100644
> --- a/fs/stat.c
> +++ b/fs/stat.c
> @@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
> }
> EXPORT_SYMBOL(generic_fill_statx_attr);
>
> +/**
> + * generic_fill_statx_atomic_writes - Fill in atomic writes statx attributes
> + * @stat: Where to fill in the attribute flags
> + * @unit_min: Minimum supported atomic write length in bytes
> + * @unit_max: Maximum supported atomic write length in bytes
> + *
> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
> + * atomic write unit_min and unit_max values.
> + */
> +void generic_fill_statx_atomic_writes(struct kstat *stat,
> + unsigned int unit_min,
> + unsigned int unit_max)
> +{
> + /* Confirm that the request type is known */
> + stat->result_mask |= STATX_WRITE_ATOMIC;
> +
> + /* Confirm that the file attribute type is known */
> + stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
> +
> + if (unit_min) {
> + stat->atomic_write_unit_min = unit_min;
> + stat->atomic_write_unit_max = unit_max;
> + /* Initially only allow 1x segment */
> + stat->atomic_write_segments_max = 1;
> +
> + /* Confirm atomic writes are actually supported */
> + stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
> + }
> +}
> +EXPORT_SYMBOL_GPL(generic_fill_statx_atomic_writes);
> +
> /**
> * vfs_getattr_nosec - getattr without security checks
> * @path: file to get attributes from
> @@ -659,6 +690,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
> tmp.stx_dio_mem_align = stat->dio_mem_align;
> tmp.stx_dio_offset_align = stat->dio_offset_align;
> tmp.stx_subvol = stat->subvol;
> + tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
> + tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
> + tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
>
> return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
> }
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index e049414bef7d..db26b4a70c62 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -3235,6 +3235,9 @@ extern const struct inode_operations page_symlink_inode_operations;
> extern void kfree_link(void *);
> void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
> void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
> +void generic_fill_statx_atomic_writes(struct kstat *stat,
> + unsigned int unit_min,
> + unsigned int unit_max);
> extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
> extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
> void __inode_add_bytes(struct inode *inode, loff_t bytes);
> diff --git a/include/linux/stat.h b/include/linux/stat.h
> index bf92441dbad2..3d900c86981c 100644
> --- a/include/linux/stat.h
> +++ b/include/linux/stat.h
> @@ -54,6 +54,9 @@ struct kstat {
> u32 dio_offset_align;
> u64 change_cookie;
> u64 subvol;
> + u32 atomic_write_unit_min;
> + u32 atomic_write_unit_max;
> + u32 atomic_write_segments_max;
> };
>
> /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
> diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
> index 67626d535316..887a25286441 100644
> --- a/include/uapi/linux/stat.h
> +++ b/include/uapi/linux/stat.h
> @@ -126,9 +126,15 @@ struct statx {
> __u64 stx_mnt_id;
> __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
> __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
> - __u64 stx_subvol; /* Subvolume identifier */
> /* 0xa0 */
> - __u64 __spare3[11]; /* Spare space for future expansion */
> + __u64 stx_subvol; /* Subvolume identifier */
> + __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */
> + __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */
> + /* 0xb0 */
> + __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */
> + __u32 __spare1[1];
> + /* 0xb8 */
> + __u64 __spare3[9]; /* Spare space for future expansion */
> /* 0x100 */
> };
>
> @@ -157,6 +163,7 @@ struct statx {
> #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
> #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
> #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */
> +#define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */
>
> #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
>
> @@ -192,6 +199,7 @@ struct statx {
> #define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
> #define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
> #define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
> +#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
>
>
> #endif /* _UAPI_LINUX_STAT_H */
> --
> 2.31.1
>
>
* Re: [PATCH v8 04/10] fs: Add initial atomic write support info to statx
2024-06-12 20:54 ` Darrick J. Wong
@ 2024-06-13 7:25 ` John Garry
0 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-13 7:25 UTC (permalink / raw)
To: Darrick J. Wong
Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare, Prasad Singamsetty
On 12/06/2024 21:54, Darrick J. Wong wrote:
> On Mon, Jun 10, 2024 at 10:43:23AM +0000, John Garry wrote:
>> From: Prasad Singamsetty <[email protected]>
>>
>> Extend the statx system call to return additional info on atomic write
>> support for a file.
>>
>> Helper function generic_fill_statx_atomic_writes() can be used by FSes to
>> fill in the relevant statx fields. For now atomic_write_segments_max will
>> always be 1, otherwise some rules would need to be imposed on iovec length
>> and alignment, which we don't want now.
>>
>> Signed-off-by: Prasad Singamsetty <[email protected]>
>> jpg: relocate bdev support to another patch
>> Signed-off-by: John Garry <[email protected]>
>
> Looks fine to me, assuming there's a manpage update lurking somewhere?
Sure, see
https://lore.kernel.org/lkml/[email protected]/T/#m520dca97a9748de352b5a723d3155a4bb1e46456
I'll post a rebase, but the API is still the same.
> Reviewed-by: Darrick J. Wong <[email protected]>
Thanks,
John
>
> --D
>
>> ---
>> fs/stat.c | 34 ++++++++++++++++++++++++++++++++++
>> include/linux/fs.h | 3 +++
>> include/linux/stat.h | 3 +++
>> include/uapi/linux/stat.h | 12 ++++++++++--
>> 4 files changed, 50 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/stat.c b/fs/stat.c
>> index 70bd3e888cfa..72d0e6357b91 100644
>> --- a/fs/stat.c
>> +++ b/fs/stat.c
>> @@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
>> }
>> EXPORT_SYMBOL(generic_fill_statx_attr);
>>
>> +/**
>> + * generic_fill_statx_atomic_writes - Fill in atomic writes statx attributes
>> + * @stat: Where to fill in the attribute flags
>> + * @unit_min: Minimum supported atomic write length in bytes
>> + * @unit_max: Maximum supported atomic write length in bytes
>> + *
>> + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
>> + * atomic write unit_min and unit_max values.
>> + */
>> +void generic_fill_statx_atomic_writes(struct kstat *stat,
>> + unsigned int unit_min,
>> + unsigned int unit_max)
>> +{
>> + /* Confirm that the request type is known */
>> + stat->result_mask |= STATX_WRITE_ATOMIC;
>> +
>> + /* Confirm that the file attribute type is known */
>> + stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
>> +
>> + if (unit_min) {
>> + stat->atomic_write_unit_min = unit_min;
>> + stat->atomic_write_unit_max = unit_max;
>> + /* Initially only allow 1x segment */
>> + stat->atomic_write_segments_max = 1;
>> +
>> + /* Confirm atomic writes are actually supported */
>> + stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
>> + }
>> +}
>> +EXPORT_SYMBOL_GPL(generic_fill_statx_atomic_writes);
>> +
>> /**
>> * vfs_getattr_nosec - getattr without security checks
>> * @path: file to get attributes from
>> @@ -659,6 +690,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
>> tmp.stx_dio_mem_align = stat->dio_mem_align;
>> tmp.stx_dio_offset_align = stat->dio_offset_align;
>> tmp.stx_subvol = stat->subvol;
>> + tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
>> + tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
>> + tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
>>
>> return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
>> }
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index e049414bef7d..db26b4a70c62 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -3235,6 +3235,9 @@ extern const struct inode_operations page_symlink_inode_operations;
>> extern void kfree_link(void *);
>> void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
>> void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
>> +void generic_fill_statx_atomic_writes(struct kstat *stat,
>> + unsigned int unit_min,
>> + unsigned int unit_max);
>> extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
>> extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
>> void __inode_add_bytes(struct inode *inode, loff_t bytes);
>> diff --git a/include/linux/stat.h b/include/linux/stat.h
>> index bf92441dbad2..3d900c86981c 100644
>> --- a/include/linux/stat.h
>> +++ b/include/linux/stat.h
>> @@ -54,6 +54,9 @@ struct kstat {
>> u32 dio_offset_align;
>> u64 change_cookie;
>> u64 subvol;
>> + u32 atomic_write_unit_min;
>> + u32 atomic_write_unit_max;
>> + u32 atomic_write_segments_max;
>> };
>>
>> /* These definitions are internal to the kernel for now. Mainly used by nfsd. */
>> diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
>> index 67626d535316..887a25286441 100644
>> --- a/include/uapi/linux/stat.h
>> +++ b/include/uapi/linux/stat.h
>> @@ -126,9 +126,15 @@ struct statx {
>> __u64 stx_mnt_id;
>> __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
>> __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
>> - __u64 stx_subvol; /* Subvolume identifier */
>> /* 0xa0 */
>> - __u64 __spare3[11]; /* Spare space for future expansion */
>> + __u64 stx_subvol; /* Subvolume identifier */
>> + __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */
>> + __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */
>> + /* 0xb0 */
>> + __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */
>> + __u32 __spare1[1];
>> + /* 0xb8 */
>> + __u64 __spare3[9]; /* Spare space for future expansion */
>> /* 0x100 */
>> };
>>
>> @@ -157,6 +163,7 @@ struct statx {
>> #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
>> #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
>> #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */
>> +#define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */
>>
>> #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
>>
>> @@ -192,6 +199,7 @@ struct statx {
>> #define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
>> #define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
>> #define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
>> +#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
>>
>>
>> #endif /* _UAPI_LINUX_STAT_H */
>> --
>> 2.31.1
>>
>>
* [PATCH v8 05/10] block: Add core atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (3 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 04/10] fs: Add initial atomic write support info to statx John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-17 18:56 ` Keith Busch
2024-06-10 10:43 ` [PATCH v8 06/10] block: Add atomic write support for statx John Garry
` (5 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry, Himanshu Madhani
Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag
New request_queue limits are added, as follows:
- atomic_write_hw_max is set by the block driver and is the maximum length
of an atomic write which the device may support. It is not
necessarily a power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max_sectors and
max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
and atomic_write_max_sectors would be the limit on a merged atomic write
request size. This value is not capped at max_sectors, as the value in
max_sectors can be controlled from userspace, and it would only cause
trouble if userspace could limit atomic_write_unit_max_bytes and the
other atomic write limits.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
min/max length of an atomic write unit which the device may support. They
both must be a power-of-2. Typically atomic_write_hw_unit_max will hold
the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
indicates an LBA space boundary; an atomic write which straddles that
boundary is no longer executed atomically by the disk. The value must be
a power-of-2. Note that it would be acceptable to enforce a rule that
atomic_write_hw_boundary_sectors is a multiple of
atomic_write_hw_unit_max, but the resultant code would be more
complicated.
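For illustration, a driver advertising 64 KiB atomic writes with a 4 KiB
minimum unit might fill its queue_limits roughly as below (values invented;
the real driver-side plumbing arrives in the later SCSI and NVMe patches of
this series):

/* sketch only: illustrative values, not from any real device */
lim->atomic_write_hw_unit_min = SZ_4K;	/* one 4 KiB logical block */
lim->atomic_write_hw_unit_max = SZ_64K;
lim->atomic_write_hw_max = SZ_64K;	/* typically equals unit_max */
lim->atomic_write_hw_boundary = 0;	/* no LBA space boundary */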
All atomic write limits are by default set to 0 to indicate no atomic write
support. Even though it is assumed by Linux that a logical block can always
be atomically written, we ignore this as it is not of particular interest.
Stacked devices are not supported either for now.
An atomic write must always be submitted to the block driver as part of a
single request. As such, only a single BIO must be submitted to the block
layer for an atomic write. When a single atomic write BIO is submitted, it
cannot be split. As such, atomic_write_unit_{max, min}_bytes are limited
by the maximum guaranteed BIO size which will not be required to be split.
This max size is calculated from the request_queue max segments and the
number of bvecs a BIO can fit, BIO_MAX_VECS. Currently we rely on userspace
issuing a write with iovcnt=1 for pwritev2() - as such, we can rely on each
segment containing PAGE_SIZE of data, apart from the first+last, which can
each fit a logical block size of data. The first+last will be LBS
length/aligned as we rely on direct IO alignment rules also.
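As a worked example of that guarantee (numbers illustrative): with
max_segments = 128, PAGE_SIZE = 4096 and a 512-byte logical block size, the
guaranteed BIO size is 2 * 512 + 126 * 4096 = 517120 bytes, so
atomic_write_unit_max would be capped at the largest power-of-2 below that,
256 KiB.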
New sysfs files are added to report the following atomic write limits:
- atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in
bytes
- atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in
bytes
- atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in
bytes
- atomic_write_max_bytes - same as atomic_write_max_sectors in bytes
Atomic writes may only be merged with other atomic writes and only under
the following conditions:
- total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary
Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. If a bdev is a partition, the
partition start must be aligned with both atomic_write_unit_min_sectors
and atomic_write_hw_boundary_sectors.
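A minimal sketch of the shape of that check (the complete helper is added
to include/linux/blkdev.h by this patch; the version below is indicative
only, assuming the unit/boundary limits are stored in bytes and that
unit_min is at least one sector):

static inline bool bdev_can_atomic_write(struct block_device *bdev)
{
	struct queue_limits *lim = &bdev_get_queue(bdev)->limits;

	if (!lim->atomic_write_unit_min)
		return false;	/* no atomic write support */

	if (bdev_is_partition(bdev)) {
		/* partition start must honour both alignments */
		unsigned int align = max(lim->atomic_write_unit_min,
					 lim->atomic_write_hw_boundary);

		if (!IS_ALIGNED(bdev->bd_start_sect,
				align >> SECTOR_SHIFT))
			return false;
	}

	return true;
}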
FSes will rely on the block layer to validate that an atomic write BIO
submitted will be of valid size, so add blk_validate_atomic_write_op_size()
for this purpose. Userspace expects an atomic write which is of invalid
size to be rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
BLK_STS_INVAL for when a BIO needs to be split, as this should mean an
invalid-size BIO.
Flag REQ_ATOMIC is used for indicating an atomic write.
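A sketch of how a submitter tags such a write (the bdev write path added
later in this series does effectively this for IOCB_ATOMIC kiocbs):

	if (iocb->ki_flags & IOCB_ATOMIC)	/* hypothetical caller */
		bio->bi_opf |= REQ_ATOMIC;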
Co-developed-by: Himanshu Madhani <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
Documentation/ABI/stable/sysfs-block | 53 ++++++++++++++++++++
block/blk-core.c | 19 +++++++
block/blk-merge.c | 50 +++++++++++++++++--
block/blk-settings.c | 75 ++++++++++++++++++++++++++++
block/blk-sysfs.c | 33 ++++++++++++
block/blk.h | 3 ++
include/linux/blk_types.h | 8 ++-
include/linux/blkdev.h | 55 ++++++++++++++++++++
8 files changed, 291 insertions(+), 5 deletions(-)
diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 831f19a32e08..cea8856f798d 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -21,6 +21,59 @@ Description:
device is offset from the internal allocation unit's
natural alignment.
+What: /sys/block/<disk>/atomic_write_max_bytes
+Date: February 2024
+Contact: Himanshu Madhani <[email protected]>
+Description:
+ [RO] This parameter specifies the maximum atomic write
+ size reported by the device. This parameter is relevant
+ for merging of writes, where a merged atomic write
+ operation must not exceed this number of bytes.
+ This parameter may be greater than the value in
+ atomic_write_unit_max_bytes as
+ atomic_write_unit_max_bytes will be rounded down to a
+ power-of-two and atomic_write_unit_max_bytes may also be
+ limited by some other queue limits, such as max_segments.
+ This parameter - along with atomic_write_unit_min_bytes
+ and atomic_write_unit_max_bytes - will not be larger than
+ max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What: /sys/block/<disk>/atomic_write_unit_min_bytes
+Date: February 2024
+Contact: Himanshu Madhani <[email protected]>
+Description:
+ [RO] This parameter specifies the smallest block which can
+ be written atomically with an atomic write operation. All
+ atomic write operations must begin at an
+ atomic_write_unit_min boundary and must be multiples of
+ atomic_write_unit_min. This value must be a power-of-two.
+
+
+What: /sys/block/<disk>/atomic_write_unit_max_bytes
+Date: February 2024
+Contact: Himanshu Madhani <[email protected]>
+Description:
+ [RO] This parameter defines the largest block which can be
+ written atomically with an atomic write operation. This
+ value must be a multiple of atomic_write_unit_min and must
+ be a power-of-two. This value will not be larger than
+ atomic_write_max_bytes.
+
+
+What: /sys/block/<disk>/atomic_write_boundary_bytes
+Date: February 2024
+Contact: Himanshu Madhani <[email protected]>
+Description:
+ [RO] A device may need to internally split an atomic write I/O
+ which straddles a given logical block address boundary. This
+ parameter specifies the size in bytes of the atomic boundary if
+ one is reported by the device. This value must be a
+ power-of-two and at least as large as
+ atomic_write_unit_max_bytes.
+ Any attempt to merge atomic write I/Os must not result in a
+ merged I/O which crosses this boundary (if any).
+
What: /sys/block/<disk>/diskseq
Date: February 2021
diff --git a/block/blk-core.c b/block/blk-core.c
index 82c3ae22d76d..d9f58fe71758 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -174,6 +174,8 @@ static const struct {
/* Command duration limit device-side timeout */
[BLK_STS_DURATION_LIMIT] = { -ETIME, "duration limit exceeded" },
+ [BLK_STS_INVAL] = { -EINVAL, "invalid" },
+
/* everything else not covered above: */
[BLK_STS_IOERR] = { -EIO, "I/O" },
};
@@ -739,6 +741,18 @@ void submit_bio_noacct_nocheck(struct bio *bio)
__submit_bio_noacct(bio);
}
+static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
+ struct bio *bio)
+{
+ if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q))
+ return BLK_STS_INVAL;
+
+ if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q))
+ return BLK_STS_INVAL;
+
+ return BLK_STS_OK;
+}
+
/**
* submit_bio_noacct - re-submit a bio to the block device layer for I/O
* @bio: The bio describing the location in memory and on the device.
@@ -797,6 +811,11 @@ void submit_bio_noacct(struct bio *bio)
switch (bio_op(bio)) {
case REQ_OP_READ:
case REQ_OP_WRITE:
+ if (bio->bi_opf & REQ_ATOMIC) {
+ status = blk_validate_atomic_write_op_size(q, bio);
+ if (status != BLK_STS_OK)
+ goto end_io;
+ }
break;
case REQ_OP_FLUSH:
/*
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 68969e27c831..b158d31940d1 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -154,8 +154,16 @@ static struct bio *bio_split_write_zeroes(struct bio *bio,
return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
}
-static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim)
+static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+ bool is_atomic)
{
+ /*
+ * If chunk_sectors and atomic_write_boundary_sectors are both set,
+ * then they must be equal.
+ */
+ if (is_atomic)
+ return lim->atomic_write_boundary_sectors;
+
return lim->chunk_sectors;
}
@@ -172,8 +180,18 @@ static inline unsigned get_max_io_size(struct bio *bio,
{
unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
- unsigned boundary_sectors = blk_boundary_sectors(lim);
- unsigned max_sectors = lim->max_sectors, start, end;
+ bool is_atomic = bio->bi_opf & REQ_ATOMIC;
+ unsigned boundary_sectors = blk_boundary_sectors(lim, is_atomic);
+ unsigned max_sectors, start, end;
+
+ /*
+ * We ignore lim->max_sectors for atomic writes because it may be less
+ * than the actual bio size, which we cannot tolerate.
+ */
+ if (is_atomic)
+ max_sectors = lim->atomic_write_max_sectors;
+ else
+ max_sectors = lim->max_sectors;
if (boundary_sectors) {
max_sectors = min(max_sectors,
@@ -311,6 +329,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
*segs = nsegs;
return NULL;
split:
+ if (bio->bi_opf & REQ_ATOMIC) {
+ bio->bi_status = BLK_STS_INVAL;
+ bio_endio(bio);
+ return ERR_PTR(-EINVAL);
+ }
/*
* We can't sanely support splitting for a REQ_NOWAIT bio. End it
* with EAGAIN if splitting is required and return an error pointer.
@@ -596,11 +619,12 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
struct request_queue *q = rq->q;
struct queue_limits *lim = &q->limits;
unsigned int max_sectors, boundary_sectors;
+ bool is_atomic = rq->cmd_flags & REQ_ATOMIC;
if (blk_rq_is_passthrough(rq))
return q->limits.max_hw_sectors;
- boundary_sectors = blk_boundary_sectors(lim);
+ boundary_sectors = blk_boundary_sectors(lim, is_atomic);
max_sectors = blk_queue_get_max_sectors(rq);
if (!boundary_sectors ||
@@ -806,6 +830,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
return ELEVATOR_NO_MERGE;
}
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+ struct bio *bio)
+{
+ return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+ struct request *next)
+{
+ return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
/*
* For non-mq, this has to be called with the request spinlock acquired.
* For mq with scheduling, the appropriate queue wide lock should be held.
@@ -829,6 +865,9 @@ static struct request *attempt_merge(struct request_queue *q,
if (req->ioprio != next->ioprio)
return NULL;
+ if (!blk_atomic_write_mergeable_rqs(req, next))
+ return NULL;
+
/*
* If we are allowed to merge, then append bio list
* from next to rq and release next. merge_requests_fn
@@ -960,6 +999,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
if (rq->ioprio != bio_prio(bio))
return false;
+ if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+ return false;
+
return true;
}
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 996f247fc98e..140e13616462 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -97,6 +97,79 @@ static int blk_validate_zoned_limits(struct queue_limits *lim)
return 0;
}
+/*
+ * Returns max guaranteed bytes which we can fit in a bio.
+ *
+ * We request that an atomic write is submitted as an ITER_UBUF iov_iter (so a
+ * single vector), so we assume that at least PAGE_SIZE fits in a segment,
+ * apart from the first and last segments.
+ */
+static
+unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *lim)
+{
+ unsigned int max_segments = min(BIO_MAX_VECS, lim->max_segments);
+ unsigned int length;
+
+ length = min(max_segments, 2) * lim->logical_block_size;
+ if (max_segments > 2)
+ length += (max_segments - 2) * PAGE_SIZE;
+
+ return length;
+}
+
+static void blk_atomic_writes_update_limits(struct queue_limits *lim)
+{
+ unsigned int unit_limit = min(lim->max_hw_sectors << SECTOR_SHIFT,
+ blk_queue_max_guaranteed_bio(lim));
+
+ unit_limit = rounddown_pow_of_two(unit_limit);
+
+ lim->atomic_write_max_sectors =
+ min(lim->atomic_write_hw_max >> SECTOR_SHIFT,
+ lim->max_hw_sectors);
+ lim->atomic_write_unit_min =
+ min(lim->atomic_write_hw_unit_min, unit_limit);
+ lim->atomic_write_unit_max =
+ min(lim->atomic_write_hw_unit_max, unit_limit);
+ lim->atomic_write_boundary_sectors =
+ lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+}
+
+static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+{
+ unsigned int boundary_sectors_hw;
+
+ if (!lim->atomic_write_hw_max)
+ goto unsupported;
+
+ boundary_sectors_hw = lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+
+ if (boundary_sectors_hw) {
+ /* It doesn't make sense to allow different non-zero values */
+ if (lim->chunk_sectors &&
+ lim->chunk_sectors != boundary_sectors_hw)
+ goto unsupported;
+
+ /*
+ * The boundary size just needs to be a multiple of unit_max
+ * (and not necessarily a power-of-2), so the following check
+ * could be relaxed in future.
+ * Furthermore, if needed, unit_max could be reduced so that
+ * it is compliant with a !power-of-2 boundary.
+ */
+ if (!is_power_of_2(lim->atomic_write_hw_boundary))
+ goto unsupported;
+ }
+
+ blk_atomic_writes_update_limits(lim);
+ return;
+
+unsupported:
+ lim->atomic_write_max_sectors = 0;
+ lim->atomic_write_boundary_sectors = 0;
+ lim->atomic_write_unit_min = 0;
+ lim->atomic_write_unit_max = 0;
+}
+
/*
* Check that the limits in lim are valid, initialize defaults for unset
* values, and cap values based on others where needed.
@@ -230,6 +303,8 @@ static int blk_validate_limits(struct queue_limits *lim)
lim->misaligned = 0;
}
+ blk_validate_atomic_write_limits(lim);
+
return blk_validate_zoned_limits(lim);
}
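To make the arithmetic in blk_queue_max_guaranteed_bio() and blk_atomic_writes_update_limits() concrete, a standalone sketch with assumed values: max_segments = 128 (below BIO_MAX_VECS), 512-byte logical blocks, 4 KiB pages, max_hw_sectors = 2048:
	#include <stdio.h>
	/* Simplified stand-in for the kernel helper of the same name */
	static unsigned int rounddown_pow_of_two(unsigned int n)
	{
		unsigned int p = 1;
		while (p * 2 <= n)	/* assumes n >= 1 */
			p *= 2;
		return p;
	}
	int main(void)
	{
		unsigned int max_segments = 128, lbs = 512, page_size = 4096;
		unsigned int max_hw_bytes = 2048 * 512;	/* max_hw_sectors = 2048 */
		unsigned int guaranteed, unit_limit;
		/* first and last segments guaranteed at logical block size only */
		guaranteed = 2 * lbs + (max_segments - 2) * page_size;
		unit_limit = guaranteed < max_hw_bytes ? guaranteed : max_hw_bytes;
		unit_limit = rounddown_pow_of_two(unit_limit);
		/* prints guaranteed=517120 unit_limit=262144, i.e. 256 KiB units */
		printf("guaranteed=%u unit_limit=%u\n", guaranteed, unit_limit);
		return 0;
	}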
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f0f9314ab65c..42fbbaa52ccf 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
return queue_var_show(queue_max_discard_segments(q), page);
}
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+ char *page)
+{
+ return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+ char *page)
+{
+ return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+ char *page)
+{
+ return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+ char *page)
+{
+ return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
{
return queue_var_show(q->limits.max_integrity_segments, page);
@@ -495,6 +519,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -618,6 +647,10 @@ static struct attribute *queue_attrs[] = {
&queue_discard_max_entry.attr,
&queue_discard_max_hw_entry.attr,
&queue_discard_zeroes_data_entry.attr,
+ &queue_atomic_write_max_bytes_entry.attr,
+ &queue_atomic_write_boundary_entry.attr,
+ &queue_atomic_write_unit_min_entry.attr,
+ &queue_atomic_write_unit_max_entry.attr,
&queue_write_same_max_entry.attr,
&queue_write_zeroes_max_entry.attr,
&queue_zone_append_max_entry.attr,
diff --git a/block/blk.h b/block/blk.h
index 75c1683fc320..b2fa42657f62 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -193,6 +193,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
if (unlikely(op == REQ_OP_WRITE_ZEROES))
return q->limits.max_write_zeroes_sectors;
+ if (rq->cmd_flags & REQ_ATOMIC)
+ return q->limits.atomic_write_max_sectors;
+
return q->limits.max_sectors;
}
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 781c4500491b..632edd71f8c6 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -162,6 +162,11 @@ typedef u16 blk_short_t;
*/
#define BLK_STS_DURATION_LIMIT ((__force blk_status_t)17)
+/*
+ * Invalid size or alignment.
+ */
+#define BLK_STS_INVAL ((__force blk_status_t)19)
+
/**
* blk_path_error - returns true if error may be path related
* @error: status the request was completed with
@@ -370,7 +375,7 @@ enum req_flag_bits {
__REQ_SWAP, /* swap I/O */
__REQ_DRV, /* for driver use */
__REQ_FS_PRIVATE, /* for file system (submitter) use */
-
+ __REQ_ATOMIC, /* for atomic write operations */
/*
* Command specific flags, keep last:
*/
@@ -402,6 +407,7 @@ enum req_flag_bits {
#define REQ_SWAP (__force blk_opf_t)(1ULL << __REQ_SWAP)
#define REQ_DRV (__force blk_opf_t)(1ULL << __REQ_DRV)
#define REQ_FS_PRIVATE (__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
+#define REQ_ATOMIC (__force blk_opf_t)(1ULL << __REQ_ATOMIC)
#define REQ_NOUNMAP (__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ddff90766f9f..930debeba3f0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,16 @@ struct queue_limits {
unsigned int discard_alignment;
unsigned int zone_write_granularity;
+ /* atomic write limits */
+ unsigned int atomic_write_hw_max;
+ unsigned int atomic_write_max_sectors;
+ unsigned int atomic_write_hw_boundary;
+ unsigned int atomic_write_boundary_sectors;
+ unsigned int atomic_write_hw_unit_min;
+ unsigned int atomic_write_unit_min;
+ unsigned int atomic_write_hw_unit_max;
+ unsigned int atomic_write_unit_max;
+
unsigned short max_segments;
unsigned short max_integrity_segments;
unsigned short max_discard_segments;
@@ -1355,6 +1365,30 @@ static inline int queue_dma_alignment(const struct request_queue *q)
return q ? q->limits.dma_alignment : 511;
}
+static inline unsigned int
+queue_atomic_write_unit_max_bytes(const struct request_queue *q)
+{
+ return q->limits.atomic_write_unit_max;
+}
+
+static inline unsigned int
+queue_atomic_write_unit_min_bytes(const struct request_queue *q)
+{
+ return q->limits.atomic_write_unit_min;
+}
+
+static inline unsigned int
+queue_atomic_write_boundary_bytes(const struct request_queue *q)
+{
+ return q->limits.atomic_write_boundary_sectors << SECTOR_SHIFT;
+}
+
+static inline unsigned int
+queue_atomic_write_max_bytes(const struct request_queue *q)
+{
+ return q->limits.atomic_write_max_sectors << SECTOR_SHIFT;
+}
+
static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
{
return queue_dma_alignment(bdev_get_queue(bdev));
@@ -1596,6 +1630,27 @@ struct io_comp_batch {
void (*complete)(struct io_comp_batch *);
};
+static inline bool bdev_can_atomic_write(struct block_device *bdev)
+{
+ struct request_queue *bd_queue = bdev->bd_queue;
+ struct queue_limits *limits = &bd_queue->limits;
+
+ if (!limits->atomic_write_unit_min)
+ return false;
+
+ if (bdev_is_partition(bdev)) {
+ sector_t bd_start_sect = bdev->bd_start_sect;
+ unsigned int alignment =
+ max(limits->atomic_write_unit_min,
+ limits->atomic_write_hw_boundary);
+
+ if (!IS_ALIGNED(bd_start_sect, alignment >> SECTOR_SHIFT))
+ return false;
+ }
+
+ return true;
+}
+
#define DEFINE_IO_COMP_BATCH(name) struct io_comp_batch name = { }
#endif /* _LINUX_BLKDEV_H */
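The partition gate in bdev_can_atomic_write() can be worked through with assumed numbers: a legacy partition starting at sector 34 fails the alignment check for a 4 KiB unit_min, while one starting at sector 2048 passes:
	#include <assert.h>
	#include <stdbool.h>
	/* Mirrors the IS_ALIGNED() test above; alignment given in bytes */
	static bool part_start_ok(unsigned long long start_sect,
				  unsigned int align_bytes)
	{
		return (start_sect % (align_bytes >> 9)) == 0; /* SECTOR_SHIFT */
	}
	int main(void)
	{
		assert(!part_start_ok(34, 4096));  /* 34 % 8 != 0: atomics refused */
		assert(part_start_ok(2048, 4096)); /* 2048 % 8 == 0: allowed */
		return 0;
	}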
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-10 10:43 ` [PATCH v8 05/10] block: Add core atomic write support John Garry
@ 2024-06-17 18:56 ` Keith Busch
2024-06-18 6:51 ` Christoph Hellwig
0 siblings, 1 reply; 27+ messages in thread
From: Keith Busch @ 2024-06-17 18:56 UTC (permalink / raw)
To: John Garry
Cc: axboe, hch, sagi, jejb, martin.petersen, viro, brauner, dchinner,
jack, djwong, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare, Himanshu Madhani
On Mon, Jun 10, 2024 at 10:43:24AM +0000, John Garry wrote:
> +static void blk_validate_atomic_write_limits(struct queue_limits *lim)
> +{
> + unsigned int boundary_sectors_hw;
> +
> + if (!lim->atomic_write_hw_max)
> + goto unsupported;
> +
> + boundary_sectors_hw = lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
> +
> + if (boundary_sectors_hw) {
> + /* It doesn't make sense to allow different non-zero values */
> + if (lim->chunk_sectors &&
> + lim->chunk_sectors != boundary_sectors_hw)
> + goto unsupported;
I'm not sure I follow why these two need to be the same. I can see
checking for 'chunk_sectors % boundary_sectors_hw == 0', but am I
missing something else?
The reason I ask, zone block devices redefine the "chunk_sectors" to
mean the zone size, and I'm pretty sure the typical zone size is much
larger than any common atomic write size.
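For concreteness, a quick sketch with assumed numbers - a 256 MiB zone (chunk_sectors = 524288) and a 16 KiB atomic boundary (32 sectors) - where the strict equality check refuses the pairing but the modulo check accepts it:
	#include <stdio.h>
	int main(void)
	{
		unsigned int chunk_sectors = 524288;	/* assumed 256 MiB zone */
		unsigned int boundary_sectors_hw = 32;	/* assumed 16 KiB boundary */
		/* current check: must be equal */
		printf("strict: %s\n",
		       chunk_sectors == boundary_sectors_hw ? "ok" : "unsupported");
		/* suggested relaxation: boundary divides the chunk */
		printf("relaxed: %s\n",
		       chunk_sectors % boundary_sectors_hw == 0 ? "ok" : "unsupported");
		return 0;
	}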
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-17 18:56 ` Keith Busch
@ 2024-06-18 6:51 ` Christoph Hellwig
2024-06-18 7:46 ` John Garry
0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-06-18 6:51 UTC (permalink / raw)
To: Keith Busch
Cc: John Garry, axboe, hch, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Himanshu Madhani
On Mon, Jun 17, 2024 at 12:56:01PM -0600, Keith Busch wrote:
> I'm not sure I follow why these two need to be the same. I can see
> checking for 'chunk_sectors % boundary_sectors_hw == 0', but am I
> missing something else?
>
> The reason I ask, zone block devices redefine the "chunk_sectors" to
> mean the zone size, and I'm pretty sure the typical zone size is much
> larger than any common atomic write size.
Yeah. Then again atomic writes in the traditional sense don't really
make sense for zoned devices anyway as the zoned devices never overwrite
and require all data up to the write pointer to be valid. In theory
they could be interpreted so that you don't get a partial write failure
if you stick to the atomic write boundaries, but that is mostly
pointless.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-18 6:51 ` Christoph Hellwig
@ 2024-06-18 7:46 ` John Garry
2024-06-18 17:25 ` Keith Busch
0 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-18 7:46 UTC (permalink / raw)
To: Christoph Hellwig, Keith Busch
Cc: axboe, sagi, jejb, martin.petersen, viro, brauner, dchinner, jack,
djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Himanshu Madhani
On 18/06/2024 07:51, Christoph Hellwig wrote:
> On Mon, Jun 17, 2024 at 12:56:01PM -0600, Keith Busch wrote:
>> I'm not sure I follow why these two need to be the same. I can see
>> checking for 'chunk_sectors % boundary_sectors_hw == 0', but am I
>> missing something else?
For simplicity, initially I was just asking for them to be the same.
If we relax to chunk_sectors % boundary_sectors_hw == 0, then for normal
writing we could use a larger chunk size (than atomic boundary_sectors_hw).
I just don't know if hardware exists which has a larger chunk_sectors
than the atomic boundary_sectors_hw, or whether it is worth trying to
support it.
>>
>> The reason I ask, zone block devices redefine the "chunk_sectors" to
>> mean the zone size, and I'm pretty sure the typical zone size is much
>> larger than any common atomic write size.
>
> Yeah. Then again atomic writes in the traditional sense don't really
> make sense for zoned devices anyway as the zoned devices never overwrite
> and require all data up to the write pointer to be valid. In theory
> they could be interpreted so that you don't get a partial write failure
> if you stick to the atomic write boundaries, but that is mostly
> pointless.
>
About NVMe, the spec says that NABSN and NOIOB may not be related to one
another (command set spec 1.0d 5.8.2.1), but I am wondering if people
really build HW which would have different NABSN/NABSPF and NOIOB. I
don't know.
Thanks,
John
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-18 7:46 ` John Garry
@ 2024-06-18 17:25 ` Keith Busch
2024-06-19 7:59 ` John Garry
0 siblings, 1 reply; 27+ messages in thread
From: Keith Busch @ 2024-06-18 17:25 UTC (permalink / raw)
To: John Garry
Cc: Christoph Hellwig, axboe, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Himanshu Madhani
On Tue, Jun 18, 2024 at 08:46:31AM +0100, John Garry wrote:
>
> About NVMe, the spec says that NABSN and NOIOB may not be related to one
> another (command set spec 1.0d 5.8.2.1), but I am wondering if people really
> build HW which would have different NABSN/NABSPF and NOIOB. I don't know.
The history of NOIOB is from an nvme drive that had two back-end
controllers with their own isolated storage, and then striped together
on the front end for the host to see. A command crossing the stripe
boundary takes a slow path to split it for each backend controller's
portion and merge the results. Subsequent implementations may have
different reasons for advertising this boundary, but that was the
original.
Anyway, there was an idea that the stripe size could be user
configurable, though that never shipped as far as I know. If it had,
then the optimal NOIOB could be made larger, but the atomic write size
doesn't change.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-18 17:25 ` Keith Busch
@ 2024-06-19 7:59 ` John Garry
2024-06-19 8:02 ` Christoph Hellwig
0 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-19 7:59 UTC (permalink / raw)
To: Keith Busch
Cc: Christoph Hellwig, axboe, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Himanshu Madhani
On 18/06/2024 18:25, Keith Busch wrote:
> On Tue, Jun 18, 2024 at 08:46:31AM +0100, John Garry wrote:
>> About NVMe, the spec says that NABSN and NOIOB may not be related to one
>> another (command set spec 1.0d 5.8.2.1), but I am wondering if people really
>> build HW which would have different NABSN/NABSPF and NOIOB. I don't know.
> The history of NOIOB is from an nvme drive that had two back-end
> controllers with their own isolated storage, and then striped together
> on the front end for the host to see. A command crossing the stripe
> boundary takes a slow path to split it for each backend controller's
> portion and merge the results. Subsequent implementations may have
> different reasons for advertising this boundary, but that was the
> original.
In this case, I would expect NOIOB >= atomic write boundary.
Would it be sane to have a NOIOB < atomic write boundary in some other
config?
I can support these possibilities, but the code will just get more complex.
>
> Anyway, there was an idea that the stripe size could be user
> configurable, though that never shipped as far as I know. If it had,
> then the optimal NOIOB could be made larger, but the atomic write size
> doesn't change.
Thanks,
John
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-19 7:59 ` John Garry
@ 2024-06-19 8:02 ` Christoph Hellwig
2024-06-19 10:42 ` John Garry
2024-06-19 16:07 ` Martin K. Petersen
0 siblings, 2 replies; 27+ messages in thread
From: Christoph Hellwig @ 2024-06-19 8:02 UTC (permalink / raw)
To: John Garry
Cc: Keith Busch, Christoph Hellwig, axboe, sagi, jejb,
martin.petersen, viro, brauner, dchinner, jack, djwong,
linux-block, linux-kernel, linux-nvme, linux-fsdevel, tytso,
jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs, io-uring,
nilay, ritesh.list, willy, agk, snitzer, mpatocka, dm-devel, hare,
Himanshu Madhani
On Wed, Jun 19, 2024 at 08:59:33AM +0100, John Garry wrote:
> In this case, I would expect NOIOB >= atomic write boundary.
>
> Would it be sane to have a NOIOB < atomic write boundary in some other
> config?
>
> I can support these possibilities, but the code will just get more complex.
I'd be tempted to simply not support the case where NOIOB is not a
multiple of the atomic write boundary for now and disable atomic writes
with a big fat warning (and a good comment in the source code). If users
show up with a device that hits this and want to use atomic writes, we
can resolve it.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-19 8:02 ` Christoph Hellwig
@ 2024-06-19 10:42 ` John Garry
2024-06-19 16:07 ` Martin K. Petersen
1 sibling, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-19 10:42 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Keith Busch, axboe, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack, djwong, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare, Himanshu Madhani
On 19/06/2024 09:02, Christoph Hellwig wrote:
> On Wed, Jun 19, 2024 at 08:59:33AM +0100, John Garry wrote:
>> In this case, I would expect NOIOB >= atomic write boundary.
>>
>> Would it be sane to have a NOIOB < atomic write boundary in some other
>> config?
>>
>> I can support these possibilities, but the code will just get more complex.
> I'd be tempted to simply not support the case where NOIOB is not a
> multiple of the atomic write boundary for now and disable atomic writes
> with a big fat warning (and a good comment in the soure code). If users
> show up with a device that hits this and want to use atomic writes we
> can resolved it.
Fine by me.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 05/10] block: Add core atomic write support
2024-06-19 8:02 ` Christoph Hellwig
2024-06-19 10:42 ` John Garry
@ 2024-06-19 16:07 ` Martin K. Petersen
1 sibling, 0 replies; 27+ messages in thread
From: Martin K. Petersen @ 2024-06-19 16:07 UTC (permalink / raw)
To: Christoph Hellwig
Cc: John Garry, Keith Busch, axboe, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Himanshu Madhani
Christoph,
> I'd be tempted to simply not support the case where NOIOB is not a
> multiple of the atomic write boundary for now and disable atomic
> writes with a big fat warning (and a good comment in the source code).
> If users show up with a device that hits this and want to use atomic
> writes, we can resolve it.
I agree.
--
Martin K. Petersen Oracle Linux Engineering
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v8 06/10] block: Add atomic write support for statx
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (4 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 05/10] block: Add core atomic write support John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 07/10] block: Add fops atomic write support John Garry
` (4 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Prasad Singamsetty, John Garry
From: Prasad Singamsetty <[email protected]>
Extend the statx system call to return additional info for atomic write
support if the specified file is a block device.
Signed-off-by: Prasad Singamsetty <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
block/bdev.c | 36 ++++++++++++++++++++++++++----------
fs/stat.c | 16 +++++++++-------
include/linux/blkdev.h | 6 ++++--
3 files changed, 39 insertions(+), 19 deletions(-)
diff --git a/block/bdev.c b/block/bdev.c
index 353677ac49b3..3976a652fcc7 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1260,23 +1260,39 @@ void sync_bdevs(bool wait)
}
/*
- * Handle STATX_DIOALIGN for block devices.
- *
- * Note that the inode passed to this is the inode of a block device node file,
- * not the block device's internal inode. Therefore it is *not* valid to use
- * I_BDEV() here; the block device has to be looked up by i_rdev instead.
+ * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
*/
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+ u32 request_mask)
{
struct block_device *bdev;
- bdev = blkdev_get_no_open(inode->i_rdev);
+ if (!(request_mask & (STATX_DIOALIGN | STATX_WRITE_ATOMIC)))
+ return;
+
+ /*
+ * Note that backing_inode is the inode of a block device node file,
+ * not the block device's internal inode. Therefore it is *not* valid
+ * to use I_BDEV() here; the block device has to be looked up by i_rdev
+ * instead.
+ */
+ bdev = blkdev_get_no_open(backing_inode->i_rdev);
if (!bdev)
return;
- stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
- stat->dio_offset_align = bdev_logical_block_size(bdev);
- stat->result_mask |= STATX_DIOALIGN;
+ if (request_mask & STATX_DIOALIGN) {
+ stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
+ stat->dio_offset_align = bdev_logical_block_size(bdev);
+ stat->result_mask |= STATX_DIOALIGN;
+ }
+
+ if (request_mask & STATX_WRITE_ATOMIC && bdev_can_atomic_write(bdev)) {
+ struct request_queue *bd_queue = bdev->bd_queue;
+
+ generic_fill_statx_atomic_writes(stat,
+ queue_atomic_write_unit_min_bytes(bd_queue),
+ queue_atomic_write_unit_max_bytes(bd_queue));
+ }
blkdev_put_no_open(bdev);
}
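For the consumer side, a minimal statx sketch is below. STATX_WRITE_ATOMIC and the stx_atomic_write_unit_* fields come from the statx uapi patch earlier in this series, so it assumes headers new enough to carry them; the device path is illustrative:
	#define _GNU_SOURCE
	#include <stdio.h>
	#include <fcntl.h>
	#include <sys/stat.h>
	int main(int argc, char **argv)
	{
		const char *path = argc > 1 ? argv[1] : "/dev/sda";
		struct statx stx;
		if (statx(AT_FDCWD, path, 0, STATX_WRITE_ATOMIC, &stx)) {
			perror("statx");
			return 1;
		}
		if (stx.stx_mask & STATX_WRITE_ATOMIC)
			printf("atomic write unit min=%u max=%u\n",
			       stx.stx_atomic_write_unit_min,
			       stx.stx_atomic_write_unit_max);
		else
			printf("atomic writes not supported\n");
		return 0;
	}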
diff --git a/fs/stat.c b/fs/stat.c
index 72d0e6357b91..bd0698dfd7b3 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -265,6 +265,7 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
{
struct path path;
unsigned int lookup_flags = getname_statx_lookup_flags(flags);
+ struct inode *backing_inode;
int error;
if (flags & ~(AT_SYMLINK_NOFOLLOW | AT_NO_AUTOMOUNT | AT_EMPTY_PATH |
@@ -290,13 +291,14 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
stat->attributes |= STATX_ATTR_MOUNT_ROOT;
stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;
- /* Handle STATX_DIOALIGN for block devices. */
- if (request_mask & STATX_DIOALIGN) {
- struct inode *inode = d_backing_inode(path.dentry);
-
- if (S_ISBLK(inode->i_mode))
- bdev_statx_dioalign(inode, stat);
- }
+ /*
+ * If this is a block device inode, override the filesystem
+ * attributes with the block device specific parameters that need to be
+ * obtained from the bdev backing inode.
+ */
+ backing_inode = d_backing_inode(path.dentry);
+ if (S_ISBLK(backing_inode->i_mode))
+ bdev_statx(backing_inode, stat, request_mask);
path_put(&path);
if (retry_estale(error, lookup_flags)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 930debeba3f0..d861715c0f4e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1590,7 +1590,8 @@ int sync_blockdev(struct block_device *bdev);
int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
int sync_blockdev_nowait(struct block_device *bdev);
void sync_bdevs(bool wait);
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
+void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+ u32 request_mask);
void printk_all_partitions(void);
int __init early_lookup_bdev(const char *pathname, dev_t *dev);
#else
@@ -1608,7 +1609,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
static inline void sync_bdevs(bool wait)
{
}
-static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+static inline void bdev_statx(struct inode *backing_inode, struct kstat *stat,
+ u32 request_mask)
{
}
static inline void printk_all_partitions(void)
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 07/10] block: Add fops atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (5 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 06/10] block: Add atomic write support for statx John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 08/10] scsi: sd: Atomic " John Garry
` (3 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry
Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.
It must be ensured that the atomic write adheres to its rules, such as a
naturally aligned offset, so call blkdev_dio_invalid() ->
blkdev_atomic_write_valid() [renaming blkdev_dio_unaligned() to
blkdev_dio_invalid()] for this purpose. The BIO submission path already
checks for atomic writes which are too large, so there is no need to
check that here.
In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
produce a single BIO, so return an error in this case.
Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
and the file is opened with O_DIRECT.
Signed-off-by: John Garry <[email protected]>
---
block/fops.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/block/fops.c b/block/fops.c
index 376265935714..be36c9fbd500 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -34,9 +34,12 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
return opf;
}
-static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
- struct iov_iter *iter)
+static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+ struct iov_iter *iter, bool is_atomic)
{
+ if (is_atomic && !generic_atomic_write_valid(iter, pos))
+ return true;
+
return pos & (bdev_logical_block_size(bdev) - 1) ||
!bdev_iter_is_aligned(bdev, iter);
}
@@ -72,6 +75,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
bio.bi_write_hint = file_inode(iocb->ki_filp)->i_write_hint;
bio.bi_ioprio = iocb->ki_ioprio;
+ if (iocb->ki_flags & IOCB_ATOMIC)
+ bio.bi_opf |= REQ_ATOMIC;
ret = bio_iov_iter_get_pages(&bio, iter);
if (unlikely(ret))
@@ -343,6 +348,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
task_io_account_write(bio->bi_iter.bi_size);
}
+ if (iocb->ki_flags & IOCB_ATOMIC)
+ bio->bi_opf |= REQ_ATOMIC;
+
if (iocb->ki_flags & IOCB_NOWAIT)
bio->bi_opf |= REQ_NOWAIT;
@@ -359,12 +367,13 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
{
struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+ bool is_atomic = iocb->ki_flags & IOCB_ATOMIC;
unsigned int nr_pages;
if (!iov_iter_count(iter))
return 0;
- if (blkdev_dio_unaligned(bdev, iocb->ki_pos, iter))
+ if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic))
return -EINVAL;
nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
@@ -373,6 +382,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
return __blkdev_direct_IO_simple(iocb, iter, bdev,
nr_pages);
return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
+ } else if (is_atomic) {
+ return -EINVAL;
}
return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
}
@@ -612,6 +623,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
if (!bdev)
return -ENXIO;
+ if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT)
+ filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
if (ret)
blkdev_put_no_open(bdev);
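Putting the pieces together, a hedged userspace sketch of one atomic write through this path: O_DIRECT open plus pwritev2() with RWF_ATOMIC (added earlier in this series), a single iovec, and an assumed 4 KiB unit within the device's advertised min/max. Point it at a scratch device only, as it overwrites sector 0:
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/uio.h>
	#include <unistd.h>
	#ifndef RWF_ATOMIC
	#define RWF_ATOMIC 0x00000040	/* uapi value added in this series */
	#endif
	int main(int argc, char **argv)
	{
		const char *dev = argc > 1 ? argv[1] : "/dev/sdX"; /* scratch! */
		struct iovec iov;
		void *buf;
		int fd;
		fd = open(dev, O_WRONLY | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* O_DIRECT wants an aligned buffer; one assumed 4 KiB unit */
		if (posix_memalign(&buf, 4096, 4096))
			return 1;
		memset(buf, 0xab, 4096);
		iov.iov_base = buf;
		iov.iov_len = 4096;
		/* single iovec, naturally aligned offset: all-or-nothing */
		if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) < 0)
			perror("pwritev2");
		free(buf);
		close(fd);
		return 0;
	}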
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 08/10] scsi: sd: Atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (6 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 07/10] block: Add fops atomic write support John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 09/10] scsi: scsi_debug: " John Garry
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry
Support is divided into two main areas:
- reading VPD pages and setting sdev request_queue limits
- support WRITE ATOMIC (16) command and tracing
The relevant block limits VPD page need to be read to allow the block layer
request_queue atomic write limits to be set. These VPD page limits are
described in sbc4r22 section 6.6.4 - Block limits VPD page.
There are five limits of interest:
- MAXIMUM ATOMIC TRANSFER LENGTH
- ATOMIC ALIGNMENT
- ATOMIC TRANSFER LENGTH GRANULARITY
- MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY
- MAXIMUM ATOMIC BOUNDARY SIZE
MAXIMUM ATOMIC TRANSFER LENGTH is the maximum length for a WRITE ATOMIC
(16) command. It will not be greater than the device MAXIMUM TRANSFER
LENGTH.
ATOMIC ALIGNMENT and ATOMIC TRANSFER LENGTH GRANULARITY are the minimum
alignment and length values for an atomic write in terms of logical blocks.
Unlike NVMe, SCSI does not specify an LBA space boundary, but does specify
a per-IO boundary granularity. The maximum boundary size is specified in
MAXIMUM ATOMIC BOUNDARY SIZE. When used, this boundary value is set in the
WRITE ATOMIC (16) ATOMIC BOUNDARY field - layout for the WRITE_ATOMIC_16
command can be found in sbc4r22 section 5.48. This boundary value is the
granularity size at which the device may atomically write the data. A value
of zero in WRITE ATOMIC (16) ATOMIC BOUNDARY field means that all data must
be atomically written together.
MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY is the maximum atomic write
length if a non-zero boundary value is set.
For atomic write support, the WRITE ATOMIC (16) boundary is not of much
interest, as the block layer expects each request submitted to be executed
atomically. However, the SCSI spec does leave itself open to a quirky
scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero, yet MAXIMUM ATOMIC
TRANSFER LENGTH WITH BOUNDARY and MAXIMUM ATOMIC BOUNDARY SIZE are both
non-zero. This case will be supported.
To set the block layer request_queue atomic write capabilities, sanitize
the VPD page limits and set limits as follows:
- atomic_write_unit_min is derived from the granularity and alignment
values. If no granularity value is set, use the physical block size
- atomic_write_unit_max is derived from MAXIMUM ATOMIC TRANSFER LENGTH. In
the scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero and boundary
limits are non-zero, use MAXIMUM ATOMIC BOUNDARY SIZE for
atomic_write_unit_max. New flag scsi_disk.use_atomic_write_boundary is
set for this scenario.
- atomic_write_boundary_bytes is set to zero always
SCSI also supports a WRITE ATOMIC (32) command, which is for type 2
protection enabled. This is not going to be supported now, so check for
T10_PI_TYPE2_PROTECTION when setting any request_queue limits.
To handle an atomic write request, add support for WRITE ATOMIC (16)
command in handler sd_setup_atomic_cmnd(). Flag use_atomic_write_boundary
is checked here for encoding ATOMIC BOUNDARY field.
Trace info is also added for WRITE_ATOMIC_16 command.
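To make the derivation above concrete, a standalone sketch of the two scenarios, with assumed VPD values, 512-byte logical blocks, and an assumed granularity of 8 blocks; it mirrors, but is not, the sd_config_atomic() logic:
	#include <stdio.h>
	static unsigned int rounddown_pow_of_two(unsigned int n)
	{
		unsigned int p = 1;
		while (p * 2 <= n)
			p *= 2;
		return p;
	}
	static void derive(unsigned int max_atomic,
			   unsigned int max_atomic_with_boundary,
			   unsigned int max_atomic_boundary,
			   unsigned int granularity)
	{
		unsigned int lbs = 512, unit_min, unit_max, max, boundary;
		unit_min = rounddown_pow_of_two(granularity);
		if (max_atomic) {
			max = max_atomic;
			unit_max = rounddown_pow_of_two(max_atomic);
			boundary = 0;
		} else {
			max = max_atomic_with_boundary;
			unit_max = rounddown_pow_of_two(max_atomic_boundary);
			boundary = 1;	/* encode ATOMIC BOUNDARY in the CDB */
		}
		printf("hw_max=%u unit_min=%u unit_max=%u use_boundary=%u\n",
		       max * lbs, unit_min * lbs, unit_max * lbs, boundary);
	}
	int main(void)
	{
		derive(128, 0, 0, 8);	/* plain case: 64K max, 4K..64K units */
		derive(0, 64, 32, 8);	/* boundary-only case: 32K max, 16K unit_max */
		return 0;
	}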
Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/scsi_trace.c | 22 +++++++++
drivers/scsi/sd.c | 93 ++++++++++++++++++++++++++++++++++++-
drivers/scsi/sd.h | 8 ++++
include/scsi/scsi_proto.h | 1 +
include/trace/events/scsi.h | 1 +
5 files changed, 124 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 41a950075913..3e47c4472a80 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -325,6 +325,26 @@ scsi_trace_zbc_out(struct trace_seq *p, unsigned char *cdb, int len)
return ret;
}
+static const char *
+scsi_trace_atomic_write16_out(struct trace_seq *p, unsigned char *cdb, int len)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ unsigned int boundary_size;
+ unsigned int nr_blocks;
+ sector_t lba;
+
+ lba = get_unaligned_be64(&cdb[2]);
+ boundary_size = get_unaligned_be16(&cdb[10]);
+ nr_blocks = get_unaligned_be16(&cdb[12]);
+
+ trace_seq_printf(p, "lba=%llu txlen=%u boundary_size=%u",
+ lba, nr_blocks, boundary_size);
+
+ trace_seq_putc(p, 0);
+
+ return ret;
+}
+
static const char *
scsi_trace_varlen(struct trace_seq *p, unsigned char *cdb, int len)
{
@@ -385,6 +405,8 @@ scsi_trace_parse_cdb(struct trace_seq *p, unsigned char *cdb, int len)
return scsi_trace_zbc_in(p, cdb, len);
case ZBC_OUT:
return scsi_trace_zbc_out(p, cdb, len);
+ case WRITE_ATOMIC_16:
+ return scsi_trace_atomic_write16_out(p, cdb, len);
default:
return scsi_trace_misc(p, cdb, len);
}
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index d957e29b17a9..a79da08d3cce 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -933,6 +933,64 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd)
return scsi_alloc_sgtables(cmd);
}
+static void sd_config_atomic(struct scsi_disk *sdkp, struct queue_limits *lim)
+{
+ unsigned int logical_block_size = sdkp->device->sector_size,
+ physical_block_size_sectors, max_atomic, unit_min, unit_max;
+
+ if ((!sdkp->max_atomic && !sdkp->max_atomic_with_boundary) ||
+ sdkp->protection_type == T10_PI_TYPE2_PROTECTION)
+ return;
+
+ physical_block_size_sectors = sdkp->physical_block_size /
+ sdkp->device->sector_size;
+
+ unit_min = rounddown_pow_of_two(sdkp->atomic_granularity ?
+ sdkp->atomic_granularity :
+ physical_block_size_sectors);
+
+ /*
+ * Only use atomic boundary when we have the odd scenario of
+ * sdkp->max_atomic == 0, which the spec does permit.
+ */
+ if (sdkp->max_atomic) {
+ max_atomic = sdkp->max_atomic;
+ unit_max = rounddown_pow_of_two(sdkp->max_atomic);
+ sdkp->use_atomic_write_boundary = 0;
+ } else {
+ max_atomic = sdkp->max_atomic_with_boundary;
+ unit_max = rounddown_pow_of_two(sdkp->max_atomic_boundary);
+ sdkp->use_atomic_write_boundary = 1;
+ }
+
+ /*
+ * Ensure compliance with granularity and alignment. For now, keep it
+ * simple and just don't support atomic writes for values mismatched
+ * with max_{boundary}atomic, physical block size, and
+ * atomic_granularity itself.
+ *
+ * We're really being distrustful by checking unit_max also...
+ */
+ if (sdkp->atomic_granularity > 1) {
+ if (unit_min > 1 && unit_min % sdkp->atomic_granularity)
+ return;
+ if (unit_max > 1 && unit_max % sdkp->atomic_granularity)
+ return;
+ }
+
+ if (sdkp->atomic_alignment > 1) {
+ if (unit_min > 1 && unit_min % sdkp->atomic_alignment)
+ return;
+ if (unit_max > 1 && unit_max % sdkp->atomic_alignment)
+ return;
+ }
+
+ lim->atomic_write_hw_max = max_atomic * logical_block_size;
+ lim->atomic_write_hw_boundary = 0;
+ lim->atomic_write_hw_unit_min = unit_min * logical_block_size;
+ lim->atomic_write_hw_unit_max = unit_max * logical_block_size;
+}
+
static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
bool unmap)
{
@@ -1231,6 +1289,26 @@ static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd)
return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1;
}
+static blk_status_t sd_setup_atomic_cmnd(struct scsi_cmnd *cmd,
+ sector_t lba, unsigned int nr_blocks,
+ bool boundary, unsigned char flags)
+{
+ cmd->cmd_len = 16;
+ cmd->cmnd[0] = WRITE_ATOMIC_16;
+ cmd->cmnd[1] = flags;
+ put_unaligned_be64(lba, &cmd->cmnd[2]);
+ if (boundary)
+ put_unaligned_be16(nr_blocks, &cmd->cmnd[10]);
+ else
+ put_unaligned_be16(0, &cmd->cmnd[10]);
+ put_unaligned_be16(nr_blocks, &cmd->cmnd[12]);
+ cmd->cmnd[14] = 0;
+ cmd->cmnd[15] = 0;
+
+ return BLK_STS_OK;
+}
+
static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
{
struct request *rq = scsi_cmd_to_rq(cmd);
@@ -1296,6 +1374,10 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
protect | fua, dld);
+ } else if (rq->cmd_flags & REQ_ATOMIC && write) {
+ ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+ sdkp->use_atomic_write_boundary,
+ protect | fua);
} else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) {
ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks,
protect | fua, dld);
@@ -3256,7 +3338,7 @@ static void sd_read_block_limits(struct scsi_disk *sdkp,
sdkp->max_ws_blocks = (u32)get_unaligned_be64(&vpd->data[36]);
if (!sdkp->lbpme)
- goto out;
+ goto config_atomic;
lba_count = get_unaligned_be32(&vpd->data[20]);
desc_count = get_unaligned_be32(&vpd->data[24]);
@@ -3271,6 +3353,15 @@ static void sd_read_block_limits(struct scsi_disk *sdkp,
get_unaligned_be32(&vpd->data[32]) & ~(1 << 31);
sd_config_discard(sdkp, lim, sd_discard_mode(sdkp));
+
+config_atomic:
+ sdkp->max_atomic = get_unaligned_be32(&vpd->data[44]);
+ sdkp->atomic_alignment = get_unaligned_be32(&vpd->data[48]);
+ sdkp->atomic_granularity = get_unaligned_be32(&vpd->data[52]);
+ sdkp->max_atomic_with_boundary = get_unaligned_be32(&vpd->data[56]);
+ sdkp->max_atomic_boundary = get_unaligned_be32(&vpd->data[60]);
+
+ sd_config_atomic(sdkp, lim);
}
out:
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index b4170b17bad4..c7ee1e9c2ba4 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -115,6 +115,13 @@ struct scsi_disk {
u32 max_unmap_blocks;
u32 unmap_granularity;
u32 unmap_alignment;
+
+ u32 max_atomic;
+ u32 atomic_alignment;
+ u32 atomic_granularity;
+ u32 max_atomic_with_boundary;
+ u32 max_atomic_boundary;
+
u32 index;
unsigned int physical_block_size;
unsigned int max_medium_access_timeouts;
@@ -148,6 +155,7 @@ struct scsi_disk {
unsigned security : 1;
unsigned ignore_medium_access_errors : 1;
unsigned rscs : 1; /* reduced stream control support */
+ unsigned use_atomic_write_boundary : 1;
};
#define to_scsi_disk(obj) container_of(obj, struct scsi_disk, disk_dev)
diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
index 843106e1109f..70e1262b2e20 100644
--- a/include/scsi/scsi_proto.h
+++ b/include/scsi/scsi_proto.h
@@ -120,6 +120,7 @@
#define WRITE_SAME_16 0x93
#define ZBC_OUT 0x94
#define ZBC_IN 0x95
+#define WRITE_ATOMIC_16 0x9c
#define SERVICE_ACTION_BIDIRECTIONAL 0x9d
#define SERVICE_ACTION_IN_16 0x9e
#define SERVICE_ACTION_OUT_16 0x9f
diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
index 8e2d9b1b0e77..05f1945ed204 100644
--- a/include/trace/events/scsi.h
+++ b/include/trace/events/scsi.h
@@ -102,6 +102,7 @@
scsi_opcode_name(WRITE_32), \
scsi_opcode_name(WRITE_SAME_32), \
scsi_opcode_name(ATA_16), \
+ scsi_opcode_name(WRITE_ATOMIC_16), \
scsi_opcode_name(ATA_12))
#define scsi_hostbyte_name(result) { result, #result }
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 09/10] scsi: scsi_debug: Atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (7 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 08/10] scsi: sd: Atomic " John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-10 10:43 ` [PATCH v8 10/10] nvme: " John Garry
2024-06-14 2:01 ` [PATCH v8 00/10] block atomic writes Martin K. Petersen
10 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, John Garry
Add initial support for atomic writes.
As is the standard method, feed device properties via module params,
those being:
- atomic_max_size_blks
- atomic_alignment_blks
- atomic_granularity_blks
- atomic_max_size_with_boundary_blks
- atomic_max_boundary_blks
These just match sbc4r22 section 6.6.4 - Block limits VPD page.
We just support WRITE ATOMIC (16).
The major change in the driver is how we lock the device for RW accesses.
Currently the driver uses a per-device lock for accessing device metadata
and "media" data (calls to do_device_access()) atomically for the duration
of the whole read/write command.
This does not suit verifying atomic writes, the reason being that
currently all reads/writes are atomic, so using atomic writes proves
nothing.
Change the device access model so that regular writes are only atomic on
a per-sector basis, while reads and atomic writes are fully atomic.
As mentioned, since accessing metadata and device media is atomic,
continue to have regular writes involving metadata - like discard or PI -
as atomic. We can improve this later.
Currently we only support a model where overlapping reads or writes wait
for the current access to complete before commencing an atomic write.
This is described in section 4.29.3.2 of the SBC. However, we simplify
things and wait for all accesses to complete (when issuing an atomic
write).
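As an illustration of that model, a userspace sketch with a pthread rwlock standing in for the driver's per-store rwlock_t - atomic writes take it as a writer, everything else as a reader:
	#include <pthread.h>
	#include <stdio.h>
	static pthread_rwlock_t macc_lck = PTHREAD_RWLOCK_INITIALIZER;
	static void *plain_io(void *arg)
	{
		pthread_rwlock_rdlock(&macc_lck);	/* many may hold this */
		puts("plain read/write in flight");
		pthread_rwlock_unlock(&macc_lck);
		return NULL;
	}
	static void *atomic_write(void *arg)
	{
		pthread_rwlock_wrlock(&macc_lck);	/* sole holder */
		puts("atomic write in flight");
		pthread_rwlock_unlock(&macc_lck);
		return NULL;
	}
	int main(void)
	{
		pthread_t t[3];
		int i;
		pthread_create(&t[0], NULL, plain_io, NULL);
		pthread_create(&t[1], NULL, plain_io, NULL);
		pthread_create(&t[2], NULL, atomic_write, NULL);
		for (i = 0; i < 3; i++)
			pthread_join(t[i], NULL);
		return 0;
	}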
Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/scsi_debug.c | 588 +++++++++++++++++++++++++++++---------
1 file changed, 454 insertions(+), 134 deletions(-)
diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 7f340a59fdc5..fcc9640fa18a 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -69,6 +69,8 @@ static const char *sdebug_version_date = "20210520";
/* Additional Sense Code (ASC) */
#define NO_ADDITIONAL_SENSE 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASC 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23
#define LOGICAL_UNIT_NOT_READY 0x4
#define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8
#define UNRECOVERED_READ_ERR 0x11
@@ -103,6 +105,7 @@ static const char *sdebug_version_date = "20210520";
#define READ_BOUNDARY_ASCQ 0x7
#define ATTEMPT_ACCESS_GAP 0x9
#define INSUFF_ZONE_ASCQ 0xe
+/* see drivers/scsi/sense_codes.h */
/* Additional Sense Code Qualifier (ASCQ) */
#define ACK_NAK_TO 0x3
@@ -152,6 +155,12 @@ static const char *sdebug_version_date = "20210520";
#define DEF_VIRTUAL_GB 0
#define DEF_VPD_USE_HOSTNO 1
#define DEF_WRITESAME_LENGTH 0xFFFF
+#define DEF_ATOMIC_WR 0
+#define DEF_ATOMIC_WR_MAX_LENGTH 8192
+#define DEF_ATOMIC_WR_ALIGN 2
+#define DEF_ATOMIC_WR_GRAN 2
+#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
+#define DEF_ATOMIC_WR_MAX_BNDRY 128
#define DEF_STRICT 0
#define DEF_STATISTICS false
#define DEF_SUBMIT_QUEUES 1
@@ -374,7 +383,9 @@ struct sdebug_host_info {
/* There is an xarray of pointers to this struct's objects, one per host */
struct sdeb_store_info {
- rwlock_t macc_lck; /* for atomic media access on this store */
+ rwlock_t macc_data_lck; /* for media data access on this store */
+ rwlock_t macc_meta_lck; /* for atomic media meta access on this store */
+ rwlock_t macc_sector_lck; /* per-sector media data access on this store */
u8 *storep; /* user data storage (ram) */
struct t10_pi_tuple *dif_storep; /* protection info */
void *map_storep; /* provisioning map */
@@ -398,12 +409,20 @@ struct sdebug_defer {
enum sdeb_defer_type defer_t;
};
+struct sdebug_device_access_info {
+ bool atomic_write;
+ u64 lba;
+ u32 num;
+ struct scsi_cmnd *self;
+};
+
struct sdebug_queued_cmd {
/* corresponding bit set in in_use_bm[] in owning struct sdebug_queue
* instance indicates this slot is in use.
*/
struct sdebug_defer sd_dp;
struct scsi_cmnd *scmd;
+ struct sdebug_device_access_info *i;
};
struct sdebug_scsi_cmd {
@@ -463,7 +482,8 @@ enum sdeb_opcode_index {
SDEB_I_PRE_FETCH = 29, /* 10, 16 */
SDEB_I_ZONE_OUT = 30, /* 0x94+SA; includes no data xfer */
SDEB_I_ZONE_IN = 31, /* 0x95+SA; all have data-in */
- SDEB_I_LAST_ELEM_P1 = 32, /* keep this last (previous + 1) */
+ SDEB_I_ATOMIC_WRITE_16 = 32,
+ SDEB_I_LAST_ELEM_P1 = 33, /* keep this last (previous + 1) */
};
@@ -497,7 +517,8 @@ static const unsigned char opcode_ind_arr[256] = {
0, 0, 0, SDEB_I_VERIFY,
SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, SDEB_I_WRITE_SAME,
SDEB_I_ZONE_OUT, SDEB_I_ZONE_IN, 0, 0,
- 0, 0, 0, 0, 0, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
+ 0, 0, 0, 0,
+ SDEB_I_ATOMIC_WRITE_16, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
/* 0xa0; 0xa0->0xbf: 12 byte cdbs */
SDEB_I_REPORT_LUNS, SDEB_I_ATA_PT, 0, SDEB_I_MAINT_IN,
SDEB_I_MAINT_OUT, 0, 0, 0,
@@ -547,6 +568,7 @@ static int resp_write_buffer(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_sync_cache(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_pre_fetch(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_report_zones(struct scsi_cmnd *, struct sdebug_dev_info *);
+static int resp_atomic_write(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_open_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_finish_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
@@ -788,6 +810,11 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
{16, 0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
+/* 31 */
+ {0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO,
+ resp_atomic_write, NULL, /* ATOMIC WRITE 16 */
+ {16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
/* sentinel */
{0xff, 0, 0, 0, NULL, NULL, /* terminating element */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
@@ -835,6 +862,13 @@ static unsigned int sdebug_unmap_granularity = DEF_UNMAP_GRANULARITY;
static unsigned int sdebug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS;
static unsigned int sdebug_unmap_max_desc = DEF_UNMAP_MAX_DESC;
static unsigned int sdebug_write_same_length = DEF_WRITESAME_LENGTH;
+static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;
+static unsigned int sdebug_atomic_wr_max_length = DEF_ATOMIC_WR_MAX_LENGTH;
+static unsigned int sdebug_atomic_wr_align = DEF_ATOMIC_WR_ALIGN;
+static unsigned int sdebug_atomic_wr_gran = DEF_ATOMIC_WR_GRAN;
+static unsigned int sdebug_atomic_wr_max_length_bndry =
+ DEF_ATOMIC_WR_MAX_LENGTH_BNDRY;
+static unsigned int sdebug_atomic_wr_max_bndry = DEF_ATOMIC_WR_MAX_BNDRY;
static int sdebug_uuid_ctl = DEF_UUID_CTL;
static bool sdebug_random = DEF_RANDOM;
static bool sdebug_per_host_store = DEF_PER_HOST_STORE;
@@ -1188,6 +1222,11 @@ static inline bool scsi_debug_lbp(void)
(sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);
}
+static inline bool scsi_debug_atomic_write(void)
+{
+ return sdebug_fake_rw == 0 && sdebug_atomic_wr;
+}
+
static void *lba2fake_store(struct sdeb_store_info *sip,
unsigned long long lba)
{
@@ -1815,6 +1854,14 @@ static int inquiry_vpd_b0(unsigned char *arr)
/* Maximum WRITE SAME Length */
put_unaligned_be64(sdebug_write_same_length, &arr[32]);
+ if (sdebug_atomic_wr) {
+ put_unaligned_be32(sdebug_atomic_wr_max_length, &arr[40]);
+ put_unaligned_be32(sdebug_atomic_wr_align, &arr[44]);
+ put_unaligned_be32(sdebug_atomic_wr_gran, &arr[48]);
+ put_unaligned_be32(sdebug_atomic_wr_max_length_bndry, &arr[52]);
+ put_unaligned_be32(sdebug_atomic_wr_max_bndry, &arr[56]);
+ }
+
return 0x3c; /* Mandatory page length for Logical Block Provisioning */
}
@@ -3377,16 +3424,238 @@ static inline struct sdeb_store_info *devip2sip(struct sdebug_dev_info *devip,
return xa_load(per_store_ap, devip->sdbg_host->si_idx);
}
+static inline void
+sdeb_read_lock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __acquire(lock);
+ else
+ read_lock(lock);
+}
+
+static inline void
+sdeb_read_unlock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __release(lock);
+ else
+ read_unlock(lock);
+}
+
+static inline void
+sdeb_write_lock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __acquire(lock);
+ else
+ write_lock(lock);
+}
+
+static inline void
+sdeb_write_unlock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __release(lock);
+ else
+ write_unlock(lock);
+}
+
+static inline void
+sdeb_data_read_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_read_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_sector_read_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_read_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_unlock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_unlock(&sip->macc_sector_lck);
+}
+
+/*
+ * Atomic locking:
+ * We simplify the atomic model to allow only 1x atomic write and many non-
+ * atomic reads or writes for all LBAs.
+ *
+ * A RW lock has a similar behaviour:
+ * Only 1x writer and many readers.
+ *
+ * So use a RW lock for per-device read and write locking:
+ * An atomic access grabs the lock as a writer and a non-atomic access
+ * grabs the lock as a reader.
+ */
+
+static inline void
+sdeb_data_lock(struct sdeb_store_info *sip, bool atomic)
+{
+ if (atomic)
+ sdeb_data_write_lock(sip);
+ else
+ sdeb_data_read_lock(sip);
+}
+
+static inline void
+sdeb_data_unlock(struct sdeb_store_info *sip, bool atomic)
+{
+ if (atomic)
+ sdeb_data_write_unlock(sip);
+ else
+ sdeb_data_read_unlock(sip);
+}
+
+/* Allow many reads but only 1x write per sector */
+static inline void
+sdeb_data_sector_lock(struct sdeb_store_info *sip, bool do_write)
+{
+ if (do_write)
+ sdeb_data_sector_write_lock(sip);
+ else
+ sdeb_data_sector_read_lock(sip);
+}
+
+static inline void
+sdeb_data_sector_unlock(struct sdeb_store_info *sip, bool do_write)
+{
+ if (do_write)
+ sdeb_data_sector_write_unlock(sip);
+ else
+ sdeb_data_sector_read_unlock(sip);
+}
+
+static inline void
+sdeb_meta_read_lock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __acquire(&sip->macc_meta_lck);
+ else
+ __acquire(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ read_lock(&sip->macc_meta_lck);
+ else
+ read_lock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_read_unlock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __release(&sip->macc_meta_lck);
+ else
+ __release(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ read_unlock(&sip->macc_meta_lck);
+ else
+ read_unlock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_write_lock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __acquire(&sip->macc_meta_lck);
+ else
+ __acquire(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ write_lock(&sip->macc_meta_lck);
+ else
+ write_lock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_write_unlock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __release(&sip->macc_meta_lck);
+ else
+ __release(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ write_unlock(&sip->macc_meta_lck);
+ else
+ write_unlock(&sdeb_fake_rw_lck);
+ }
+}
+
/* Returns number of bytes copied or -1 if error. */
static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
- u32 sg_skip, u64 lba, u32 num, bool do_write,
- u8 group_number)
+ u32 sg_skip, u64 lba, u32 num, u8 group_number,
+ bool do_write, bool atomic)
{
int ret;
- u64 block, rest = 0;
+ u64 block;
enum dma_data_direction dir;
struct scsi_data_buffer *sdb = &scp->sdb;
u8 *fsp;
+ int i;
+
+ /*
+ * Even though reads are inherently atomic (in this driver), we expect
+ * the atomic flag only for writes.
+ */
+ if (!do_write && atomic)
+ return -1;
if (do_write) {
dir = DMA_TO_DEVICE;
@@ -3406,21 +3675,26 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
fsp = sip->storep;
block = do_div(lba, sdebug_store_sectors);
- if (block + num > sdebug_store_sectors)
- rest = block + num - sdebug_store_sectors;
- ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
+ /* Only allow 1x atomic write or multiple non-atomic writes at any given time */
+ sdeb_data_lock(sip, atomic);
+ for (i = 0; i < num; i++) {
+ /* We shouldn't need to lock for atomic writes, but do it anyway */
+ sdeb_data_sector_lock(sip, do_write);
+ ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
fsp + (block * sdebug_sector_size),
- (num - rest) * sdebug_sector_size, sg_skip, do_write);
- if (ret != (num - rest) * sdebug_sector_size)
- return ret;
-
- if (rest) {
- ret += sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
- fsp, rest * sdebug_sector_size,
- sg_skip + ((num - rest) * sdebug_sector_size),
- do_write);
+ sdebug_sector_size, sg_skip, do_write);
+ sdeb_data_sector_unlock(sip, do_write);
+ if (ret != sdebug_sector_size) {
+ ret += (i * sdebug_sector_size);
+ break;
+ }
+ sg_skip += sdebug_sector_size;
+ if (++block >= sdebug_store_sectors)
+ block = 0;
}
+	if (i == num)
+		ret = num * sdebug_sector_size;
+ sdeb_data_unlock(sip, atomic);
return ret;
}
@@ -3596,70 +3870,6 @@ static int prot_verify_read(struct scsi_cmnd *scp, sector_t start_sec,
return ret;
}
-static inline void
-sdeb_read_lock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __acquire(&sip->macc_lck);
- else
- __acquire(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- read_lock(&sip->macc_lck);
- else
- read_lock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_read_unlock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __release(&sip->macc_lck);
- else
- __release(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- read_unlock(&sip->macc_lck);
- else
- read_unlock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_write_lock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __acquire(&sip->macc_lck);
- else
- __acquire(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- write_lock(&sip->macc_lck);
- else
- write_lock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_write_unlock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __release(&sip->macc_lck);
- else
- __release(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- write_unlock(&sip->macc_lck);
- else
- write_unlock(&sdeb_fake_rw_lck);
- }
-}
-
static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
{
bool check_prot;
@@ -3669,6 +3879,7 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u64 lba;
struct sdeb_store_info *sip = devip2sip(devip, true);
u8 *cmd = scp->cmnd;
+ bool meta_data_locked = false;
switch (cmd[0]) {
case READ_16:
@@ -3727,6 +3938,10 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
atomic_set(&sdeb_inject_pending, 0);
}
+	/*
+	 * When checking device access params for reads, we only check the
+	 * request against values set at init time, so there is no need to
+	 * lock.
+	 */
ret = check_device_access_params(scp, lba, num, false);
if (ret)
return ret;
@@ -3746,29 +3961,33 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}
- sdeb_read_lock(sip);
+ if (sdebug_dev_is_zoned(devip) ||
+ (sdebug_dix && scsi_prot_sg_count(scp))) {
+ sdeb_meta_read_lock(sip);
+ meta_data_locked = true;
+ }
/* DIX + T10 DIF */
if (unlikely(sdebug_dix && scsi_prot_sg_count(scp))) {
switch (prot_verify_read(scp, lba, num, ei_lba)) {
case 1: /* Guard tag error */
if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
return check_condition_result;
} else if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
return illegal_condition_result;
}
break;
case 3: /* Reference tag error */
if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
return check_condition_result;
} else if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
return illegal_condition_result;
}
@@ -3776,8 +3995,9 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
}
}
- ret = do_device_access(sip, scp, 0, lba, num, false, 0);
- sdeb_read_unlock(sip);
+ ret = do_device_access(sip, scp, 0, lba, num, 0, false, false);
+ if (meta_data_locked)
+ sdeb_meta_read_unlock(sip);
if (unlikely(ret == -1))
return DID_ERROR << 16;
@@ -3967,6 +4187,7 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u64 lba;
struct sdeb_store_info *sip = devip2sip(devip, true);
u8 *cmd = scp->cmnd;
+ bool meta_data_locked = false;
switch (cmd[0]) {
case WRITE_16:
@@ -4025,10 +4246,17 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
"to DIF device\n");
}
- sdeb_write_lock(sip);
+ if (sdebug_dev_is_zoned(devip) ||
+ (sdebug_dix && scsi_prot_sg_count(scp)) ||
+ scsi_debug_lbp()) {
+ sdeb_meta_write_lock(sip);
+ meta_data_locked = true;
+ }
+
ret = check_device_access_params(scp, lba, num, true);
if (ret) {
- sdeb_write_unlock(sip);
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
return ret;
}
@@ -4037,22 +4265,22 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
switch (prot_verify_write(scp, lba, num, ei_lba)) {
case 1: /* Guard tag error */
if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
return illegal_condition_result;
} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
return check_condition_result;
}
break;
case 3: /* Reference tag error */
if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
return illegal_condition_result;
} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
return check_condition_result;
}
@@ -4060,13 +4288,16 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
}
}
- ret = do_device_access(sip, scp, 0, lba, num, true, group);
+ ret = do_device_access(sip, scp, 0, lba, num, group, true, false);
if (unlikely(scsi_debug_lbp()))
map_region(sip, lba, num);
+
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
- sdeb_write_unlock(sip);
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
+
if (unlikely(-1 == ret))
return DID_ERROR << 16;
else if (unlikely(sdebug_verbose &&
@@ -4176,7 +4407,8 @@ static int resp_write_scat(struct scsi_cmnd *scp,
goto err_out;
}
- sdeb_write_lock(sip);
+ /* Just keep it simple and always lock for now */
+ sdeb_meta_write_lock(sip);
sg_off = lbdof_blen;
/* Spec says Buffer xfer Length field in number of LBs in dout */
cum_lb = 0;
@@ -4219,7 +4451,11 @@ static int resp_write_scat(struct scsi_cmnd *scp,
}
}
- ret = do_device_access(sip, scp, sg_off, lba, num, true, group);
+		/*
+		 * Write ranges atomically to keep the behaviour as close as
+		 * possible to what it was before atomic write support.
+		 */
+ ret = do_device_access(sip, scp, sg_off, lba, num, group, true, true);
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
@@ -4258,7 +4494,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
}
ret = 0;
err_out_unlock:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
err_out:
kfree(lrdp);
return ret;
@@ -4277,14 +4513,16 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
scp->device->hostdata, true);
u8 *fs1p;
u8 *fsp;
+ bool meta_data_locked = false;
- sdeb_write_lock(sip);
+ if (sdebug_dev_is_zoned(devip) || scsi_debug_lbp()) {
+ sdeb_meta_write_lock(sip);
+ meta_data_locked = true;
+ }
ret = check_device_access_params(scp, lba, num, true);
- if (ret) {
- sdeb_write_unlock(sip);
- return ret;
- }
+ if (ret)
+ goto out;
if (unmap && scsi_debug_lbp()) {
unmap_region(sip, lba, num);
@@ -4295,6 +4533,7 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
/* if ndob then zero 1 logical block, else fetch 1 logical block */
fsp = sip->storep;
fs1p = fsp + (block * lb_size);
+ sdeb_data_write_lock(sip);
if (ndob) {
memset(fs1p, 0, lb_size);
ret = 0;
@@ -4302,8 +4541,8 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
ret = fetch_to_dev_buffer(scp, fs1p, lb_size);
if (-1 == ret) {
- sdeb_write_unlock(sip);
- return DID_ERROR << 16;
+			sdeb_data_write_unlock(sip);
+			ret = DID_ERROR << 16;
+			goto out;
} else if (sdebug_verbose && !ndob && (ret < lb_size))
sdev_printk(KERN_INFO, scp->device,
"%s: %s: lb size=%u, IO sent=%d bytes\n",
@@ -4320,10 +4559,12 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
+ sdeb_data_write_unlock(sip);
+ ret = 0;
out:
- sdeb_write_unlock(sip);
-
- return 0;
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
+ return ret;
}
static int resp_write_same_10(struct scsi_cmnd *scp,
@@ -4466,25 +4707,30 @@ static int resp_comp_write(struct scsi_cmnd *scp,
return check_condition_result;
}
- sdeb_write_lock(sip);
-
ret = do_dout_fetch(scp, dnum, arr);
if (ret == -1) {
retval = DID_ERROR << 16;
- goto cleanup;
+ goto cleanup_free;
} else if (sdebug_verbose && (ret < (dnum * lb_size)))
sdev_printk(KERN_INFO, scp->device, "%s: compare_write: cdb "
"indicated=%u, IO sent=%d bytes\n", my_name,
dnum * lb_size, ret);
+
+ sdeb_data_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (!comp_write_worker(sip, lba, num, arr, false)) {
mk_sense_buffer(scp, MISCOMPARE, MISCOMPARE_VERIFY_ASC, 0);
retval = check_condition_result;
- goto cleanup;
+ goto cleanup_unlock;
}
+
+	/* Cover sip->map_storep (which map_region() sets) with the data lock */
if (scsi_debug_lbp())
map_region(sip, lba, num);
-cleanup:
- sdeb_write_unlock(sip);
+cleanup_unlock:
+ sdeb_meta_write_unlock(sip);
+ sdeb_data_write_unlock(sip);
+cleanup_free:
kfree(arr);
return retval;
}
@@ -4528,7 +4774,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
desc = (void *)&buf[8];
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);
for (i = 0 ; i < descriptors ; i++) {
unsigned long long lba = get_unaligned_be64(&desc[i].lba);
@@ -4544,7 +4790,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
ret = 0;
out:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
kfree(buf);
return ret;
@@ -4702,12 +4948,13 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
rest = block + nblks - sdebug_store_sectors;
/* Try to bring the PRE-FETCH range into CPU's cache */
- sdeb_read_lock(sip);
+ sdeb_data_read_lock(sip);
prefetch_range(fsp + (sdebug_sector_size * block),
(nblks - rest) * sdebug_sector_size);
if (rest)
prefetch_range(fsp, rest * sdebug_sector_size);
- sdeb_read_unlock(sip);
+
+ sdeb_data_read_unlock(sip);
fini:
if (cmd[1] & 0x2)
res = SDEG_RES_IMMED_MASK;
@@ -4866,7 +5113,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}
/* Not changing store, so only need read access */
- sdeb_read_lock(sip);
+ sdeb_data_read_lock(sip);
ret = do_dout_fetch(scp, a_num, arr);
if (ret == -1) {
@@ -4888,7 +5135,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
goto cleanup;
}
cleanup:
- sdeb_read_unlock(sip);
+ sdeb_data_read_unlock(sip);
kfree(arr);
return ret;
}
@@ -4934,7 +5181,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
return check_condition_result;
}
- sdeb_read_lock(sip);
+ sdeb_meta_read_lock(sip);
desc = arr + 64;
for (lba = zs_lba; lba < sdebug_capacity;
@@ -5032,11 +5279,70 @@ static int resp_report_zones(struct scsi_cmnd *scp,
ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
fini:
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
kfree(arr);
return ret;
}
+static int resp_atomic_write(struct scsi_cmnd *scp,
+ struct sdebug_dev_info *devip)
+{
+ struct sdeb_store_info *sip;
+ u8 *cmd = scp->cmnd;
+ u16 boundary, len;
+ u64 lba, lba_tmp;
+ int ret;
+
+ if (!scsi_debug_atomic_write()) {
+ mk_sense_invalid_opcode(scp);
+ return check_condition_result;
+ }
+
+ sip = devip2sip(devip, true);
+
+ lba = get_unaligned_be64(cmd + 2);
+ boundary = get_unaligned_be16(cmd + 10);
+ len = get_unaligned_be16(cmd + 12);
+
+ lba_tmp = lba;
+ if (sdebug_atomic_wr_align &&
+ do_div(lba_tmp, sdebug_atomic_wr_align)) {
+ /* Does not meet alignment requirement */
+ mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+
+ if (sdebug_atomic_wr_gran && len % sdebug_atomic_wr_gran) {
+		/* Does not meet granularity requirement */
+ mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+
+ if (boundary > 0) {
+ if (boundary > sdebug_atomic_wr_max_bndry) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+
+ if (len > sdebug_atomic_wr_max_length_bndry) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+ } else {
+ if (len > sdebug_atomic_wr_max_length) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+ }
+
+ ret = do_device_access(sip, scp, 0, lba, len, 0, true, true);
+ if (unlikely(ret == -1))
+ return DID_ERROR << 16;
+ if (unlikely(ret != len * sdebug_sector_size))
+ return DID_ERROR << 16;
+ return 0;
+}
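+
+/*
+ * For reference, the WRITE ATOMIC(16) CDB fields parsed above are
+ * (offsets per this handler, shown for illustration only):
+ *	cmd[2..9]   starting LBA (big-endian)
+ *	cmd[10..11] atomic boundary
+ *	cmd[12..13] transfer length in logical blocks
+ */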
+
/* Logic transplanted from tcmu-runner, file_zbc.c */
static void zbc_open_all(struct sdebug_dev_info *devip)
{
@@ -5063,8 +5369,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
mk_sense_invalid_opcode(scp);
return check_condition_result;
}
-
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (all) {
/* Check if all closed zones can be open */
@@ -5113,7 +5418,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
zbc_open_zone(devip, zsp, true);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}
@@ -5140,7 +5445,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
return check_condition_result;
}
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (all) {
zbc_close_all(devip);
@@ -5169,7 +5474,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
zbc_close_zone(devip, zsp);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}
@@ -5212,7 +5517,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
return check_condition_result;
}
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (all) {
zbc_finish_all(devip);
@@ -5241,7 +5546,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
zbc_finish_zone(devip, zsp, true);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}
@@ -5292,7 +5597,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (all) {
zbc_rwp_all(devip);
@@ -5320,7 +5625,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
zbc_rwp_zone(devip, zsp);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}
@@ -6284,6 +6589,7 @@ module_param_named(lbprz, sdebug_lbprz, int, S_IRUGO);
module_param_named(lbpu, sdebug_lbpu, int, S_IRUGO);
module_param_named(lbpws, sdebug_lbpws, int, S_IRUGO);
module_param_named(lbpws10, sdebug_lbpws10, int, S_IRUGO);
+module_param_named(atomic_wr, sdebug_atomic_wr, int, S_IRUGO);
module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
@@ -6318,6 +6624,11 @@ module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO);
module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO);
module_param_named(unmap_max_blocks, sdebug_unmap_max_blocks, int, S_IRUGO);
module_param_named(unmap_max_desc, sdebug_unmap_max_desc, int, S_IRUGO);
+module_param_named(atomic_wr_max_length, sdebug_atomic_wr_max_length, int, S_IRUGO);
+module_param_named(atomic_wr_align, sdebug_atomic_wr_align, int, S_IRUGO);
+module_param_named(atomic_wr_gran, sdebug_atomic_wr_gran, int, S_IRUGO);
+module_param_named(atomic_wr_max_length_bndry, sdebug_atomic_wr_max_length_bndry, int, S_IRUGO);
+module_param_named(atomic_wr_max_bndry, sdebug_atomic_wr_max_bndry, int, S_IRUGO);
module_param_named(uuid_ctl, sdebug_uuid_ctl, int, S_IRUGO);
module_param_named(virtual_gb, sdebug_virtual_gb, int, S_IRUGO | S_IWUSR);
module_param_named(vpd_use_hostno, sdebug_vpd_use_hostno, int,
@@ -6361,6 +6672,7 @@ MODULE_PARM_DESC(lbprz,
MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)");
MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)");
MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)");
+MODULE_PARM_DESC(atomic_wr, "enable atomic writes, support WRITE ATOMIC(16) (def=0)");
MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
@@ -6392,6 +6704,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
+MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
+MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
+MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
MODULE_PARM_DESC(uuid_ctl,
"1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
@@ -7563,6 +7880,7 @@ static int __init scsi_debug_init(void)
return -EINVAL;
}
}
+
xa_init_flags(per_store_ap, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
if (want_store) {
idx = sdebug_add_store();
@@ -7770,7 +8088,9 @@ static int sdebug_add_store(void)
map_region(sip, 0, 2);
}
- rwlock_init(&sip->macc_lck);
+ rwlock_init(&sip->macc_data_lck);
+ rwlock_init(&sip->macc_meta_lck);
+ rwlock_init(&sip->macc_sector_lck);
return (int)n_idx;
err:
sdebug_erase_store((int)n_idx, sip);
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 10/10] nvme: Atomic write support
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (8 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 09/10] scsi: scsi_debug: " John Garry
@ 2024-06-10 10:43 ` John Garry
2024-06-17 17:24 ` Kanchan Joshi
2024-06-14 2:01 ` [PATCH v8 00/10] block atomic writes Martin K. Petersen
10 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-10 10:43 UTC (permalink / raw)
To: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Alan Adamson, John Garry
From: Alan Adamson <[email protected]>
Add support to set block layer request_queue atomic write limits. The
limits will be derived from either the namespace or controller atomic
parameters.
NVMe atomic-related parameters are grouped into "normal" and "power-fail"
(or PF) classes of parameter. For atomic write support, only the PF
parameters are of interest. The "normal" parameters are concerned with
racing reads and writes (a concern which also applies to PF). See NVM
Command Set Specification Revision 1.0d section 2.1.4 for reference.
Whether to use the per-namespace or the controller atomic parameters is
decided by NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace
Data Structure, NVM Command Set.
NVMe namespaces may define an atomic boundary, whereby no atomic guarantees
are provided for a write which straddles this boundary in the LBA space.
The block layer merging policy is such that no merge may occur in which the
resultant request would straddle such a boundary.
Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from
the atomic boundary rule. In addition, again unlike SCSI, there is no
dedicated atomic write command - a write which adheres to the atomic size
limit and boundary is implicitly atomic.
If NSFEAT bit 1 is set, the following parameters are of interest:
- NAWUPF (Namespace Atomic Write Unit Power Fail)
- NABSPF (Namespace Atomic Boundary Size Power Fail)
- NABO (Namespace Atomic Boundary Offset)
and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
- atomic_write_max_bytes = NAWUPF
- atomic_write_boundary = NABSPF
In the unlikely scenario that NABO is non-zero, atomic writes will not be
supported at all, as dealing with a non-zero offset adds extra complexity.
This policy may change in future.
In all cases, atomic_write_unit_min is set to the logical block size.
If NSFEAT bit 1 is unset, the following parameter is of interest:
- AWUPF (Atomic Write Unit Power Fail)
and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
- atomic_write_max_bytes = AWUPF
- atomic_write_boundary = 0
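As a worked example for the NSFEAT bit 1 set case (device values invented
purely for illustration): with a 4096 B logical block size, NAWUPF = 15 and
NABSPF = 31, we would get:
- atomic_bs = (1 + 15) * 4096 = 64 KiB
- atomic_write_unit_max = rounddown_pow_of_two(64 KiB) = 64 KiB
- atomic_write_max_bytes = 64 KiB
- atomic_write_boundary = (31 + 1) * 4096 = 128 KiB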
A new function, nvme_valid_atomic_write(), is also called from the
submission path to verify that a request submitted to the driver will
actually be executed atomically. As mentioned, there is no dedicated NVMe
atomic write command which could error for a write exceeding the controller
atomic write limits - such a write would just not be executed atomically.
Note on NABSPF:
There seems to be some vagueness in the spec as to whether NABSPF applies
when NSFEAT bit 1 is unset. Figure 97 does not explicitly mention NABSPF
or how it is affected by bit 1. However, Figure 4 does say to check Figure
97 for info about per-namespace parameters, which NABSPF is, so it is
implied. Note that nvme_update_disk_info() currently checks the namespace
parameter NABO regardless of this bit.
Signed-off-by: Alan Adamson <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
jpg: total rewrite
Signed-off-by: John Garry <[email protected]>
---
drivers/nvme/host/core.c | 49 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f5d150c62955..91001892f60b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -927,6 +927,30 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
return BLK_STS_OK;
}
+static bool nvme_valid_atomic_write(struct request *req)
+{
+ struct request_queue *q = req->q;
+ u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
+
+ if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
+ return false;
+
+ if (boundary_bytes) {
+ u64 mask = boundary_bytes - 1, imask = ~mask;
+ u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
+ u64 end = start + blk_rq_bytes(req) - 1;
+
+		/* If greater than the boundary size, it must be crossing a boundary */
+ if (blk_rq_bytes(req) > boundary_bytes)
+ return false;
+
+ if ((start & imask) != (end & imask))
+ return false;
+ }
+
+ return true;
+}
+
static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
struct request *req, struct nvme_command *cmnd,
enum nvme_opcode op)
@@ -941,6 +965,12 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
if (req->cmd_flags & REQ_RAHEAD)
dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
+ /*
+ * Ensure that nothing has been sent which cannot be executed
+ * atomically.
+ */
+ if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
+ return BLK_STS_INVAL;
cmnd->rw.opcode = op;
cmnd->rw.flags = 0;
@@ -1921,6 +1951,23 @@ static void nvme_configure_metadata(struct nvme_ctrl *ctrl,
}
}
+
+static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns,
+ struct nvme_id_ns *id, struct queue_limits *lim,
+ u32 bs, u32 atomic_bs)
+{
+ unsigned int boundary = 0;
+
+ if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+ if (le16_to_cpu(id->nabspf))
+ boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+ }
+ lim->atomic_write_hw_max = atomic_bs;
+ lim->atomic_write_hw_boundary = boundary;
+ lim->atomic_write_hw_unit_min = bs;
+ lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs);
+}
+
static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl)
{
return ctrl->max_hw_sectors / (NVME_CTRL_PAGE_SIZE >> SECTOR_SHIFT) + 1;
@@ -1967,6 +2014,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
else
atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;
+
+ nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs);
}
if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
--
2.31.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v8 10/10] nvme: Atomic write support
2024-06-10 10:43 ` [PATCH v8 10/10] nvme: " John Garry
@ 2024-06-17 17:24 ` Kanchan Joshi
2024-06-17 18:04 ` John Garry
0 siblings, 1 reply; 27+ messages in thread
From: Kanchan Joshi @ 2024-06-17 17:24 UTC (permalink / raw)
To: John Garry, axboe, kbusch, hch, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Alan Adamson
On 6/10/2024 4:13 PM, John Garry wrote:
> +static bool nvme_valid_atomic_write(struct request *req)
> +{
> + struct request_queue *q = req->q;
> + u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
> +
> + if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
> + return false;
> +
> + if (boundary_bytes) {
> + u64 mask = boundary_bytes - 1, imask = ~mask;
> + u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
> + u64 end = start + blk_rq_bytes(req) - 1;
> +
> +		/* If greater than the boundary size, it must be crossing a boundary */
> + if (blk_rq_bytes(req) > boundary_bytes)
> + return false;
Nit: I'd cache blk_rq_bytes(req), since it is computed three times and this
function is called for each atomic IO.
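i.e. something like this (untested, just to show the idea):

	static bool nvme_valid_atomic_write(struct request *req)
	{
		struct request_queue *q = req->q;
		u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
		u32 bytes = blk_rq_bytes(req);	/* read it once */

		if (bytes > queue_atomic_write_unit_max_bytes(q))
			return false;

		if (boundary_bytes) {
			u64 mask = boundary_bytes - 1, imask = ~mask;
			u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
			u64 end = start + bytes - 1;

			if (bytes > boundary_bytes)
				return false;

			if ((start & imask) != (end & imask))
				return false;
		}

		return true;
	}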
> +
> + if ((start & imask) != (end & imask))
> + return false;
> + }
> +
> + return true;
> +}
> +
> static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
> struct request *req, struct nvme_command *cmnd,
> enum nvme_opcode op)
> @@ -941,6 +965,12 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>
> if (req->cmd_flags & REQ_RAHEAD)
> dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
> + /*
> + * Ensure that nothing has been sent which cannot be executed
> + * atomically.
> + */
> + if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
> + return BLK_STS_INVAL;
>
Is this validity check specific to NVMe, or should it be moved up to the
block layer, which also knows the limits?
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 10/10] nvme: Atomic write support
2024-06-17 17:24 ` Kanchan Joshi
@ 2024-06-17 18:04 ` John Garry
2024-06-18 6:49 ` Christoph Hellwig
0 siblings, 1 reply; 27+ messages in thread
From: John Garry @ 2024-06-17 18:04 UTC (permalink / raw)
To: Kanchan Joshi, axboe, kbusch, hch, sagi, jejb, martin.petersen,
viro, brauner, dchinner, jack
Cc: djwong, linux-block, linux-kernel, linux-nvme, linux-fsdevel,
tytso, jbongio, linux-scsi, ojaswin, linux-aio, linux-btrfs,
io-uring, nilay, ritesh.list, willy, agk, snitzer, mpatocka,
dm-devel, hare, Alan Adamson
On 17/06/2024 18:24, Kanchan Joshi wrote:
> On 6/10/2024 4:13 PM, John Garry wrote:
>> +static bool nvme_valid_atomic_write(struct request *req)
>> +{
>> + struct request_queue *q = req->q;
>> + u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
>> +
>> + if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
>> + return false;
>> +
>> + if (boundary_bytes) {
>> + u64 mask = boundary_bytes - 1, imask = ~mask;
>> + u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
>> + u64 end = start + blk_rq_bytes(req) - 1;
>> +
>> +		/* If greater than the boundary size, it must be crossing a boundary */
>> + if (blk_rq_bytes(req) > boundary_bytes)
>> + return false;
>
> Nit: I'd cache blk_rq_bytes(req), since that is repeating and this
> function is called for each atomic IO.
blk_rq_bytes() is just a wrapper for rq->__data_len. I suppose that we
could cache that value to stop re-reading that memory, but I would
hope/expect that memory to be in the CPU cache anyway.
>
>> +
>> + if ((start & imask) != (end & imask))
>> + return false;
>> + }
>> +
>> + return true;
>> +}
>> +
>> static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>> struct request *req, struct nvme_command *cmnd,
>> enum nvme_opcode op)
>> @@ -941,6 +965,12 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>>
>> if (req->cmd_flags & REQ_RAHEAD)
>> dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>> + /*
>> + * Ensure that nothing has been sent which cannot be executed
>> + * atomically.
>> + */
>> + if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
>> + return BLK_STS_INVAL;
>>
>
> Is this validity check specific to NVMe or should this be moved up to
> block layer as it also knows the limits?
Only NVMe supports an LBA space boundary, so that part is specific to NVMe.
Regardless, the block layer should already ensure that the atomic write
length and boundary are respected. nvme_valid_atomic_write() is just an
insurance policy against the block layer or some other component not
doing its job.
For SCSI, the device would error - for example - if the atomic write
length was larger than the device supported. NVMe silently just does not
execute the write atomically in that scenario, which we must avoid.
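To make the boundary check concrete (numbers invented for illustration),
consider boundary_bytes = 128 KiB, so mask = 0x1ffff and imask = ~0x1ffff:

	u64 start = 112 * 1024;			/* 0x1c000 */
	u64 end = start + 32 * 1024 - 1;	/* 0x23fff */

	/* start & imask == 0x00000, end & imask == 0x20000 */

The masked values differ, so this 32 KiB write at offset 112 KiB straddles
a 128 KiB boundary and nvme_valid_atomic_write() rejects it.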
Thanks,
John
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 10/10] nvme: Atomic write support
2024-06-17 18:04 ` John Garry
@ 2024-06-18 6:49 ` Christoph Hellwig
2024-06-18 7:22 ` John Garry
0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-06-18 6:49 UTC (permalink / raw)
To: John Garry
Cc: Kanchan Joshi, axboe, kbusch, hch, sagi, jejb, martin.petersen,
viro, brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Alan Adamson
On Mon, Jun 17, 2024 at 07:04:23PM +0100, John Garry wrote:
>> Nit: I'd cache blk_rq_bytes(req), since it is computed three times and this
>> function is called for each atomic IO.
>
> blk_rq_bytes() is just a wrapper for rq->__data_len. I suppose that we
> could cache that value to stop re-reading that memory, but I would
> hope/expect that memory to be in the CPU cache anyway.
Yes, that feels a bit pointless.
> Only NVMe supports an LBA space boundary, so that part is specific to NVMe.
>
> Regardless, the block layer should already ensure that the atomic write
> length and boundary are respected. nvme_valid_atomic_write() is just an
> insurance policy against the block layer or some other component not doing
> its job.
>
> For SCSI, the device would error - for example - if the atomic write length
> was larger than the device supported. NVMe silently just does not execute
> the write atomically in that scenario, which we must avoid.
It might be worth to expand the comment to include this information to
help future readers.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 10/10] nvme: Atomic write support
2024-06-18 6:49 ` Christoph Hellwig
@ 2024-06-18 7:22 ` John Garry
0 siblings, 0 replies; 27+ messages in thread
From: John Garry @ 2024-06-18 7:22 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, axboe, kbusch, sagi, jejb, martin.petersen, viro,
brauner, dchinner, jack, djwong, linux-block, linux-kernel,
linux-nvme, linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin,
linux-aio, linux-btrfs, io-uring, nilay, ritesh.list, willy, agk,
snitzer, mpatocka, dm-devel, hare, Alan Adamson
On 18/06/2024 07:49, Christoph Hellwig wrote:
>> Only NVMe supports an LBA space boundary, so that part is specific to NVMe.
>>
>> Regardless, the block layer should already ensure that the atomic write
>> length and boundary are respected. nvme_valid_atomic_write() is just an
>> insurance policy against the block layer or some other component not doing
>> its job.
>>
>> For SCSI, the device would error - for example - if the atomic write length
>> was larger than the device supported. NVMe silently just does not execute
>> the write atomically in that scenario, which we must avoid.
> It might be worth to expand the comment to include this information to
> help future readers.
OK, will do. I have been asked this more than once now.
John
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 00/10] block atomic writes
2024-06-10 10:43 [PATCH v8 00/10] block atomic writes John Garry
` (9 preceding siblings ...)
2024-06-10 10:43 ` [PATCH v8 10/10] nvme: " John Garry
@ 2024-06-14 2:01 ` Martin K. Petersen
10 siblings, 0 replies; 27+ messages in thread
From: Martin K. Petersen @ 2024-06-14 2:01 UTC (permalink / raw)
To: John Garry
Cc: axboe, kbusch, hch, sagi, jejb, martin.petersen, viro, brauner,
dchinner, jack, djwong, linux-block, linux-kernel, linux-nvme,
linux-fsdevel, tytso, jbongio, linux-scsi, ojaswin, linux-aio,
linux-btrfs, io-uring, nilay, ritesh.list, willy, agk, snitzer,
mpatocka, dm-devel, hare
John,
> This series introduces a proposal to implementing atomic writes in the
> kernel for torn-write protection.
Reviewed-by: Martin K. Petersen <[email protected]>
--
Martin K. Petersen Oracle Linux Engineering
^ permalink raw reply [flat|nested] 27+ messages in thread