public inbox for [email protected]
 help / color / mirror / Atom feed
From: Damien Le Moal <[email protected]>
To: "[email protected]" <[email protected]>
Cc: Kanchan Joshi <[email protected]>, Jens Axboe <[email protected]>,
	Pavel Begunkov <[email protected]>,
	Kanchan Joshi <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	Matthew Wilcox <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	SelvaKumar S <[email protected]>,
	Nitesh Shetty <[email protected]>,
	Javier Gonzalez <[email protected]>,
	Johannes Thumshirn <[email protected]>,
	Naohiro Aota <[email protected]>
Subject: Re: [PATCH v4 6/6] io_uring: add support for zone-append
Date: Fri, 31 Jul 2020 10:16:49 +0000	[thread overview]
Message-ID: <MWHPR04MB3758A4B2967DB1FABAAD9265E74E0@MWHPR04MB3758.namprd04.prod.outlook.com> (raw)
In-Reply-To: [email protected]

On 2020/07/31 18:41, [email protected] wrote:
> On Fri, Jul 31, 2020 at 09:34:50AM +0000, Damien Le Moal wrote:
>> Sync writes are done under the inode lock, so there cannot be other writers at
>> the same time. And for the sync case, since the actual written offset is
>> necessarily equal to the file size before the write, there is no need to report
>> it (there is no system call that can report that anyway). For this sync case,
>> the only change that the use of zone append introduces compared to regular
>> writes is the potential for more short writes.
>>
>> Adding a flag for "report the actual offset for appending writes" is fine with
>> me, but do you also mean to use this flag for driving zone append write vs
>> regular writes in zonefs ?
> 
> Let's keep semantics and implementation separate.  For the case
> where we report the actual offset we need a size limitation and no
> short writes.

OK. So the name of the flag confused me. The flag name should reflect "do zone
append and report the written offset", right?

Just to clarify, here was my thinking for zonefs:
1) The file is open with O_APPEND, or the aio has RWF_APPEND: then it is OK to
assume that the application did not set the aio offset, since APPEND means
offset == file size. In that case, do a zone append and report back the written
offset (see the small sketch just below).
2) The file is open without O_APPEND and the aio does not have RWF_APPEND: the
application specified an aio offset and we must respect it, writing at that
exact offset, so use regular writes.
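
Roughly, from userspace, case (1) maps onto the existing per-write append flag
(RWF_APPEND, kernel 4.16+). A minimal sketch follows; the file path is just a
placeholder (a real zonefs sequential file would additionally need O_DIRECT and
an aligned buffer), and it assumes a glibc recent enough to expose
pwritev2()/RWF_APPEND. Today only the byte count comes back; reporting the
offset at which the append landed is exactly what the new flag would add:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

int main(void)
{
	/* Placeholder path, e.g. a zonefs sequential zone file */
	int fd = open("/mnt/zonefs/seq/0", O_WRONLY);
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	ssize_t ret;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 0, sizeof(buf));

	/* offset == -1 with RWF_APPEND: data is appended at the end of file */
	ret = pwritev2(fd, &iov, 1, -1, RWF_APPEND);
	if (ret < 0)
		perror("pwritev2");
	else
		printf("appended %zd bytes (written offset not reported today)\n",
		       ret);
	return 0;
}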

For regular file systems, under the case (1) condition, the FS can use whatever
it wants for the implementation and report back the written offset, which will
always be the file size at the time the aio was issued.

Your method with a new flag to switch between (1) and (2) is OK with me, but the
"no short writes" part may be difficult to achieve in a regular FS, no? I do not
think current FSes have such guarantees... especially in the case of buffered
async writes, I think.

> Anything with those semantics can be implemented using Zone Append
> trivially in zonefs, and we don't even need the exclusive lock in that
> case.  But even without that flag anything that has an exclusive lock can
> at least in theory be implemented using Zone Append, it just needs
> support for submitting another request from the I/O completion handler
> of the first.  I just don't think it is worth it - with the exclusive
> lock we do have access to the zone serialized, so a normal write works
> just fine.  Both for the sync and async case.

We did switch zonefs to always use zone append writes in the sync case. Hmmm...
I am not so sure anymore that it was such a good idea.

> 
>> The fcntl or ioctl for getting the max atomic write size would be fine too.
>> Given that zonefs is very close to the underlying zoned drive, I was assuming
>> that the application can simply consult the device sysfs zone_append_max_bytes
>> queue attribute.
> 
> For zonefs we can, yes.  But in many ways that is a lot more cumbersome
> than having an API that works on the fd you want to write on.

Got it. Makes sense.
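
For reference, the sysfs route today looks something like this (the disk name is
hardcoded as a placeholder; in real code the application would first have to
resolve the fd it is writing to back to the right block device, which is the
cumbersome part an fd-based fcntl would avoid):

#include <stdio.h>

static long zone_append_max_bytes(const char *disk)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/block/%s/queue/zone_append_max_bytes", disk);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	printf("max zone append: %ld bytes\n", zone_append_max_bytes("nvme0n1"));
	return 0;
}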

>> For regular file systems, this value would be used internally
>> only. I do not really see how it can be useful to applications. Furthermore, the
>> file system may have a hard time giving that information to the application
>> depending on its underlying storage configuration (e.g. erasure
>> coding/declustered RAID).
> 
> File systems might have all kinds of limits of their own (e.g. extent
> sizes).  And a good API that just works everywhere and is properly
> documented is much better than heaps of cargo culted crap all over
> applications.

OK. Makes sense. That said, taking Naohiro's work on btrfs as an example, zone
append is used for every data write, regardless of whether it is
O_APPEND/RWF_APPEND or not. The size limitation for zone append writes is not
needed at all by applications: the maximum extent size is aligned to the max
append write size internally, and if the application issues a larger write, it
loops over multiple extents, much like any regular write may do (if there is an
overwrite, etc.).
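
Internally that split is nothing more than the usual chunking loop, something
like this (purely illustrative, not the actual btrfs code; the submit callback
stands in for issuing one zone append / allocating one extent):

#include <stddef.h>

static void split_append_write(const char *buf, size_t len, size_t max_append,
			       void (*submit)(const char *chunk, size_t chunk_len))
{
	while (len) {
		/* never exceed the device's max zone append size per I/O */
		size_t chunk = len < max_append ? len : max_append;

		submit(buf, chunk);	/* one zone append, one extent */
		buf += chunk;
		len -= chunk;
	}
}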

For the regular FS case, my thinking on the semantics really was: if asked to do
so, return the written offset for RWF_APPEND aios. And I think that
implementing that does not depend in any way on what the FS does internally.

But I think I am starting to see the picture you are drawing here:
1) Introduce a fcntl() to get "maximum size for atomic append writes"
2) Introduce an aio flag specifying "Do atomic append write and report written
offset"
3) For an aio specifying "Do atomic append write and report written offset", if
the aio is larger than "maximum size for atomic append writes", fail it on
submission, no short writes.
4) For any other aio, it is business as usual: the aio is processed as it is
now. (A rough sketch of this flow is below.)
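
To make sure I understand points (1)-(4), here is a very rough sketch of how an
application would use this. The fcntl command and the per-IO flag names below
are made up purely for illustration; nothing with these names exists today, and
how the written offset would actually be returned to userspace (e.g. through the
io_uring CQE) is exactly what is still being discussed:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/uio.h>

#define F_GET_APPEND_MAX_BYTES	1024	/* hypothetical fcntl command */
#define RWF_REPORT_OFFSET	0x80	/* hypothetical per-IO flag */

static ssize_t atomic_append(int fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	long max;

	/* (1) ask the FS for the maximum atomic append write size */
	max = fcntl(fd, F_GET_APPEND_MAX_BYTES);

	/* (3) too large: the kernel would fail this at submission time,
	 * so the new flag never produces a short write */
	if (max > 0 && len > (size_t)max) {
		errno = EINVAL;
		return -1;
	}

	/* (2) append and ask for the written offset to be reported back;
	 * (4) an aio without the new flag keeps today's behavior */
	return pwritev2(fd, &iov, 1, -1, RWF_APPEND | RWF_REPORT_OFFSET);
}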

And the implementation is actually completely free to use zone append writes or
regular writes, regardless of whether the "do atomic append write and report
written offset" flag is used or not.

Is that your thinking? That would work for me. It would actually end up
completely unifying the interface behavior for zonefs and regular FSes. Same
semantics.


-- 
Damien Le Moal
Western Digital Research
