From: Keith Busch <[email protected]>
To: Christoph Hellwig <[email protected]>
Cc: Dave Chinner <[email protected]>,
	Pierre Labat <[email protected]>,
	Kanchan Joshi <[email protected]>,
	Keith Busch <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>,
	"[email protected]" <[email protected]>
Subject: Re: [EXT] Re: [PATCHv11 0/9] write hints with nvme fdp and scsi streams
Date: Mon, 18 Nov 2024 16:37:08 -0700
Message-ID: <ZzvPpD5O8wJzeHth@kbusch-mbp>
In-Reply-To: <[email protected]>

On Fri, Nov 15, 2024 at 05:53:48PM +0100, Christoph Hellwig wrote:
> On Fri, Nov 15, 2024 at 09:28:05AM -0700, Keith Busch wrote:
> > SSDs operate that way. FDP just reports more stuff because that's what
> > people kept asking for. But it doesn't require you to fundamentally change
> > how you access it. You've singled out FDP to force a sequential write
> > requirement that requires unique support from every filesystem
> > despite the feature not needing that.
> 
> No I haven't.  If you think so you are fundamentally misunderstanding
> what I'm saying.

We have an API that has existed for 10+ years. You are gatekeeping that
interface by declaring NVMe's FDP is not allowed to use it. Do I have
that wrong? You initially blocked this because you didn't like how the
spec committee worked. Now you've shifted to pretending FDP devices
require explicit filesystem handholding that was explicitly NOT part of
that protocol.
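
For anyone joining late, the existing interface in question is the
per-file write lifetime hint set through fcntl(2) (F_SET_RW_HINT).
Below is a minimal userspace sketch, not anything from this series: the
fallback #defines mirror the kernel UAPI <linux/fcntl.h> values in case
the libc headers don't expose them, and the file name is just an
example. Whether the hint influences on-device placement depends on the
driver support being argued about in this thread (NVMe FDP / SCSI
streams).

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#ifndef F_SET_RW_HINT
#define F_SET_RW_HINT		1036	/* F_LINUX_SPECIFIC_BASE + 12 */
#endif
#ifndef RWH_WRITE_LIFE_SHORT
#define RWH_WRITE_LIFE_SHORT	2
#endif

int main(void)
{
	/* "journal.log" is only an illustrative file name. */
	int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Advisory hint: data written through this fd is expected to be
	 * short-lived.  No filesystem-specific setup is involved; the
	 * hint simply rides along with ordinary writes.
	 */
	uint64_t hint = RWH_WRITE_LIFE_SHORT;
	if (fcntl(fd, F_SET_RW_HINT, &hint) < 0)
		perror("F_SET_RW_HINT");

	const char msg[] = "short-lived log entry\n";
	if (write(fd, msg, sizeof(msg) - 1) < 0)
		perror("write");

	close(fd);
	return 0;
}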
 
> > We have demonstrated a 40% reduction in write amplification from doing the
> > simplest possible thing that doesn't require any filesystem or
> > kernel-user ABI changes at all. You might think that's not significant
> > enough to let people realize those gains without more invasive block
> > stack changes, but you may not be buying NAND in bulk if that's the case.
> 
> And as iterated multiple times, you are doing that by bypassing the
> file system layer in a forceful way that breaks all abstractions and
> makes your feature unavailable for file systems.

Your filesystem layering breaks the abstractions and capabilities the
drives already provide. You're doing more harm than good by trying to
game how the media works here.

> I've also thrown you a nugget by first explaining and then even writing
> prototype code to show how you get what you want while using the proper
> abstractions.

Oh, the untested prototype that wasn't posted to any mailing list for
serious review? The one that forces FDP to subscribe to the zoned
interface, and only for XFS, despite these devices being squarely in the
"conventional" SSD category and absolutely NOT zoned devices? Never mind
that I have other users on other filesystems successfully using the
existing interfaces that your prototype doesn't do a thing for. Yeah,
thanks...

I appreciate that you put the time into getting your thoughts into actual
code, and it does look very valuable for ACTUAL ZONED block devices. But
it seems to have missed the entire point of what this hardware feature
does. If you're doing low-level media garbage collection with FDP and
tracking fake media write pointers, then you're doing it wrong. Please
use Open Channel or ZNS SSDs if you want that interface, and stop
gatekeeping the EXISTING interface that has proven value in production
software today.

> But instead of picking up on that you just whine like
> this.  Either spend a little bit of effort to actually get the interface
> right or just shut up.

Why the fuck should I make an effort to improve your pet project that
I don't have a customer for? They want to use the interface that was
created 10 years ago, exactly for the reason it was created, and no one
wants to introduce the risk of an untested and unproven, major and
invasive filesystem and block stack change in the kernel in the near
term!

Thread overview: 28+ messages
2024-11-08 19:36 [PATCHv11 0/9] write hints with nvme fdp and scsi streams Keith Busch
2024-11-08 19:36 ` [PATCHv11 1/9] block: use generic u16 for write hints Keith Busch
2024-11-08 19:36 ` [PATCHv11 2/9] block: introduce max_write_hints queue limit Keith Busch
2024-11-08 19:36 ` [PATCHv11 3/9] statx: add write hint information Keith Busch
2024-11-08 19:36 ` [PATCHv11 4/9] block: allow ability to limit partition write hints Keith Busch
2024-11-08 19:36 ` [PATCHv11 5/9] block, fs: add write hint to kiocb Keith Busch
2024-11-08 19:36 ` [PATCHv11 6/9] io_uring: enable per-io hinting capability Keith Busch
2024-11-08 19:36 ` [PATCHv11 7/9] block: export placement hint feature Keith Busch
2024-11-11 10:29 ` [PATCHv11 0/9] write hints with nvme fdp and scsi streams Christoph Hellwig
2024-11-11 16:27   ` Keith Busch
2024-11-11 16:34     ` Christoph Hellwig
2024-11-12 13:26   ` Kanchan Joshi
2024-11-12 13:34     ` Christoph Hellwig
2024-11-12 14:25       ` Keith Busch
2024-11-12 16:50         ` Christoph Hellwig
2024-11-12 17:19           ` Christoph Hellwig
2024-11-12 18:18         ` [EXT] " Pierre Labat
2024-11-13  4:47           ` Christoph Hellwig
2024-11-13 23:51             ` Dave Chinner
2024-11-14  3:09               ` Martin K. Petersen
2024-11-14  6:07               ` Christoph Hellwig
2024-11-15 16:28                 ` Keith Busch
2024-11-15 16:53                   ` Christoph Hellwig
2024-11-18 23:37                     ` Keith Busch [this message]
2024-11-19  7:15                       ` Christoph Hellwig
2024-11-20 17:21                         ` Darrick J. Wong
2024-11-20 18:11                           ` Keith Busch
2024-11-21  7:17                             ` Christoph Hellwig
