public inbox for [email protected]
From: "Alexander V. Buev" <[email protected]>
To: Christoph Hellwig <[email protected]>
Cc: <[email protected]>, <[email protected]>,
	Jens Axboe <[email protected]>,
	"Martin K . Petersen" <[email protected]>,
	"Pavel Begunkov" <[email protected]>,
	Chaitanya Kulkarni <[email protected]>,
	Mikhail Malygin <[email protected]>, <[email protected]>
Subject: Re: [PATCH v4 1/3] block: bio-integrity: add PI iovec to bio
Date: Fri, 9 Sep 2022 19:19:57 +0300	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

> On Fri, Sep 09, 2022 at 03:20:38PM +0300, Alexander V. Buev wrote:
> > Added functions to attach user PI iovec pages to bio and release these
> > pages via bio_integrity_free.
> 
> Before I get into nitpicking on the nitty gritty details:
> 
> what is the reason for pinning down the memory for the iovecs here?
> Other interfaces like the nvme passthrough code simply copy from
> user assuming that the amount of metadata passed will usually be
> rather small, and thus faster doing a copy.

In short, for the universality of the solution.
From my point of view, we have data & metadata (PI),
and we should process both with the same method.

We also work with large IO, where the PI can be greater than PAGE_SIZE.
I think that allocating & copying PAGE_SIZE bytes (and in the future more)
per IO is not a good idea.
Also, any block driver can register its own integrity profile
with a tuple_size greater than 8 or 16 bytes.

Maybe I am wrong, but in the future we could register some number of buffers
and pin them once at start. This is much the same idea as the "SELECT BUFFERS"
technique, but for vector operations and with PI support.

For now we want to be able to do IO with PI to a block device
with minimal restrictions in the interface.

But I think you are right - for small IO it may be faster to allocate & copy
instead of pinning pages. Maybe this is a point for future optimization?



-- 
Alexander Buev


Thread overview: 7+ messages
2022-09-09 12:20 [PATCH v4 0/3] implement direct IO with integrity Alexander V. Buev
2022-09-09 12:20 ` [PATCH v4 1/3] block: bio-integrity: add PI iovec to bio Alexander V. Buev
2022-09-09 14:38   ` Christoph Hellwig
2022-09-09 16:19     ` Alexander V. Buev [this message]
2022-09-09 12:20 ` [PATCH v4 2/3] block: io-uring: add READV_PI/WRITEV_PI operations Alexander V. Buev
2022-09-15 23:22   ` kernel test robot
2022-09-09 12:20 ` [PATCH v4 3/3] block: fops: handle IOCB_USE_PI in direct IO Alexander V. Buev
