From: Ming Lei <[email protected]>
To: Kanchan Joshi <[email protected]>
Cc: [email protected], [email protected], [email protected],
	[email protected], [email protected],
	[email protected], [email protected],
	[email protected], Anuj Gupta <[email protected]>,
	[email protected]
Subject: Re: [PATCH for-next v3 4/4] nvme: wire up async polling for io passthrough commands
Date: Wed, 9 Aug 2023 09:15:57 +0800	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

Hi Kanchan,

On Tue, Aug 23, 2022 at 09:44:43PM +0530, Kanchan Joshi wrote:
> Store a cookie during submission, and use that to implement
> completion-polling inside the ->uring_cmd_iopoll handler.
> This handler makes use of existing bio poll facility.
> 
> Signed-off-by: Kanchan Joshi <[email protected]>
> Signed-off-by: Anuj Gupta <[email protected]>
> ---

...

>  
> +int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
> +{
> +	struct bio *bio;
> +	int ret = 0;
> +	struct nvme_ns *ns;
> +	struct request_queue *q;
> +
> +	rcu_read_lock();
> +	bio = READ_ONCE(ioucmd->cookie);
> +	ns = container_of(file_inode(ioucmd->file)->i_cdev,
> +			struct nvme_ns, cdev);
> +	q = ns->queue;
> +	if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio && bio->bi_bdev)
> +		ret = bio_poll(bio, NULL, 0);
> +	rcu_read_unlock();
> +	return ret;
> +}

It doesn't look good to call bio_poll() while holding the RCU read lock,
since set_page_dirty_lock() may sleep in the end_io code path (it takes
the page lock), and sleeping is not allowed inside an RCU read-side
critical section:

blk_rq_unmap_user
	bio_release_pages
		__bio_release_pages
			set_page_dirty_lock
				lock_page

You probably need to move the page dirtying into workqueue context, for
example via bio_check_pages_dirty(), but then I guess passthrough io poll
performance may drop.
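
For reference, the block layer's direct-IO completion (blkdev_bio_end_io())
already deals with the same constraint by dirtying the READ pages at
submission time and deferring any re-dirtying to a workqueue. A rough,
untested sketch of that pattern is below; the should_dirty flag is only
illustrative, and hooking this into the blk_rq_unmap_user() path would need
extra care since the bio ownership is different there:

	/* submission side: mark the READ pages dirty up front */
	if (bio_data_dir(bio) == READ)
		bio_set_pages_dirty(bio);

	/* completion side: instead of releasing/dirtying pages inline */
	if (should_dirty) {
		/*
		 * If any page was cleaned in the meantime, the bio is
		 * handed off to a workqueue which re-dirties and releases
		 * the pages there (where sleeping is fine); note it also
		 * drops the bio reference itself.
		 */
		bio_check_pages_dirty(bio);
	} else {
		bio_release_pages(bio, false);
		bio_put(bio);
	}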

Maybe we need to investigate how to remove the rcu read lock here.


>  #ifdef CONFIG_NVME_MULTIPATH
>  static int nvme_ns_head_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
>  		void __user *argp, struct nvme_ns_head *head, int srcu_idx)
> @@ -685,6 +721,29 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
>  	srcu_read_unlock(&head->srcu, srcu_idx);
>  	return ret;
>  }
> +
> +int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
> +{
> +	struct cdev *cdev = file_inode(ioucmd->file)->i_cdev;
> +	struct nvme_ns_head *head = container_of(cdev, struct nvme_ns_head, cdev);
> +	int srcu_idx = srcu_read_lock(&head->srcu);
> +	struct nvme_ns *ns = nvme_find_path(head);
> +	struct bio *bio;
> +	int ret = 0;
> +	struct request_queue *q;
> +
> +	if (ns) {
> +		rcu_read_lock();
> +		bio = READ_ONCE(ioucmd->cookie);
> +		q = ns->queue;
> +		if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio
> +				&& bio->bi_bdev)
> +			ret = bio_poll(bio, NULL, 0);
> +		rcu_read_unlock();

Same issue as above.


thanks,
Ming


Thread overview: 7+ messages
     [not found] <CGME20220823162504epcas5p22a67e394c0fe1f563432b2f411b2fad3@epcas5p2.samsung.com>
2022-08-23 16:14 ` [PATCH for-next v3 0/4] iopoll support for io_uring/nvme Kanchan Joshi
     [not found]   ` <CGME20220823162508epcas5p3ae39903d3ee1079134fb70ed675159fc@epcas5p3.samsung.com>
2022-08-23 16:14     ` [PATCH for-next v3 1/4] fs: add file_operations->uring_cmd_iopoll Kanchan Joshi
     [not found]   ` <CGME20220823162511epcas5p46fc0e384524f0a386651bc694ff21976@epcas5p4.samsung.com>
2022-08-23 16:14     ` [PATCH for-next v3 2/4] io_uring: add iopoll infrastructure for io_uring_cmd Kanchan Joshi
     [not found]   ` <CGME20220823162514epcas5p1a86cebaed6993eacd976b59fc2c68f29@epcas5p1.samsung.com>
2022-08-23 16:14     ` [PATCH for-next v3 3/4] block: export blk_rq_is_poll Kanchan Joshi
     [not found]   ` <CGME20220823162517epcas5p2f1b808e60bae4bc1161b2d3a3a388534@epcas5p2.samsung.com>
2022-08-23 16:14     ` [PATCH for-next v3 4/4] nvme: wire up async polling for io passthrough commands Kanchan Joshi
2023-08-09  1:15       ` Ming Lei [this message]
2022-09-02 15:35   ` [PATCH for-next v3 0/4] iopoll support for io_uring/nvme Jens Axboe
