public inbox for [email protected]
* [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
@ 2023-10-09  9:33 Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 1/7] ublk: don't get ublk device reference in ublk_abort_queue() Ming Lei
                   ` (8 more replies)
  0 siblings, 9 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

Hello,

Simplify ublk request & io command abort handling with the newly added
cancelable uring_cmd. With this change, the abort logic becomes
simpler and more reliable, and it becomes easier to add new features,
such as relaxing the queue/ublk daemon association.

Passes the `blktests ublk` tests, and passes lockdep when running
`./check ublk` from blktests.
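
For readers new to the mechanism: a driver opts in per command, and
io_uring then re-invokes ->uring_cmd() with IO_URING_F_CANCEL when the
ring is torn down or cancellation is requested. A minimal sketch of the
pattern (the handler name is illustrative; see patch 6 for the real
ublk hookup):

    static int demo_uring_cmd(struct io_uring_cmd *cmd,
                              unsigned int issue_flags)
    {
            /* cancel path: io_uring calls back with IO_URING_F_CANCEL */
            if (issue_flags & IO_URING_F_CANCEL) {
                    io_uring_cmd_done(cmd, -ECANCELED, 0, issue_flags);
                    return 0;
            }

            /* issue path: mark cancelable, complete asynchronously later */
            io_uring_cmd_mark_cancelable(cmd, issue_flags);
            return -EIOCBQUEUED;
    }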


Ming Lei (7):
  ublk: don't get ublk device reference in ublk_abort_queue()
  ublk: make sure io cmd handled in submitter task context
  ublk: move ublk_cancel_dev() out of ub->mutex
  ublk: rename mm_lock as lock
  ublk: quiesce request queue when aborting queue
  ublk: replace monitor with cancelable uring_cmd
  ublk: simplify aborting request

 drivers/block/ublk_drv.c | 306 ++++++++++++++++++++++++---------------
 1 file changed, 190 insertions(+), 116 deletions(-)

-- 
2.41.0



* [PATCH for-6.7/io_uring 1/7] ublk: don't get ublk device reference in ublk_abort_queue()
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 2/7] ublk: make sure io cmd handled in submitter task context Ming Lei
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

ublk_abort_queue() is called from ublk_daemon_monitor_work(), where the
device is guaranteed to be live because the monitor work is canceled
when the device is removed, so there is no need to take a device
reference.

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 630ddfe6657b..9b3c0b3dd36e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1419,9 +1419,6 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 {
 	int i;
 
-	if (!ublk_get_device(ub))
-		return;
-
 	for (i = 0; i < ubq->q_depth; i++) {
 		struct ublk_io *io = &ubq->ios[i];
 
@@ -1437,7 +1434,6 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 				__ublk_fail_req(ubq, io, rq);
 		}
 	}
-	ublk_put_device(ub);
 }
 
 static void ublk_daemon_monitor_work(struct work_struct *work)
-- 
2.41.0



* [PATCH for-6.7/io_uring 2/7] ublk: make sure io cmd handled in submitter task context
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 1/7] ublk: don't get ublk device reference in ublk_abort_queue() Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 3/7] ublk: move ublk_cancel_dev() out of ub->mutex Ming Lei
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

In a well-implemented ublk server, ublk io commands are never linked
into any SQE link chain, and they are always handled in no-wait style,
so an io command is normally handled in the submitter task context.

However, the server may set IOSQE_ASYNC, or mistakenly link an io
command into a chain; then we may still run in io-wq context, where
ctx->uring_lock isn't held.

So in case of IO_URING_F_UNLOCKED, reschedule the command via
io_uring_cmd_complete_in_task() to force it to run in the submitter
task. Then ublk_ch_uring_cmd_local() is guaranteed to run with the
context's uring_lock held, and we no longer need to worry about
synchronization along the submission code path.
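
In code, the fix boils down to the following dispatch (condensed from
the diff below):

    static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
                                 unsigned int issue_flags)
    {
            /* io-wq context: bounce back to the submitter task */
            if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
                    io_uring_cmd_complete_in_task(cmd, ublk_ch_uring_cmd_cb);
                    return -EIOCBQUEUED;
            }

            /* submitter task context, ctx->uring_lock held */
            return ublk_ch_uring_cmd_local(cmd, issue_flags);
    }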

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9b3c0b3dd36e..46d499d96ca3 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1810,7 +1810,8 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
 	return NULL;
 }
 
-static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static inline int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
+		unsigned int issue_flags)
 {
 	/*
 	 * Not necessary for async retry, but let's keep it simple and always
@@ -1824,9 +1825,28 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
 		.addr = READ_ONCE(ub_src->addr)
 	};
 
+	WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED);
+
 	return __ublk_ch_uring_cmd(cmd, issue_flags, &ub_cmd);
 }
 
+static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
+		unsigned int issue_flags)
+{
+	ublk_ch_uring_cmd_local(cmd, issue_flags);
+}
+
+static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+	/* well-implemented server won't run into unlocked */
+	if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
+		io_uring_cmd_complete_in_task(cmd, ublk_ch_uring_cmd_cb);
+		return -EIOCBQUEUED;
+	}
+
+	return ublk_ch_uring_cmd_local(cmd, issue_flags);
+}
+
 static inline bool ublk_check_ubuf_dir(const struct request *req,
 		int ubuf_dir)
 {
-- 
2.41.0



* [PATCH for-6.7/io_uring 3/7] ublk: move ublk_cancel_dev() out of ub->mutex
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 1/7] ublk: don't get ublk device reference in ublk_abort_queue() Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 2/7] ublk: make sure io cmd handled in submitter task context Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 4/7] ublk: rename mm_lock as lock Ming Lei
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

ublk_cancel_dev() just calls ublk_cancel_queue() to cancel all pending
io commands after the ublk request queue is idle. The only protection
needed is for the read & write of ubq->nr_io_ready and for avoiding
duplicate command cancellation, so add a per-queue lock with a cancel
flag to provide that protection, and meanwhile move ublk_cancel_dev()
out of ub->mutex.

Then we no longer need to call io_uring_cmd_complete_in_task() to
cancel pending commands, and the same cancel logic can be reused for
cancelable uring_cmd.

This patch basically reverts commit ac5902f84bb5 ("ublk: fix AB-BA lockdep warning").
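
The core of the change is a small test-and-set so that each io command
is completed at most once, without holding ub->mutex (condensed from
the diff below):

    bool done;

    spin_lock(&ubq->cancel_lock);
    done = !!(io->flags & UBLK_IO_FLAG_CANCELED);
    if (!done)
            io->flags |= UBLK_IO_FLAG_CANCELED;
    spin_unlock(&ubq->cancel_lock);

    if (!done)
            io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0,
                              IO_URING_F_UNLOCKED);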

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 40 +++++++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 46d499d96ca3..15b52210488a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -115,6 +115,9 @@ struct ublk_uring_cmd_pdu {
  */
 #define UBLK_IO_FLAG_NEED_GET_DATA 0x08
 
+/* atomic RW with ubq->cancel_lock */
+#define UBLK_IO_FLAG_CANCELED	0x80000000
+
 struct ublk_io {
 	/* userspace buffer address from io cmd */
 	__u64	addr;
@@ -139,6 +142,7 @@ struct ublk_queue {
 	bool force_abort;
 	bool timeout;
 	unsigned short nr_io_ready;	/* how many ios setup */
+	spinlock_t		cancel_lock;
 	struct ublk_device *dev;
 	struct ublk_io ios[];
 };
@@ -1473,28 +1477,28 @@ static inline bool ublk_queue_ready(struct ublk_queue *ubq)
 	return ubq->nr_io_ready == ubq->q_depth;
 }
 
-static void ublk_cmd_cancel_cb(struct io_uring_cmd *cmd, unsigned issue_flags)
-{
-	io_uring_cmd_done(cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
-}
-
 static void ublk_cancel_queue(struct ublk_queue *ubq)
 {
 	int i;
 
-	if (!ublk_queue_ready(ubq))
-		return;
-
 	for (i = 0; i < ubq->q_depth; i++) {
 		struct ublk_io *io = &ubq->ios[i];
 
-		if (io->flags & UBLK_IO_FLAG_ACTIVE)
-			io_uring_cmd_complete_in_task(io->cmd,
-						      ublk_cmd_cancel_cb);
-	}
+		if (io->flags & UBLK_IO_FLAG_ACTIVE) {
+			bool done;
 
-	/* all io commands are canceled */
-	ubq->nr_io_ready = 0;
+			spin_lock(&ubq->cancel_lock);
+			done = !!(io->flags & UBLK_IO_FLAG_CANCELED);
+			if (!done)
+				io->flags |= UBLK_IO_FLAG_CANCELED;
+			spin_unlock(&ubq->cancel_lock);
+
+			if (!done)
+				io_uring_cmd_done(io->cmd,
+						UBLK_IO_RES_ABORT, 0,
+						IO_URING_F_UNLOCKED);
+		}
+	}
 }
 
 /* Cancel all pending commands, must be called after del_gendisk() returns */
@@ -1541,7 +1545,6 @@ static void __ublk_quiesce_dev(struct ublk_device *ub)
 	blk_mq_quiesce_queue(ub->ub_disk->queue);
 	ublk_wait_tagset_rqs_idle(ub);
 	ub->dev_info.state = UBLK_S_DEV_QUIESCED;
-	ublk_cancel_dev(ub);
 	/* we are going to release task_struct of ubq_daemon and resets
 	 * ->ubq_daemon to NULL. So in monitor_work, check on ubq_daemon causes UAF.
 	 * Besides, monitor_work is not necessary in QUIESCED state since we have
@@ -1564,6 +1567,7 @@ static void ublk_quiesce_work_fn(struct work_struct *work)
 	__ublk_quiesce_dev(ub);
  unlock:
 	mutex_unlock(&ub->mutex);
+	ublk_cancel_dev(ub);
 }
 
 static void ublk_unquiesce_dev(struct ublk_device *ub)
@@ -1603,8 +1607,8 @@ static void ublk_stop_dev(struct ublk_device *ub)
 	put_disk(ub->ub_disk);
 	ub->ub_disk = NULL;
  unlock:
-	ublk_cancel_dev(ub);
 	mutex_unlock(&ub->mutex);
+	ublk_cancel_dev(ub);
 	cancel_delayed_work_sync(&ub->monitor_work);
 }
 
@@ -1978,6 +1982,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
 	void *ptr;
 	int size;
 
+	spin_lock_init(&ubq->cancel_lock);
 	ubq->flags = ub->dev_info.flags;
 	ubq->q_id = q_id;
 	ubq->q_depth = ub->dev_info.queue_depth;
@@ -2585,8 +2590,9 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
 	int i;
 
 	WARN_ON_ONCE(!(ubq->ubq_daemon && ubq_daemon_is_dying(ubq)));
+
 	/* All old ioucmds have to be completed */
-	WARN_ON_ONCE(ubq->nr_io_ready);
+	ubq->nr_io_ready = 0;
 	/* old daemon is PF_EXITING, put it now */
 	put_task_struct(ubq->ubq_daemon);
 	/* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
-- 
2.41.0



* [PATCH for-6.7/io_uring 4/7] ublk: rename mm_lock as lock
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (2 preceding siblings ...)
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 3/7] ublk: move ublk_cancel_dev() out of ub->mutex Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 5/7] ublk: quiesce request queue when aborting queue Ming Lei
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

Rename the mm_lock field of ublk_device to lock, so that this lock can
be reused to protect access to ub->ub_disk, which the next patch uses
to simplify ublk_abort_queue() by quiescing the queue.

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 15b52210488a..ab8828ab3a0c 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -170,7 +170,7 @@ struct ublk_device {
 
 	struct mutex		mutex;
 
-	spinlock_t		mm_lock;
+	spinlock_t		lock;
 	struct mm_struct	*mm;
 
 	struct ublk_params	params;
@@ -1361,12 +1361,12 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
 	unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT;
 	int q_id, ret = 0;
 
-	spin_lock(&ub->mm_lock);
+	spin_lock(&ub->lock);
 	if (!ub->mm)
 		ub->mm = current->mm;
 	if (current->mm != ub->mm)
 		ret = -EINVAL;
-	spin_unlock(&ub->mm_lock);
+	spin_unlock(&ub->lock);
 
 	if (ret)
 		return ret;
@@ -2341,7 +2341,7 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 	if (!ub)
 		goto out_unlock;
 	mutex_init(&ub->mutex);
-	spin_lock_init(&ub->mm_lock);
+	spin_lock_init(&ub->lock);
 	INIT_WORK(&ub->quiesce_work, ublk_quiesce_work_fn);
 	INIT_WORK(&ub->stop_work, ublk_stop_work_fn);
 	INIT_DELAYED_WORK(&ub->monitor_work, ublk_daemon_monitor_work);
-- 
2.41.0



* [PATCH for-6.7/io_uring 5/7] ublk: quiesce request queue when aborting queue
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (3 preceding siblings ...)
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 4/7] ublk: rename mm_lock as lock Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 6/7] ublk: replace monitor with cancelable uring_cmd Ming Lei
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

So far, aborting the queue ends requests while the ubq daemon is
exiting, and this can run concurrently with ublk_queue_rq(). That is
fragile, and we depend on tricky use of UBLK_IO_FLAG_ABORTED to avoid
the race.

Quiesce the request queue when aborting it, so the two code paths run
completely exclusively; this makes it easier to add new ublk features,
such as relaxing the single-task limit for each queue.
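
The serialization comes from blk-mq quiescing: blk_mq_quiesce_queue()
blocks new ->queue_rq() calls and waits for in-flight ones to finish,
so the abort runs with no concurrent ublk_queue_rq(). Condensed from
the diff below:

    spin_lock(&ub->lock);
    disk = ub->ub_disk;             /* ub->lock now protects ->ub_disk */
    if (disk)
            get_device(disk_to_dev(disk));
    spin_unlock(&ub->lock);

    if (!disk)                      /* disk is already dead */
            return false;

    blk_mq_quiesce_queue(disk->queue);      /* excludes ublk_queue_rq() */
    /* abort dying queues via ublk_abort_queue() */
    blk_mq_unquiesce_queue(disk->queue);
    put_device(disk_to_dev(disk));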

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 59 ++++++++++++++++++++++++++++++++++------
 1 file changed, 50 insertions(+), 9 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index ab8828ab3a0c..e8d52cd7b226 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1440,25 +1440,59 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 	}
 }
 
-static void ublk_daemon_monitor_work(struct work_struct *work)
+static bool ublk_abort_requests(struct ublk_device *ub)
 {
-	struct ublk_device *ub =
-		container_of(work, struct ublk_device, monitor_work.work);
+	struct gendisk *disk;
 	int i;
 
+	spin_lock(&ub->lock);
+	disk = ub->ub_disk;
+	if (disk)
+		get_device(disk_to_dev(disk));
+	spin_unlock(&ub->lock);
+
+	/* Our disk has been dead */
+	if (!disk)
+		return false;
+
+	/* Now we are serialized with ublk_queue_rq() */
+	blk_mq_quiesce_queue(disk->queue);
 	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
 		struct ublk_queue *ubq = ublk_get_queue(ub, i);
 
 		if (ubq_daemon_is_dying(ubq)) {
-			if (ublk_queue_can_use_recovery(ubq))
-				schedule_work(&ub->quiesce_work);
-			else
-				schedule_work(&ub->stop_work);
-
 			/* abort queue is for making forward progress */
 			ublk_abort_queue(ub, ubq);
 		}
 	}
+	blk_mq_unquiesce_queue(disk->queue);
+	put_device(disk_to_dev(disk));
+
+	return true;
+}
+
+static void ublk_daemon_monitor_work(struct work_struct *work)
+{
+	struct ublk_device *ub =
+		container_of(work, struct ublk_device, monitor_work.work);
+	int i;
+
+	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+		struct ublk_queue *ubq = ublk_get_queue(ub, i);
+
+		if (ubq_daemon_is_dying(ubq))
+			goto found;
+	}
+	return;
+
+found:
+	if (!ublk_abort_requests(ub))
+		return;
+
+	if (ublk_can_use_recovery(ub))
+		schedule_work(&ub->quiesce_work);
+	else
+		schedule_work(&ub->stop_work);
 
 	/*
 	 * We can't schedule monitor work after ub's state is not UBLK_S_DEV_LIVE.
@@ -1593,6 +1627,8 @@ static void ublk_unquiesce_dev(struct ublk_device *ub)
 
 static void ublk_stop_dev(struct ublk_device *ub)
 {
+	struct gendisk *disk;
+
 	mutex_lock(&ub->mutex);
 	if (ub->dev_info.state == UBLK_S_DEV_DEAD)
 		goto unlock;
@@ -1602,10 +1638,15 @@ static void ublk_stop_dev(struct ublk_device *ub)
 		ublk_unquiesce_dev(ub);
 	}
 	del_gendisk(ub->ub_disk);
+
+	/* Sync with ublk_abort_queue() by holding the lock */
+	spin_lock(&ub->lock);
+	disk = ub->ub_disk;
 	ub->dev_info.state = UBLK_S_DEV_DEAD;
 	ub->dev_info.ublksrv_pid = -1;
-	put_disk(ub->ub_disk);
 	ub->ub_disk = NULL;
+	spin_unlock(&ub->lock);
+	put_disk(disk);
  unlock:
 	mutex_unlock(&ub->mutex);
 	ublk_cancel_dev(ub);
-- 
2.41.0



* [PATCH for-6.7/io_uring 6/7] ublk: replace monitor with cancelable uring_cmd
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (4 preceding siblings ...)
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 5/7] ublk: quiesce request queue when aborting queue Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 7/7] ublk: simplify aborting request Ming Lei
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

Monitor work introduces an extra context for handling abort; this is
prone to races and also adds extra delay to abort handling.

Now that cancelable uring_cmd is supported, use it instead:

1) The cancel callback is run either from the uring_cmd submission task
context or after the io_uring context has exited, so the callback runs
exclusively with ublk_ch_uring_cmd() and __ublk_rq_task_work().

2) The previous patch quiesces the request queue when calling
ublk_abort_queue(), which now runs completely exclusively with
ublk_queue_rq() and ublk_ch_uring_cmd()/__ublk_rq_task_work().

3) In the timeout handler, if all IOs are in flight, then all uring
commands have been completed, so uring_cmd canceling can't provide
forward progress any more; call ublk_abort_requests() from the timeout
handler in that case.

This simplifies queue aborting, and is helpful for adding new features,
such as relaxing the limit of a single task handling one queue.
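
In code, each io command stashes its routing info in the uring_cmd pdu
and marks itself cancelable, and the ->uring_cmd() entry routes
IO_URING_F_CANCEL invocations to the cancel handler (condensed from the
diff below):

    /* at command issue time, in ublk_prep_cancel() */
    pdu->ubq = ubq;
    pdu->tag = tag;
    io_uring_cmd_mark_cancelable(cmd, issue_flags);

    /* in ublk_ch_uring_cmd() */
    if (unlikely(issue_flags & IO_URING_F_CANCEL)) {
            ublk_uring_cmd_cancel_fn(cmd, issue_flags);
            return 0;
    }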

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 208 ++++++++++++++++++++++-----------------
 1 file changed, 119 insertions(+), 89 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index e8d52cd7b226..75ed7b87a844 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -75,6 +75,7 @@ struct ublk_rq_data {
 
 struct ublk_uring_cmd_pdu {
 	struct ublk_queue *ubq;
+	u16 tag;
 };
 
 /*
@@ -141,14 +142,13 @@ struct ublk_queue {
 	unsigned int max_io_sz;
 	bool force_abort;
 	bool timeout;
+	bool canceling;
 	unsigned short nr_io_ready;	/* how many ios setup */
 	spinlock_t		cancel_lock;
 	struct ublk_device *dev;
 	struct ublk_io ios[];
 };
 
-#define UBLK_DAEMON_MONITOR_PERIOD	(5 * HZ)
-
 struct ublk_device {
 	struct gendisk		*ub_disk;
 
@@ -179,11 +179,6 @@ struct ublk_device {
 	unsigned int		nr_queues_ready;
 	unsigned int		nr_privileged_daemon;
 
-	/*
-	 * Our ubq->daemon may be killed without any notification, so
-	 * monitor each queue's daemon periodically
-	 */
-	struct delayed_work	monitor_work;
 	struct work_struct	quiesce_work;
 	struct work_struct	stop_work;
 };
@@ -194,10 +189,11 @@ struct ublk_params_header {
 	__u32	types;
 };
 
+static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
+
 static inline unsigned int ublk_req_build_flags(struct request *req);
 static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
 						   int tag);
-
 static inline bool ublk_dev_is_user_copy(const struct ublk_device *ub)
 {
 	return ub->dev_info.flags & UBLK_F_USER_COPY;
@@ -1122,8 +1118,6 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
 		blk_mq_requeue_request(rq, false);
 	else
 		blk_mq_end_request(rq, BLK_STS_IOERR);
-
-	mod_delayed_work(system_wq, &ubq->dev->monitor_work, 0);
 }
 
 static inline void __ublk_rq_task_work(struct request *req,
@@ -1244,12 +1238,12 @@ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 	io = &ubq->ios[rq->tag];
 	/*
 	 * If the check pass, we know that this is a re-issued request aborted
-	 * previously in monitor_work because the ubq_daemon(cmd's task) is
+	 * previously in cancel fn because the ubq_daemon(cmd's task) is
 	 * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
 	 * because this ioucmd's io_uring context may be freed now if no inflight
 	 * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
 	 *
-	 * Note: monitor_work sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
+	 * Note: cancel fn sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
 	 * the tag). Then the request is re-started(allocating the tag) and we are here.
 	 * Since releasing/allocating a tag implies smp_mb(), finding UBLK_IO_FLAG_ABORTED
 	 * guarantees that here is a re-issued request aborted previously.
@@ -1257,17 +1251,15 @@ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 	if (unlikely(io->flags & UBLK_IO_FLAG_ABORTED)) {
 		ublk_abort_io_cmds(ubq);
 	} else {
-		struct io_uring_cmd *cmd = io->cmd;
-		struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
-
-		pdu->ubq = ubq;
-		io_uring_cmd_complete_in_task(cmd, ublk_rq_task_work_cb);
+		io_uring_cmd_complete_in_task(io->cmd, ublk_rq_task_work_cb);
 	}
 }
 
 static enum blk_eh_timer_return ublk_timeout(struct request *rq)
 {
 	struct ublk_queue *ubq = rq->mq_hctx->driver_data;
+	unsigned int nr_inflight = 0;
+	int i;
 
 	if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) {
 		if (!ubq->timeout) {
@@ -1278,6 +1270,29 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
 		return BLK_EH_DONE;
 	}
 
+	if (!ubq_daemon_is_dying(ubq))
+		return BLK_EH_RESET_TIMER;
+
+	for (i = 0; i < ubq->q_depth; i++) {
+		struct ublk_io *io = &ubq->ios[i];
+
+		if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
+			nr_inflight++;
+	}
+
+	/* cancelable uring_cmd can't help us if all commands are in-flight */
+	if (nr_inflight == ubq->q_depth) {
+		struct ublk_device *ub = ubq->dev;
+
+		if (ublk_abort_requests(ub, ubq)) {
+			if (ublk_can_use_recovery(ub))
+				schedule_work(&ub->quiesce_work);
+			else
+				schedule_work(&ub->stop_work);
+		}
+		return BLK_EH_DONE;
+	}
+
 	return BLK_EH_RESET_TIMER;
 }
 
@@ -1307,7 +1322,7 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	blk_mq_start_request(bd->rq);
 
-	if (unlikely(ubq_daemon_is_dying(ubq))) {
+	if (unlikely(ubq->canceling)) {
 		__ublk_abort_rq(ubq, rq);
 		return BLK_STS_OK;
 	}
@@ -1415,9 +1430,9 @@ static void ublk_commit_completion(struct ublk_device *ub,
 }
 
 /*
- * When ->ubq_daemon is exiting, either new request is ended immediately,
- * or any queued io command is drained, so it is safe to abort queue
- * lockless
+ * Called from ubq_daemon context via cancel fn, meantime quiesce ublk
+ * blk-mq queue, so we are called exclusively with blk-mq and ubq_daemon
+ * context, so everything is serialized.
  */
 static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 {
@@ -1440,10 +1455,17 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 	}
 }
 
-static bool ublk_abort_requests(struct ublk_device *ub)
+static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
 {
 	struct gendisk *disk;
-	int i;
+
+	spin_lock(&ubq->cancel_lock);
+	if (ubq->canceling) {
+		spin_unlock(&ubq->cancel_lock);
+		return false;
+	}
+	ubq->canceling = true;
+	spin_unlock(&ubq->cancel_lock);
 
 	spin_lock(&ub->lock);
 	disk = ub->ub_disk;
@@ -1457,53 +1479,69 @@ static bool ublk_abort_requests(struct ublk_device *ub)
 
 	/* Now we are serialized with ublk_queue_rq() */
 	blk_mq_quiesce_queue(disk->queue);
-	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
-		struct ublk_queue *ubq = ublk_get_queue(ub, i);
-
-		if (ubq_daemon_is_dying(ubq)) {
-			/* abort queue is for making forward progress */
-			ublk_abort_queue(ub, ubq);
-		}
-	}
+	/* abort queue is for making forward progress */
+	ublk_abort_queue(ub, ubq);
 	blk_mq_unquiesce_queue(disk->queue);
 	put_device(disk_to_dev(disk));
 
 	return true;
 }
 
-static void ublk_daemon_monitor_work(struct work_struct *work)
+static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
+		unsigned int issue_flags)
 {
-	struct ublk_device *ub =
-		container_of(work, struct ublk_device, monitor_work.work);
-	int i;
+	bool done;
 
-	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
-		struct ublk_queue *ubq = ublk_get_queue(ub, i);
+	if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
+		return;
 
-		if (ubq_daemon_is_dying(ubq))
-			goto found;
-	}
-	return;
+	spin_lock(&ubq->cancel_lock);
+	done = !!(io->flags & UBLK_IO_FLAG_CANCELED);
+	if (!done)
+		io->flags |= UBLK_IO_FLAG_CANCELED;
+	spin_unlock(&ubq->cancel_lock);
+
+	if (!done)
+		io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
+}
+
+/*
+ * The ublk char device won't be closed when calling cancel fn, so both
+ * ublk device and queue are guaranteed to be live
+ */
+static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+		unsigned int issue_flags)
+{
+	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+	struct ublk_queue *ubq = pdu->ubq;
+	struct task_struct *task;
+	struct ublk_device *ub;
+	bool need_schedule;
+	struct ublk_io *io;
 
-found:
-	if (!ublk_abort_requests(ub))
+	if (WARN_ON_ONCE(!ubq))
 		return;
 
-	if (ublk_can_use_recovery(ub))
-		schedule_work(&ub->quiesce_work);
-	else
-		schedule_work(&ub->stop_work);
+	if (WARN_ON_ONCE(pdu->tag >= ubq->q_depth))
+		return;
 
-	/*
-	 * We can't schedule monitor work after ub's state is not UBLK_S_DEV_LIVE.
-	 * after ublk_remove() or __ublk_quiesce_dev() is started.
-	 *
-	 * No need ub->mutex, monitor work are canceled after state is marked
-	 * as not LIVE, so new state is observed reliably.
-	 */
-	if (ub->dev_info.state == UBLK_S_DEV_LIVE)
-		schedule_delayed_work(&ub->monitor_work,
-				UBLK_DAEMON_MONITOR_PERIOD);
+	task = io_uring_cmd_get_task(cmd);
+	if (WARN_ON_ONCE(task && task != ubq->ubq_daemon))
+		return;
+
+	ub = ubq->dev;
+	need_schedule = ublk_abort_requests(ub, ubq);
+
+	io = &ubq->ios[pdu->tag];
+	WARN_ON_ONCE(io->cmd != cmd);
+	ublk_cancel_cmd(ubq, &ubq->ios[pdu->tag], issue_flags);
+
+	if (need_schedule) {
+		if (ublk_can_use_recovery(ub))
+			schedule_work(&ub->quiesce_work);
+		else
+			schedule_work(&ub->stop_work);
+	}
 }
 
 static inline bool ublk_queue_ready(struct ublk_queue *ubq)
@@ -1515,24 +1553,8 @@ static void ublk_cancel_queue(struct ublk_queue *ubq)
 {
 	int i;
 
-	for (i = 0; i < ubq->q_depth; i++) {
-		struct ublk_io *io = &ubq->ios[i];
-
-		if (io->flags & UBLK_IO_FLAG_ACTIVE) {
-			bool done;
-
-			spin_lock(&ubq->cancel_lock);
-			done = !!(io->flags & UBLK_IO_FLAG_CANCELED);
-			if (!done)
-				io->flags |= UBLK_IO_FLAG_CANCELED;
-			spin_unlock(&ubq->cancel_lock);
-
-			if (!done)
-				io_uring_cmd_done(io->cmd,
-						UBLK_IO_RES_ABORT, 0,
-						IO_URING_F_UNLOCKED);
-		}
-	}
+	for (i = 0; i < ubq->q_depth; i++)
+		ublk_cancel_cmd(ubq, &ubq->ios[i], IO_URING_F_UNLOCKED);
 }
 
 /* Cancel all pending commands, must be called after del_gendisk() returns */
@@ -1579,15 +1601,6 @@ static void __ublk_quiesce_dev(struct ublk_device *ub)
 	blk_mq_quiesce_queue(ub->ub_disk->queue);
 	ublk_wait_tagset_rqs_idle(ub);
 	ub->dev_info.state = UBLK_S_DEV_QUIESCED;
-	/* we are going to release task_struct of ubq_daemon and resets
-	 * ->ubq_daemon to NULL. So in monitor_work, check on ubq_daemon causes UAF.
-	 * Besides, monitor_work is not necessary in QUIESCED state since we have
-	 * already scheduled quiesce_work and quiesced all ubqs.
-	 *
-	 * Do not let monitor_work schedule itself if state it QUIESCED. And we cancel
-	 * it here and re-schedule it in END_USER_RECOVERY to avoid UAF.
-	 */
-	cancel_delayed_work_sync(&ub->monitor_work);
 }
 
 static void ublk_quiesce_work_fn(struct work_struct *work)
@@ -1650,7 +1663,6 @@ static void ublk_stop_dev(struct ublk_device *ub)
  unlock:
 	mutex_unlock(&ub->mutex);
 	ublk_cancel_dev(ub);
-	cancel_delayed_work_sync(&ub->monitor_work);
 }
 
 /* device can only be started after all IOs are ready */
@@ -1701,6 +1713,21 @@ static inline void ublk_fill_io_cmd(struct ublk_io *io,
 	io->addr = buf_addr;
 }
 
+static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
+				    unsigned int issue_flags,
+				    struct ublk_queue *ubq, unsigned int tag)
+{
+	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+
+	/*
+	 * Safe to refer to @ubq since ublk_queue won't be died until its
+	 * commands are completed
+	 */
+	pdu->ubq = ubq;
+	pdu->tag = tag;
+	io_uring_cmd_mark_cancelable(cmd, issue_flags);
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 			       unsigned int issue_flags,
 			       const struct ublksrv_io_cmd *ub_cmd)
@@ -1816,6 +1843,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 	default:
 		goto out;
 	}
+	ublk_prep_cancel(cmd, issue_flags, ubq, tag);
 	return -EIOCBQUEUED;
 
  out:
@@ -1883,6 +1911,11 @@ static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
 
 static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
 {
+	if (unlikely(issue_flags & IO_URING_F_CANCEL)) {
+		ublk_uring_cmd_cancel_fn(cmd, issue_flags);
+		return 0;
+	}
+
 	/* well-implemented server won't run into unlocked */
 	if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
 		io_uring_cmd_complete_in_task(cmd, ublk_ch_uring_cmd_cb);
@@ -2213,8 +2246,6 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 	if (wait_for_completion_interruptible(&ub->completion) != 0)
 		return -EINTR;
 
-	schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
-
 	mutex_lock(&ub->mutex);
 	if (ub->dev_info.state == UBLK_S_DEV_LIVE ||
 	    test_bit(UB_STATE_USED, &ub->state)) {
@@ -2385,7 +2416,6 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 	spin_lock_init(&ub->lock);
 	INIT_WORK(&ub->quiesce_work, ublk_quiesce_work_fn);
 	INIT_WORK(&ub->stop_work, ublk_stop_work_fn);
-	INIT_DELAYED_WORK(&ub->monitor_work, ublk_daemon_monitor_work);
 
 	ret = ublk_alloc_dev_number(ub, header->dev_id);
 	if (ret < 0)
@@ -2639,6 +2669,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
 	/* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
 	ubq->ubq_daemon = NULL;
 	ubq->timeout = false;
+	ubq->canceling = false;
 
 	for (i = 0; i < ubq->q_depth; i++) {
 		struct ublk_io *io = &ubq->ios[i];
@@ -2724,7 +2755,6 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
 			__func__, header->dev_id);
 	blk_mq_kick_requeue_list(ub->ub_disk->queue);
 	ub->dev_info.state = UBLK_S_DEV_LIVE;
-	schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
 	ret = 0;
  out_unlock:
 	mutex_unlock(&ub->mutex);
-- 
2.41.0



* [PATCH for-6.7/io_uring 7/7] ublk: simplify aborting request
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (5 preceding siblings ...)
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 6/7] ublk: replace monitor with cancelable uring_cmd Ming Lei
@ 2023-10-09  9:33 ` Ming Lei
  2023-10-16 23:47 ` [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
  2023-10-17  1:13 ` Jens Axboe
  8 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2023-10-09  9:33 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block; +Cc: Ming Lei

Now ublk_abort_queue() runs exclusively with ublk_queue_rq() and the
ubq_daemon task, so simplify request aborting (a condensed sketch
follows the list):

- set UBLK_IO_FLAG_ABORTED in ublk_abort_queue() just to mark this
request as aborted

- abort the request in ublk_queue_rq() if ubq->canceling is set
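
The resulting ublk_queue_rq() ordering matters: the ubq->canceling
check now runs before blk_mq_start_request(), so ublk_abort_queue()
only ever fails requests that were actually started (condensed from the
diff below):

    if (unlikely(ubq->canceling)) {
            __ublk_abort_rq(ubq, rq);       /* requeue or end with IOERR */
            return BLK_STS_OK;
    }

    blk_mq_start_request(bd->rq);
    ublk_queue_cmd(ubq, rq);
    return BLK_STS_OK;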

Signed-off-by: Ming Lei <[email protected]>
---
 drivers/block/ublk_drv.c | 41 +++++++++++-----------------------------
 1 file changed, 11 insertions(+), 30 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 75ed7b87a844..20237b8f318a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1083,13 +1083,10 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
 {
 	WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
 
-	if (!(io->flags & UBLK_IO_FLAG_ABORTED)) {
-		io->flags |= UBLK_IO_FLAG_ABORTED;
-		if (ublk_queue_can_use_recovery_reissue(ubq))
-			blk_mq_requeue_request(req, false);
-		else
-			ublk_put_req_ref(ubq, req);
-	}
+	if (ublk_queue_can_use_recovery_reissue(ubq))
+		blk_mq_requeue_request(req, false);
+	else
+		ublk_put_req_ref(ubq, req);
 }
 
 static void ubq_complete_io_cmd(struct ublk_io *io, int res,
@@ -1230,27 +1227,10 @@ static void ublk_rq_task_work_cb(struct io_uring_cmd *cmd, unsigned issue_flags)
 static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 {
 	struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
-	struct ublk_io *io;
 
-	if (!llist_add(&data->node, &ubq->io_cmds))
-		return;
+	if (llist_add(&data->node, &ubq->io_cmds)) {
+		struct ublk_io *io = &ubq->ios[rq->tag];
 
-	io = &ubq->ios[rq->tag];
-	/*
-	 * If the check pass, we know that this is a re-issued request aborted
-	 * previously in cancel fn because the ubq_daemon(cmd's task) is
-	 * PF_EXITING. We cannot call io_uring_cmd_complete_in_task() anymore
-	 * because this ioucmd's io_uring context may be freed now if no inflight
-	 * ioucmd exists. Otherwise we may cause null-deref in ctx->fallback_work.
-	 *
-	 * Note: cancel fn sets UBLK_IO_FLAG_ABORTED and ends this request(releasing
-	 * the tag). Then the request is re-started(allocating the tag) and we are here.
-	 * Since releasing/allocating a tag implies smp_mb(), finding UBLK_IO_FLAG_ABORTED
-	 * guarantees that here is a re-issued request aborted previously.
-	 */
-	if (unlikely(io->flags & UBLK_IO_FLAG_ABORTED)) {
-		ublk_abort_io_cmds(ubq);
-	} else {
 		io_uring_cmd_complete_in_task(io->cmd, ublk_rq_task_work_cb);
 	}
 }
@@ -1320,13 +1300,12 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (ublk_queue_can_use_recovery(ubq) && unlikely(ubq->force_abort))
 		return BLK_STS_IOERR;
 
-	blk_mq_start_request(bd->rq);
-
 	if (unlikely(ubq->canceling)) {
 		__ublk_abort_rq(ubq, rq);
 		return BLK_STS_OK;
 	}
 
+	blk_mq_start_request(bd->rq);
 	ublk_queue_cmd(ubq, rq);
 
 	return BLK_STS_OK;
@@ -1449,8 +1428,10 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
 			 * will do it
 			 */
 			rq = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], i);
-			if (rq)
+			if (rq && blk_mq_request_started(rq)) {
+				io->flags |= UBLK_IO_FLAG_ABORTED;
 				__ublk_fail_req(ubq, io, rq);
+			}
 		}
 	}
 }
@@ -1534,7 +1515,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
 
 	io = &ubq->ios[pdu->tag];
 	WARN_ON_ONCE(io->cmd != cmd);
-	ublk_cancel_cmd(ubq, &ubq->ios[pdu->tag], issue_flags);
+	ublk_cancel_cmd(ubq, io, issue_flags);
 
 	if (need_schedule) {
 		if (ublk_can_use_recovery(ub))
-- 
2.41.0



* Re: [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (6 preceding siblings ...)
  2023-10-09  9:33 ` [PATCH for-6.7/io_uring 7/7] ublk: simplify aborting request Ming Lei
@ 2023-10-16 23:47 ` Ming Lei
  2023-10-17  1:14   ` Jens Axboe
  2023-10-17  1:13 ` Jens Axboe
  8 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2023-10-16 23:47 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block

On Mon, Oct 09, 2023 at 05:33:15PM +0800, Ming Lei wrote:
> Hello,
> 
> Simplify ublk request & io command abort handling with the newly added
> cancelable uring_cmd. With this change, the abort logic becomes
> simpler and more reliable, and it becomes easier to add new features,
> such as relaxing the queue/ublk daemon association.
>
> Passes the `blktests ublk` tests, and passes lockdep when running
> `./check ublk` from blktests.

Hello Guys,

Ping...


Thanks,
Ming



* Re: [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
  2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
                   ` (7 preceding siblings ...)
  2023-10-16 23:47 ` [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
@ 2023-10-17  1:13 ` Jens Axboe
  2023-10-17  5:00   ` Ming Lei
  8 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2023-10-17  1:13 UTC (permalink / raw)
  To: io-uring, linux-block, Ming Lei


On Mon, 09 Oct 2023 17:33:15 +0800, Ming Lei wrote:
> Simplify ublk request & io command abort handling with the newly added
> cancelable uring_cmd. With this change, the abort logic becomes
> simpler and more reliable, and it becomes easier to add new features,
> such as relaxing the queue/ublk daemon association.
>
> Passes the `blktests ublk` tests, and passes lockdep when running
> `./check ublk` from blktests.
> 
> [...]

Applied, thanks!

[1/7] ublk: don't get ublk device reference in ublk_abort_queue()
      commit: a5365f65ea2244fef4d6b5076210b0fc4fe5c104
[2/7] ublk: make sure io cmd handled in submitter task context
      commit: fad0f2b5c6d8f4622ed09ebfd6c08817abbfa20d
[3/7] ublk: move ublk_cancel_dev() out of ub->mutex
      commit: 95290eef462aaec33fb6f5f9da541042f9c9a97c
[4/7] ublk: rename mm_lock as lock
      commit: 9b8ce170c0bc82c50bf0db6187e00d3a601df334
[5/7] ublk: quiesce request queue when aborting queue
      commit: e4a81fcd73422bdee366c3a076826d92ee8f2669
[6/7] ublk: replace monitor with cancelable uring_cmd
      commit: 3aa8ac4a0c293fcc1b83c4f1a515b66f1f0123a0
[7/7] ublk: simplify aborting request
      commit: ac7eb8f9b49c786aace696bcca13a60953ea9b11

Best regards,
-- 
Jens Axboe





* Re: [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
  2023-10-16 23:47 ` [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
@ 2023-10-17  1:14   ` Jens Axboe
  0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2023-10-17  1:14 UTC (permalink / raw)
  To: Ming Lei, io-uring, linux-block

On 10/16/23 5:47 PM, Ming Lei wrote:
> On Mon, Oct 09, 2023 at 05:33:15PM +0800, Ming Lei wrote:
>> Hello,
>>
>> Simplify ublk request & io command abort handling with the newly added
>> cancelable uring_cmd. With this change, the abort logic becomes
>> simpler and more reliable, and it becomes easier to add new features,
>> such as relaxing the queue/ublk daemon association.
>>
>> Passes the `blktests ublk` tests, and passes lockdep when running
>> `./check ublk` from blktests.
> 
> Hello Guys,
> 
> Ping...

I thought I had already queued this up... Done so now, thanks Ming.

-- 
Jens Axboe




* Re: [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
  2023-10-17  1:13 ` Jens Axboe
@ 2023-10-17  5:00   ` Ming Lei
  2023-10-17 14:24     ` Jens Axboe
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2023-10-17  5:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, ming.lei

On Mon, Oct 16, 2023 at 07:13:53PM -0600, Jens Axboe wrote:
> 
> On Mon, 09 Oct 2023 17:33:15 +0800, Ming Lei wrote:
> > Simplify ublk request & io command abort handling with the newly added
> > cancelable uring_cmd. With this change, the abort logic becomes
> > simpler and more reliable, and it becomes easier to add new features,
> > such as relaxing the queue/ublk daemon association.
> >
> > Passes the `blktests ublk` tests, and passes lockdep when running
> > `./check ublk` from blktests.
> > 
> > [...]
> 
> Applied, thanks!
> 
> [1/7] ublk: don't get ublk device reference in ublk_abort_queue()
>       commit: a5365f65ea2244fef4d6b5076210b0fc4fe5c104
> [2/7] ublk: make sure io cmd handled in submitter task context
>       commit: fad0f2b5c6d8f4622ed09ebfd6c08817abbfa20d
> [3/7] ublk: move ublk_cancel_dev() out of ub->mutex
>       commit: 95290eef462aaec33fb6f5f9da541042f9c9a97c
> [4/7] ublk: rename mm_lock as lock
>       commit: 9b8ce170c0bc82c50bf0db6187e00d3a601df334
> [5/7] ublk: quiesce request queue when aborting queue
>       commit: e4a81fcd73422bdee366c3a076826d92ee8f2669
> [6/7] ublk: replace monitor with cancelable uring_cmd
>       commit: 3aa8ac4a0c293fcc1b83c4f1a515b66f1f0123a0
> [7/7] ublk: simplify aborting request
>       commit: ac7eb8f9b49c786aace696bcca13a60953ea9b11

Hi Jens,

Thanks for pulling this patchset in.

However, it depends on the recent cancelable uring_cmd support in
for-6.7/io_uring, but you pulled it into for-6.7/block, which is now
broken.

Thanks,
Ming



* Re: [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd
  2023-10-17  5:00   ` Ming Lei
@ 2023-10-17 14:24     ` Jens Axboe
  0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2023-10-17 14:24 UTC (permalink / raw)
  To: Ming Lei; +Cc: io-uring, linux-block

On 10/16/23 11:00 PM, Ming Lei wrote:
> On Mon, Oct 16, 2023 at 07:13:53PM -0600, Jens Axboe wrote:
>>
>> On Mon, 09 Oct 2023 17:33:15 +0800, Ming Lei wrote:
>>> Simplify ublk request & io command abort handling with the newly added
>>> cancelable uring_cmd. With this change, the abort logic becomes
>>> simpler and more reliable, and it becomes easier to add new features,
>>> such as relaxing the queue/ublk daemon association.
>>>
>>> Passes the `blktests ublk` tests, and passes lockdep when running
>>> `./check ublk` from blktests.
>>>
>>> [...]
>>
>> Applied, thanks!
>>
>> [1/7] ublk: don't get ublk device reference in ublk_abort_queue()
>>       commit: a5365f65ea2244fef4d6b5076210b0fc4fe5c104
>> [2/7] ublk: make sure io cmd handled in submitter task context
>>       commit: fad0f2b5c6d8f4622ed09ebfd6c08817abbfa20d
>> [3/7] ublk: move ublk_cancel_dev() out of ub->mutex
>>       commit: 95290eef462aaec33fb6f5f9da541042f9c9a97c
>> [4/7] ublk: rename mm_lock as lock
>>       commit: 9b8ce170c0bc82c50bf0db6187e00d3a601df334
>> [5/7] ublk: quiesce request queue when aborting queue
>>       commit: e4a81fcd73422bdee366c3a076826d92ee8f2669
>> [6/7] ublk: replace monitor with cancelable uring_cmd
>>       commit: 3aa8ac4a0c293fcc1b83c4f1a515b66f1f0123a0
>> [7/7] ublk: simplify aborting request
>>       commit: ac7eb8f9b49c786aace696bcca13a60953ea9b11
> 
> Hi Jens,
> 
> Thanks for pulling this patchset it.
> 
> However, it depends on recent cancelable uring_cmd in for-6.7/io_uring,
> but you pull it in for-6.7/block, which is broken now.

Ah indeed, and this is probably why it didn't get added initially. I'll
sort it out.

-- 
Jens Axboe



end of thread

Thread overview: 13+ messages
2023-10-09  9:33 [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 1/7] ublk: don't get ublk device reference in ublk_abort_queue() Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 2/7] ublk: make sure io cmd handled in submitter task context Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 3/7] ublk: move ublk_cancel_dev() out of ub->mutex Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 4/7] ublk: rename mm_lock as lock Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 5/7] ublk: quiesce request queue when aborting queue Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 6/7] ublk: replace monitor with cancelable uring_cmd Ming Lei
2023-10-09  9:33 ` [PATCH for-6.7/io_uring 7/7] ublk: simplify aborting request Ming Lei
2023-10-16 23:47 ` [PATCH for-6.7/io_uring 0/7] ublk: simplify abort with cancelable uring_cmd Ming Lei
2023-10-17  1:14   ` Jens Axboe
2023-10-17  1:13 ` Jens Axboe
2023-10-17  5:00   ` Ming Lei
2023-10-17 14:24     ` Jens Axboe
