public inbox for [email protected]
* [PATCHSET 0/4] Enable bio recycling for polled IO
@ 2021-08-09 21:23 Jens Axboe
  2021-08-09 21:23 ` [PATCH 1/4] bio: add allocation cache abstraction Jens Axboe
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-09 21:23 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block

Hi,

This is v2 of this patchset. The main change from v1 is that we're no
longer passing the cache pointer in struct kiocb, and the primary reason
for that is to avoid growing it by 8 bytes. That would take it over one
cacheline, and that is a noticeable slowdown for hot users of kiocb. Hence
this was re-architected to store it in the per-task io_uring structure
instead. The only real downside of that, imho, is that we need helper calls
to get at it, and that it's then obviously io_uring specific rather than
something that could have multiple users. The latter I don't consider a big
problem, as nobody else supports async polled IO anyway.

The tl;dr here is that we get about a 10% bump in polled performance with
this patchset, as we can recycle bio structures essentially for free.
Outside of that, see the explanations in each patch. I've also got an iomap
patch, but I'm trying to keep this to a single user until there's agreement
on the direction.
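
As a quick orientation, the intended alloc/free flow ends up looking
roughly like the below - just a condensed sketch of the fs/block_dev.c
hookup in patch 4, not new code:

	bio = NULL;
	if (iocb->ki_flags & IOCB_ALLOC_CACHE) {
		/* NULL if the task has no cache, or nr_pages needs external vecs */
		bio = bio_cache_get(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
		if (!bio)
			iocb->ki_flags &= ~IOCB_ALLOC_CACHE;
	}
	if (!bio)
		bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);

	/* ... submit and poll ... */

	if (iocb->ki_flags & IOCB_ALLOC_CACHE)
		bio_cache_put(bio);	/* recycle into the per-task cache */
	else
		bio_put(bio);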

Against for-5.15/io_uring, and can also be found in my
io_uring-bio-cache.2 branch.

 block/bio.c              | 126 +++++++++++++++++++++++++++++++++++----
 fs/block_dev.c           |  30 ++++++++--
 fs/io_uring.c            |  52 ++++++++++++++++
 include/linux/bio.h      |  24 ++++++--
 include/linux/fs.h       |   2 +
 include/linux/io_uring.h |   7 +++
 6 files changed, 221 insertions(+), 20 deletions(-)

-- 
Jens Axboe




* [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-09 21:23 [PATCHSET 0/4] Enable bio recycling for polled IO Jens Axboe
@ 2021-08-09 21:23 ` Jens Axboe
  2021-08-10 13:15   ` Ming Lei
  2021-08-09 21:23 ` [PATCH 2/4] fs: add bio alloc cache kiocb flag Jens Axboe
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2021-08-09 21:23 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, Jens Axboe

Add a set of helpers that can encapsulate bio allocations, reusing them
as needed. Caller must provide the necessary locking, if any is needed.
The primary intended use case is polled IO from io_uring, which will not
need any external locking.

Very simple - keeps a count of bio's in the cache, and maintains a max
of 512 with a slack of 64. If we get above max + slack, we drop slack
number of bio's.

The cache is intended to be per-task, and the user will need to supply
the storage for it. As io_uring will be the only user right now, provide
a hook that returns the cache there. Stub it out as NULL initially.

Signed-off-by: Jens Axboe <[email protected]>
---
 block/bio.c              | 126 +++++++++++++++++++++++++++++++++++----
 include/linux/bio.h      |  24 ++++++--
 include/linux/io_uring.h |   7 +++
 3 files changed, 141 insertions(+), 16 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..3bbda1be27be 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -20,6 +20,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/blk-crypto.h>
 #include <linux/xarray.h>
+#include <linux/io_uring.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -238,6 +239,35 @@ static void bio_free(struct bio *bio)
 	}
 }
 
+static inline void __bio_init(struct bio *bio)
+{
+	bio->bi_next = NULL;
+	bio->bi_bdev = NULL;
+	bio->bi_opf = 0;
+	bio->bi_flags = bio->bi_ioprio = bio->bi_write_hint = 0;
+	bio->bi_status = 0;
+	bio->bi_iter.bi_sector = 0;
+	bio->bi_iter.bi_size = 0;
+	bio->bi_iter.bi_idx = 0;
+	bio->bi_iter.bi_bvec_done = 0;
+	bio->bi_end_io = NULL;
+	bio->bi_private = NULL;
+#ifdef CONFIG_BLK_CGROUP
+	bio->bi_blkg = NULL;
+	bio->bi_issue.value = 0;
+#ifdef CONFIG_BLK_CGROUP_IOCOST
+	bio->bi_iocost_cost = 0;
+#endif
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	bio->bi_crypt_context = NULL;
+#endif
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	bio->bi_integrity = NULL;
+#endif
+	bio->bi_vcnt = 0;
+}
+
 /*
  * Users of this function have their own bio allocation. Subsequently,
  * they must remember to pair any call to bio_init() with bio_uninit()
@@ -246,7 +276,7 @@ static void bio_free(struct bio *bio)
 void bio_init(struct bio *bio, struct bio_vec *table,
 	      unsigned short max_vecs)
 {
-	memset(bio, 0, sizeof(*bio));
+	__bio_init(bio);
 	atomic_set(&bio->__bi_remaining, 1);
 	atomic_set(&bio->__bi_cnt, 1);
 
@@ -591,6 +621,19 @@ void guard_bio_eod(struct bio *bio)
 	bio_truncate(bio, maxsector << 9);
 }
 
+static bool __bio_put(struct bio *bio)
+{
+	if (!bio_flagged(bio, BIO_REFFED))
+		return true;
+
+	BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
+
+	/*
+	 * last put frees it
+	 */
+	return atomic_dec_and_test(&bio->__bi_cnt);
+}
+
 /**
  * bio_put - release a reference to a bio
  * @bio:   bio to release reference to
@@ -601,17 +644,8 @@ void guard_bio_eod(struct bio *bio)
  **/
 void bio_put(struct bio *bio)
 {
-	if (!bio_flagged(bio, BIO_REFFED))
+	if (__bio_put(bio))
 		bio_free(bio);
-	else {
-		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
-
-		/*
-		 * last put frees it
-		 */
-		if (atomic_dec_and_test(&bio->__bi_cnt))
-			bio_free(bio);
-	}
 }
 EXPORT_SYMBOL(bio_put);
 
@@ -1595,6 +1629,76 @@ int bioset_init_from_src(struct bio_set *bs, struct bio_set *src)
 }
 EXPORT_SYMBOL(bioset_init_from_src);
 
+void bio_alloc_cache_init(struct bio_alloc_cache *cache)
+{
+	bio_list_init(&cache->free_list);
+	cache->nr = 0;
+}
+
+static void bio_alloc_cache_prune(struct bio_alloc_cache *cache,
+				  unsigned int nr)
+{
+	struct bio *bio;
+	unsigned int i;
+
+	i = 0;
+	while ((bio = bio_list_pop(&cache->free_list)) != NULL) {
+		cache->nr--;
+		bio_free(bio);
+		if (++i == nr)
+			break;
+	}
+}
+
+void bio_alloc_cache_destroy(struct bio_alloc_cache *cache)
+{
+	bio_alloc_cache_prune(cache, -1U);
+}
+
+struct bio *bio_cache_get(gfp_t gfp, unsigned short nr_vecs, struct bio_set *bs)
+{
+	struct bio_alloc_cache *cache = io_uring_bio_cache();
+	struct bio *bio;
+
+	if (!cache || nr_vecs > BIO_INLINE_VECS)
+		return NULL;
+	if (bio_list_empty(&cache->free_list)) {
+alloc:
+		if (bs)
+			return bio_alloc_bioset(gfp, nr_vecs, bs);
+		else
+			return bio_alloc(gfp, nr_vecs);
+	}
+
+	bio = bio_list_peek(&cache->free_list);
+	if (bs && bio->bi_pool != bs)
+		goto alloc;
+	bio_list_del_head(&cache->free_list, bio);
+	cache->nr--;
+	bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
+	return bio;
+}
+
+#define ALLOC_CACHE_MAX		512
+#define ALLOC_CACHE_SLACK	 64
+
+void bio_cache_put(struct bio *bio)
+{
+	struct bio_alloc_cache *cache = io_uring_bio_cache();
+
+	if (unlikely(!__bio_put(bio)))
+		return;
+	if (cache) {
+		bio_uninit(bio);
+		bio_list_add_head(&cache->free_list, bio);
+		cache->nr++;
+		if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
+			bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);
+	} else {
+		bio_free(bio);
+	}
+}
+
 static int __init init_bio(void)
 {
 	int i;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..b70c72365fa2 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -652,18 +652,22 @@ static inline struct bio *bio_list_peek(struct bio_list *bl)
 	return bl->head;
 }
 
-static inline struct bio *bio_list_pop(struct bio_list *bl)
+static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
 {
-	struct bio *bio = bl->head;
-
-	if (bio) {
+	if (head) {
 		bl->head = bl->head->bi_next;
 		if (!bl->head)
 			bl->tail = NULL;
 
-		bio->bi_next = NULL;
+		head->bi_next = NULL;
 	}
+}
 
+static inline struct bio *bio_list_pop(struct bio_list *bl)
+{
+	struct bio *bio = bl->head;
+
+	bio_list_del_head(bl, bio);
 	return bio;
 }
 
@@ -676,6 +680,16 @@ static inline struct bio *bio_list_get(struct bio_list *bl)
 	return bio;
 }
 
+struct bio_alloc_cache {
+	struct bio_list		free_list;
+	unsigned int		nr;
+};
+
+void bio_alloc_cache_init(struct bio_alloc_cache *);
+void bio_alloc_cache_destroy(struct bio_alloc_cache *);
+struct bio *bio_cache_get(gfp_t, unsigned short, struct bio_set *bs);
+void bio_cache_put(struct bio *);
+
 /*
  * Increment chain count for the bio. Make sure the CHAIN flag update
  * is visible before the raised count.
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 04b650bcbbe5..2fb53047638e 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -5,6 +5,8 @@
 #include <linux/sched.h>
 #include <linux/xarray.h>
 
+struct bio_alloc_cache;
+
 #if defined(CONFIG_IO_URING)
 struct sock *io_uring_get_socket(struct file *file);
 void __io_uring_cancel(struct files_struct *files);
@@ -40,4 +42,9 @@ static inline void io_uring_free(struct task_struct *tsk)
 }
 #endif
 
+static inline struct bio_alloc_cache *io_uring_bio_cache(void)
+{
+	return NULL;
+}
+
 #endif
-- 
2.32.0



* [PATCH 2/4] fs: add bio alloc cache kiocb flag
  2021-08-09 21:23 [PATCHSET 0/4] Enable bio recycling for polled IO Jens Axboe
  2021-08-09 21:23 ` [PATCH 1/4] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-09 21:23 ` Jens Axboe
  2021-08-09 21:24 ` [PATCH 3/4] io_uring: wire up bio allocation cache Jens Axboe
  2021-08-09 21:24 ` [PATCH 4/4] block: enable use of " Jens Axboe
  3 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-09 21:23 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, Jens Axboe

We'll be using this to implement a recycling cache for the bio units
used to do IO.

Signed-off-by: Jens Axboe <[email protected]>
---
 include/linux/fs.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 640574294216..2ac1b01a4902 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -319,6 +319,8 @@ enum rw_hint {
 /* iocb->ki_waitq is valid */
 #define IOCB_WAITQ		(1 << 19)
 #define IOCB_NOIO		(1 << 20)
+/* bio cache can be used */
+#define IOCB_ALLOC_CACHE	(1 << 21)
 
 struct kiocb {
 	struct file		*ki_filp;
-- 
2.32.0



* [PATCH 3/4] io_uring: wire up bio allocation cache
  2021-08-09 21:23 [PATCHSET 0/4] Enable bio recycling for polled IO Jens Axboe
  2021-08-09 21:23 ` [PATCH 1/4] bio: add allocation cache abstraction Jens Axboe
  2021-08-09 21:23 ` [PATCH 2/4] fs: add bio alloc cache kiocb flag Jens Axboe
@ 2021-08-09 21:24 ` Jens Axboe
  2021-08-10 12:25   ` Kanchan Joshi
  2021-08-09 21:24 ` [PATCH 4/4] block: enable use of " Jens Axboe
  3 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2021-08-09 21:24 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, Jens Axboe

Initialize a bio allocation cache, and mark it as being used for
IOPOLL. We could use it for non-polled IO as well, but it'd need some
locking and probably would negate much of the win in that case.

We start with IOPOLL, as completions are locked by the ctx lock anyway.
So no further locking is needed there.

This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.25M IOPS.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c            | 52 ++++++++++++++++++++++++++++++++++++++++
 include/linux/io_uring.h |  4 ++--
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 91a301bb1644..1d94a434b348 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -474,6 +474,10 @@ struct io_uring_task {
 	atomic_t		inflight_tracked;
 	atomic_t		in_idle;
 
+#ifdef CONFIG_BLOCK
+	struct bio_alloc_cache	bio_cache;
+#endif
+
 	spinlock_t		task_lock;
 	struct io_wq_work_list	task_list;
 	unsigned long		task_state;
@@ -2268,6 +2272,8 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		if (READ_ONCE(req->result) == -EAGAIN && resubmit &&
 		    !(req->flags & REQ_F_DONT_REISSUE)) {
 			req->iopoll_completed = 0;
+			/* Don't use cache for async retry, not locking safe */
+			req->rw.kiocb.ki_flags &= ~IOCB_ALLOC_CACHE;
 			req_ref_get(req);
 			io_req_task_queue_reissue(req);
 			continue;
@@ -2675,6 +2681,29 @@ static bool io_file_supports_nowait(struct io_kiocb *req, int rw)
 	return __io_file_supports_nowait(req->file, rw);
 }
 
+static void io_mark_alloc_cache(struct kiocb *kiocb)
+{
+#ifdef CONFIG_BLOCK
+	struct block_device *bdev = NULL;
+
+	if (S_ISBLK(file_inode(kiocb->ki_filp)->i_mode))
+		bdev = I_BDEV(kiocb->ki_filp->f_mapping->host);
+	else if (S_ISREG(file_inode(kiocb->ki_filp)->i_mode))
+		bdev = kiocb->ki_filp->f_inode->i_sb->s_bdev;
+
+	/*
+	 * If the lower level device doesn't support polled IO, then
+	 * we cannot safely use the alloc cache. This really should
+	 * be a failure case for polled IO...
+	 */
+	if (!bdev ||
+	    !test_bit(QUEUE_FLAG_POLL, &bdev_get_queue(bdev)->queue_flags))
+		return;
+
+	kiocb->ki_flags |= IOCB_ALLOC_CACHE;
+#endif /* CONFIG_BLOCK */
+}
+
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -2717,6 +2746,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 			return -EOPNOTSUPP;
 
 		kiocb->ki_flags |= IOCB_HIPRI;
+		io_mark_alloc_cache(kiocb);
 		kiocb->ki_complete = io_complete_rw_iopoll;
 		req->iopoll_completed = 0;
 	} else {
@@ -2783,6 +2813,8 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
 	if (check_reissue && (req->flags & REQ_F_REISSUE)) {
 		req->flags &= ~REQ_F_REISSUE;
 		if (io_resubmit_prep(req)) {
+			/* Don't use cache for async retry, not locking safe */
+			req->rw.kiocb.ki_flags &= ~IOCB_ALLOC_CACHE;
 			req_ref_get(req);
 			io_req_task_queue_reissue(req);
 		} else {
@@ -7966,10 +7998,17 @@ static int io_uring_alloc_task_context(struct task_struct *task,
 		return ret;
 	}
 
+#ifdef CONFIG_BLOCK
+	bio_alloc_cache_init(&tctx->bio_cache);
+#endif
+
 	tctx->io_wq = io_init_wq_offload(ctx, task);
 	if (IS_ERR(tctx->io_wq)) {
 		ret = PTR_ERR(tctx->io_wq);
 		percpu_counter_destroy(&tctx->inflight);
+#ifdef CONFIG_BLOCK
+		bio_alloc_cache_destroy(&tctx->bio_cache);
+#endif
 		kfree(tctx);
 		return ret;
 	}
@@ -7993,6 +8032,10 @@ void __io_uring_free(struct task_struct *tsk)
 	WARN_ON_ONCE(tctx->io_wq);
 	WARN_ON_ONCE(tctx->cached_refs);
 
+#ifdef CONFIG_BLOCK
+	bio_alloc_cache_destroy(&tctx->bio_cache);
+#endif
+
 	percpu_counter_destroy(&tctx->inflight);
 	kfree(tctx);
 	tsk->io_uring = NULL;
@@ -10247,6 +10290,15 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
 	return ret;
 }
 
+struct bio_alloc_cache *io_uring_bio_cache(void)
+{
+#ifdef CONFIG_BLOCK
+	if (current->io_uring)
+		return &current->io_uring->bio_cache;
+#endif
+	return NULL;
+}
+
 static int __init io_uring_init(void)
 {
 #define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 2fb53047638e..a9bab9bd51d1 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -11,6 +11,7 @@ struct bio_alloc_cache;
 struct sock *io_uring_get_socket(struct file *file);
 void __io_uring_cancel(struct files_struct *files);
 void __io_uring_free(struct task_struct *tsk);
+struct bio_alloc_cache *io_uring_bio_cache(void);
 
 static inline void io_uring_files_cancel(struct files_struct *files)
 {
@@ -40,11 +41,10 @@ static inline void io_uring_files_cancel(struct files_struct *files)
 static inline void io_uring_free(struct task_struct *tsk)
 {
 }
-#endif
-
 static inline struct bio_alloc_cache *io_uring_bio_cache(void)
 {
 	return NULL;
 }
+#endif
 
 #endif
-- 
2.32.0



* [PATCH 4/4] block: enable use of bio allocation cache
  2021-08-09 21:23 [PATCHSET 0/4] Enable bio recycling for polled IO Jens Axboe
                   ` (2 preceding siblings ...)
  2021-08-09 21:24 ` [PATCH 3/4] io_uring: wire up bio allocation cache Jens Axboe
@ 2021-08-09 21:24 ` Jens Axboe
  2021-08-10 12:39   ` Kanchan Joshi
  3 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2021-08-09 21:24 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, Jens Axboe

If a kiocb is marked as being valid for bio caching, then use that to
allocate (and free) a new bio if possible.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/block_dev.c | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9ef4f1fc2cb0..36a3d53326c0 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -327,6 +327,14 @@ static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
 	return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
 }
 
+static void dio_bio_put(struct blkdev_dio *dio)
+{
+	if (!dio->is_sync && (dio->iocb->ki_flags & IOCB_ALLOC_CACHE))
+		bio_cache_put(&dio->bio);
+	else
+		bio_put(&dio->bio);
+}
+
 static void blkdev_bio_end_io(struct bio *bio)
 {
 	struct blkdev_dio *dio = bio->bi_private;
@@ -362,7 +370,7 @@ static void blkdev_bio_end_io(struct bio *bio)
 		bio_check_pages_dirty(bio);
 	} else {
 		bio_release_pages(bio, false);
-		bio_put(bio);
+		dio_bio_put(dio);
 	}
 }
 
@@ -385,7 +393,14 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	    (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
-	bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
+	bio = NULL;
+	if (iocb->ki_flags & IOCB_ALLOC_CACHE) {
+		bio = bio_cache_get(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
+		if (!bio)
+			iocb->ki_flags &= ~IOCB_ALLOC_CACHE;
+	}
+	if (!bio)
+		bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
 
 	dio = container_of(bio, struct blkdev_dio, bio);
 	dio->is_sync = is_sync = is_sync_kiocb(iocb);
@@ -467,7 +482,14 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 		}
 
 		submit_bio(bio);
-		bio = bio_alloc(GFP_KERNEL, nr_pages);
+		bio = NULL;
+		if (iocb->ki_flags & IOCB_ALLOC_CACHE) {
+			bio = bio_cache_get(GFP_KERNEL, nr_pages, &fs_bio_set);
+			if (!bio)
+				iocb->ki_flags &= ~IOCB_ALLOC_CACHE;
+		}
+		if (!bio)
+			bio = bio_alloc(GFP_KERNEL, nr_pages);
 	}
 
 	if (!is_poll)
@@ -492,7 +514,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	if (likely(!ret))
 		ret = dio->size;
 
-	bio_put(&dio->bio);
+	dio_bio_put(dio);
 	return ret;
 }
 
-- 
2.32.0



* Re: [PATCH 3/4] io_uring: wire up bio allocation cache
  2021-08-09 21:24 ` [PATCH 3/4] io_uring: wire up bio allocation cache Jens Axboe
@ 2021-08-10 12:25   ` Kanchan Joshi
  2021-08-10 13:50     ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Kanchan Joshi @ 2021-08-10 12:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block

On Tue, Aug 10, 2021 at 6:40 AM Jens Axboe <[email protected]> wrote:
>
> Initialize a bio allocation cache, and mark it as being used for
> IOPOLL. We could use it for non-polled IO as well, but it'd need some
> locking and probably would negate much of the win in that case.

For regular (non-polled) IO, would it make sense to tie a bio cache to
each fixed-buffer slot (the ctx->user_bufs array)? One bio cache (along
with its lock) per slot. That may localize the lock contention, and the
contention would only arise when multiple IOs are issued from the same
fixed buffer concurrently.
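
Something along these lines, purely as a sketch of the idea (the field
names below are hypothetical, not proposed code):

	struct io_mapped_ubuf {
		/* ... existing fields ... */
		spinlock_t		bio_lock;	/* hypothetical: protects the per-slot cache */
		struct bio_alloc_cache	bio_cache;	/* hypothetical: bios recycled per buffer slot */
	};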

> We start with IOPOLL, as completions are locked by the ctx lock anyway.
> So no further locking is needed there.
>
> This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
> ~3.25M IOPS.



-- 
Kanchan


* Re: [PATCH 4/4] block: enable use of bio allocation cache
  2021-08-09 21:24 ` [PATCH 4/4] block: enable use of " Jens Axboe
@ 2021-08-10 12:39   ` Kanchan Joshi
  2021-08-10 13:56     ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Kanchan Joshi @ 2021-08-10 12:39 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block

On Tue, Aug 10, 2021 at 6:40 AM Jens Axboe <[email protected]> wrote:
>
> If a kiocb is marked as being valid for bio caching, then use that to
> allocate (and free) a new bio if possible.
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
>  fs/block_dev.c | 30 ++++++++++++++++++++++++++----
>  1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 9ef4f1fc2cb0..36a3d53326c0 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -327,6 +327,14 @@ static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
>         return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
>  }
>
> +static void dio_bio_put(struct blkdev_dio *dio)
> +{
> +       if (!dio->is_sync && (dio->iocb->ki_flags & IOCB_ALLOC_CACHE))
> +               bio_cache_put(&dio->bio);

I think the second check (against IOCB_ALLOC_CACHE) is sufficient here.


* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-09 21:23 ` [PATCH 1/4] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-10 13:15   ` Ming Lei
  2021-08-10 13:53     ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Ming Lei @ 2021-08-10 13:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block

Hi Jens,

On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
> Add a set of helpers that can encapsulate bio allocations, reusing them
> as needed. Caller must provide the necessary locking, if any is needed.
> The primary intended use case is polled IO from io_uring, which will not
> need any external locking.
> 
> Very simple - keeps a count of bio's in the cache, and maintains a max
> of 512 with a slack of 64. If we get above max + slack, we drop slack
> number of bio's.
> 
> The cache is intended to be per-task, and the user will need to supply
> the storage for it. As io_uring will be the only user right now, provide
> a hook that returns the cache there. Stub it out as NULL initially.

Is it possible for user space to submit & poll IO from different io_uring
tasks?

Then one bio may be allocated from bio cache of the submission task, and
freed to cache of the poll task?


Thanks, 
Ming



* Re: [PATCH 3/4] io_uring: wire up bio allocation cache
  2021-08-10 12:25   ` Kanchan Joshi
@ 2021-08-10 13:50     ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 13:50 UTC (permalink / raw)
  To: Kanchan Joshi; +Cc: io-uring, linux-block

On 8/10/21 6:25 AM, Kanchan Joshi wrote:
> On Tue, Aug 10, 2021 at 6:40 AM Jens Axboe <[email protected]> wrote:
>>
>> Initialize a bio allocation cache, and mark it as being used for
>> IOPOLL. We could use it for non-polled IO as well, but it'd need some
>> locking and probably would negate much of the win in that case.
> 
> For regular (non-polled) IO, would it make sense to tie a bio cache to
> each fixed-buffer slot (the ctx->user_bufs array)? One bio cache (along
> with its lock) per slot. That may localize the lock contention, and the
> contention would only arise when multiple IOs are issued from the same
> fixed buffer concurrently.

I don't think it's worth it - the slub overhead is already pretty low,
basically turning into a cmpxchg16 for the fast path. But that's a big
enough hit for polled IO of this magnitude that it's worth getting rid
of.

I've attempted bio caches before for non-polled, but the lock + irq
dance required for them just means it ends up being moot. Or even if
you have per-cpu caches, just doing irq enable/disable means you're
back at the same perf where you started, except now you've got extra
code...
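
Even a cache hit on that path ends up being something like the below -
just a sketch to illustrate the cost, since non-polled completions (and
hence frees) can come from IRQ context:

	unsigned long flags;

	local_irq_save(flags);
	bio = bio_list_pop(&cache->free_list);
	if (bio)
		cache->nr--;
	local_irq_restore(flags);

and the irq save/restore alone eats most of what the cache was supposed
to save.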

Here's an example from a few years ago:

https://git.kernel.dk/cgit/linux-block/log/?h=cpu-alloc-cache

-- 
Jens Axboe



* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 13:15   ` Ming Lei
@ 2021-08-10 13:53     ` Jens Axboe
  2021-08-10 14:24       ` Jens Axboe
  0 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 13:53 UTC (permalink / raw)
  To: Ming Lei; +Cc: io-uring, linux-block

On 8/10/21 7:15 AM, Ming Lei wrote:
> Hi Jens,
> 
> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
>> Add a set of helpers that can encapsulate bio allocations, reusing them
>> as needed. Caller must provide the necessary locking, if any is needed.
>> The primary intended use case is polled IO from io_uring, which will not
>> need any external locking.
>>
>> Very simple - keeps a count of bio's in the cache, and maintains a max
>> of 512 with a slack of 64. If we get above max + slack, we drop slack
>> number of bio's.
>>
>> The cache is intended to be per-task, and the user will need to supply
>> the storage for it. As io_uring will be the only user right now, provide
>> a hook that returns the cache there. Stub it out as NULL initially.
> 
> Is it possible for user space to submit & poll IO from different io_uring
> tasks?
> 
> Then one bio may be allocated from bio cache of the submission task, and
> freed to cache of the poll task?

Yes that is possible, and yes that would not benefit from this cache
at all. The previous version would work just fine with that, as the
cache is just under the ring lock and hence you can share it between
tasks.

I wonder if the niftier solution here is to retain the cache in the
ring still, yet have the pointer be per-task. So basically the setup
that this version does, except we store the cache itself in the ring.
I'll give that a whirl, should be a minor change, and it'll work per
ring instead then like before.

-- 
Jens Axboe



* Re: [PATCH 4/4] block: enable use of bio allocation cache
  2021-08-10 12:39   ` Kanchan Joshi
@ 2021-08-10 13:56     ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 13:56 UTC (permalink / raw)
  To: Kanchan Joshi; +Cc: io-uring, linux-block

On 8/10/21 6:39 AM, Kanchan Joshi wrote:
> On Tue, Aug 10, 2021 at 6:40 AM Jens Axboe <[email protected]> wrote:
>>
>> If a kiocb is marked as being valid for bio caching, then use that to
>> allocate (and free) a new bio if possible.
>>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>>  fs/block_dev.c | 30 ++++++++++++++++++++++++++----
>>  1 file changed, 26 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/block_dev.c b/fs/block_dev.c
>> index 9ef4f1fc2cb0..36a3d53326c0 100644
>> --- a/fs/block_dev.c
>> +++ b/fs/block_dev.c
>> @@ -327,6 +327,14 @@ static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
>>         return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
>>  }
>>
>> +static void dio_bio_put(struct blkdev_dio *dio)
>> +{
>> +       if (!dio->is_sync && (dio->iocb->ki_flags & IOCB_ALLOC_CACHE))
>> +               bio_cache_put(&dio->bio);
> 
> I think the second check (against IOCB_ALLOC_CACHE) is sufficient here.

Yes probably don't need the sync check here, I'll kill it.

-- 
Jens Axboe



* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 13:53     ` Jens Axboe
@ 2021-08-10 14:24       ` Jens Axboe
  2021-08-10 14:48         ` Jens Axboe
  2021-08-10 15:54         ` Kanchan Joshi
  0 siblings, 2 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 14:24 UTC (permalink / raw)
  To: Ming Lei; +Cc: io-uring, linux-block

On 8/10/21 7:53 AM, Jens Axboe wrote:
> On 8/10/21 7:15 AM, Ming Lei wrote:
>> Hi Jens,
>>
>> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
>>> Add a set of helpers that can encapsulate bio allocations, reusing them
>>> as needed. Caller must provide the necessary locking, if any is needed.
>>> The primary intended use case is polled IO from io_uring, which will not
>>> need any external locking.
>>>
>>> Very simple - keeps a count of bio's in the cache, and maintains a max
>>> of 512 with a slack of 64. If we get above max + slack, we drop slack
>>> number of bio's.
>>>
>>> The cache is intended to be per-task, and the user will need to supply
>>> the storage for it. As io_uring will be the only user right now, provide
>>> a hook that returns the cache there. Stub it out as NULL initially.
>>
>> Is it possible for user space to submit & poll IO from different io_uring
>> tasks?
>>
>> Then one bio may be allocated from bio cache of the submission task, and
>> freed to cache of the poll task?
> 
> Yes that is possible, and yes that would not benefit from this cache
> at all. The previous version would work just fine with that, as the
> cache is just under the ring lock and hence you can share it between
> tasks.
> 
> I wonder if the niftier solution here is to retain the cache in the
> ring still, yet have the pointer be per-task. So basically the setup
> that this version does, except we store the cache itself in the ring.
> I'll give that a whirl, should be a minor change, and it'll work per
> ring instead then like before.

That won't work, as we'd have to do a ctx lookup (which would defeat the
purpose), and we don't even have anything to key off of at that point...

The current approach seems like the only viable one, or adding a member
to kiocb so we can pass in the cache in question. The latter did work
just fine, but I really dislike the fact that it's growing the kiocb to
more than a cacheline.

-- 
Jens Axboe



* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 14:24       ` Jens Axboe
@ 2021-08-10 14:48         ` Jens Axboe
  2021-08-10 15:35           ` Jens Axboe
  2021-08-10 15:54         ` Kanchan Joshi
  1 sibling, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 14:48 UTC (permalink / raw)
  To: Ming Lei; +Cc: io-uring, linux-block

On 8/10/21 8:24 AM, Jens Axboe wrote:
> On 8/10/21 7:53 AM, Jens Axboe wrote:
>> On 8/10/21 7:15 AM, Ming Lei wrote:
>>> Hi Jens,
>>>
>>> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
>>>> Add a set of helpers that can encapsulate bio allocations, reusing them
>>>> as needed. Caller must provide the necessary locking, if any is needed.
>>>> The primary intended use case is polled IO from io_uring, which will not
>>>> need any external locking.
>>>>
>>>> Very simple - keeps a count of bio's in the cache, and maintains a max
>>>> of 512 with a slack of 64. If we get above max + slack, we drop slack
>>>> number of bio's.
>>>>
>>>> The cache is intended to be per-task, and the user will need to supply
>>>> the storage for it. As io_uring will be the only user right now, provide
>>>> a hook that returns the cache there. Stub it out as NULL initially.
>>>
>>> Is it possible for user space to submit & poll IO from different io_uring
>>> tasks?
>>>
>>> Then one bio may be allocated from bio cache of the submission task, and
>>> freed to cache of the poll task?
>>
>> Yes that is possible, and yes that would not benefit from this cache
>> at all. The previous version would work just fine with that, as the
>> cache is just under the ring lock and hence you can share it between
>> tasks.
>>
>> I wonder if the niftier solution here is to retain the cache in the
>> ring still, yet have the pointer be per-task. So basically the setup
>> that this version does, except we store the cache itself in the ring.
>> I'll give that a whirl, should be a minor change, and it'll work per
>> ring instead then like before.
> 
> That won't work, as we'd have to do a ctx lookup (which would defeat the
> purpose), and we don't even have anything to key off of at that point...
> 
> The current approach seems like the only viable one, or adding a member
> to kiocb so we can pass in the cache in question. The latter did work
> just fine, but I really dislike the fact that it's growing the kiocb to
> more than a cacheline.

One potential way around this is to store the bio cache pointer in the
iov_iter. Each consumer will setup a new struct to hold the bio etc, so
we can continue storing it in there and have it for completion as well.

Upside of that is that we retain the per-ring cache, instead of
per-task, and iov_iter has room to hold the pointer without getting near
the cacheline size yet.

The downside is that it's kind of odd to store it in the iov_iter, and
I'm sure that Al would hate it. Does seem like the best option though,
in terms of getting the storage for the cache "for free".
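
Roughly this kind of thing, as an untested sketch (the iov_iter field and
the per-ring cache member are made up for illustration):

	struct iov_iter {
		/* ... existing fields ... */
		struct bio_alloc_cache	*bio_cache;	/* hypothetical: per-ring cache, or NULL */
	};

	/* io_uring rw prep, under the ring lock */
	iter->bio_cache = &ctx->bio_cache;

__blkdev_direct_IO() could then pull the cache off the iter instead of
going through current->io_uring.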

-- 
Jens Axboe



* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 14:48         ` Jens Axboe
@ 2021-08-10 15:35           ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 15:35 UTC (permalink / raw)
  To: Ming Lei; +Cc: io-uring, linux-block

On 8/10/21 8:48 AM, Jens Axboe wrote:
> On 8/10/21 8:24 AM, Jens Axboe wrote:
>> On 8/10/21 7:53 AM, Jens Axboe wrote:
>>> On 8/10/21 7:15 AM, Ming Lei wrote:
>>>> Hi Jens,
>>>>
>>>> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
>>>>> Add a set of helpers that can encapsulate bio allocations, reusing them
>>>>> as needed. Caller must provide the necessary locking, if any is needed.
>>>>> The primary intended use case is polled IO from io_uring, which will not
>>>>> need any external locking.
>>>>>
>>>>> Very simple - keeps a count of bio's in the cache, and maintains a max
>>>>> of 512 with a slack of 64. If we get above max + slack, we drop slack
>>>>> number of bio's.
>>>>>
>>>>> The cache is intended to be per-task, and the user will need to supply
>>>>> the storage for it. As io_uring will be the only user right now, provide
>>>>> a hook that returns the cache there. Stub it out as NULL initially.
>>>>
>>>> Is it possible for user space to submit & poll IO from different io_uring
>>>> tasks?
>>>>
>>>> Then one bio may be allocated from bio cache of the submission task, and
>>>> freed to cache of the poll task?
>>>
>>> Yes that is possible, and yes that would not benefit from this cache
>>> at all. The previous version would work just fine with that, as the
>>> cache is just under the ring lock and hence you can share it between
>>> tasks.
>>>
>>> I wonder if the niftier solution here is to retain the cache in the
>>> ring still, yet have the pointer be per-task. So basically the setup
>>> that this version does, except we store the cache itself in the ring.
>>> I'll give that a whirl, should be a minor change, and it'll work per
>>> ring instead then like before.
>>
>> That won't work, as we'd have to do a ctx lookup (which would defeat the
>> purpose), and we don't even have anything to key off of at that point...
>>
>> The current approach seems like the only viable one, or adding a member
>> to kiocb so we can pass in the cache in question. The latter did work
>> just fine, but I really dislike the fact that it's growing the kiocb to
>> more than a cacheline.
> 
> One potential way around this is to store the bio cache pointer in the
> iov_iter. Each consumer will setup a new struct to hold the bio etc, so
> we can continue storing it in there and have it for completion as well.
> 
> Upside of that is that we retain the per-ring cache, instead of
> per-task, and iov_iter has room to hold the pointer without getting near
> the cacheline size yet.
> 
> The downside is that it's kind of odd to store it in the iov_iter, and
> I'm sure that Al would hate it. Does seem like the best option though,
> in terms of getting the storage for the cache "for free".

Here's that approach:

https://git.kernel.dk/cgit/linux-block/log/?h=io_uring-bio-cache.3

totally untested so far.

-- 
Jens Axboe



* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 14:24       ` Jens Axboe
  2021-08-10 14:48         ` Jens Axboe
@ 2021-08-10 15:54         ` Kanchan Joshi
  2021-08-10 15:58           ` Jens Axboe
  1 sibling, 1 reply; 16+ messages in thread
From: Kanchan Joshi @ 2021-08-10 15:54 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Ming Lei, io-uring, linux-block

On Tue, Aug 10, 2021 at 8:18 PM Jens Axboe <[email protected]> wrote:
>
> On 8/10/21 7:53 AM, Jens Axboe wrote:
> > On 8/10/21 7:15 AM, Ming Lei wrote:
> >> Hi Jens,
> >>
> >> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
> >>> Add a set of helpers that can encapsulate bio allocations, reusing them
> >>> as needed. Caller must provide the necessary locking, if any is needed.
> >>> The primary intended use case is polled IO from io_uring, which will not
> >>> need any external locking.
> >>>
> >>> Very simple - keeps a count of bio's in the cache, and maintains a max
> >>> of 512 with a slack of 64. If we get above max + slack, we drop slack
> >>> number of bio's.
> >>>
> >>> The cache is intended to be per-task, and the user will need to supply
> >>> the storage for it. As io_uring will be the only user right now, provide
> >>> a hook that returns the cache there. Stub it out as NULL initially.
> >>
> >> Is it possible for user space to submit & poll IO from different io_uring
> >> tasks?
> >>
> >> Then one bio may be allocated from bio cache of the submission task, and
> >> freed to cache of the poll task?
> >
> > Yes that is possible, and yes that would not benefit from this cache
> > at all. The previous version would work just fine with that, as the
> > cache is just under the ring lock and hence you can share it between
> > tasks.
> >
> > I wonder if the niftier solution here is to retain the cache in the
> > ring still, yet have the pointer be per-task. So basically the setup
> > that this version does, except we store the cache itself in the ring.
> > I'll give that a whirl, should be a minor change, and it'll work per
> > ring instead then like before.
>
> That won't work, as we'd have to do a ctx lookup (which would defeat the
> purpose), and we don't even have anything to key off of at that point...
>
> The current approach seems like the only viable one, or adding a member
> to kiocb so we can pass in the cache in question. The latter did work
> just fine, but I really dislike the fact that it's growing the kiocb to
> more than a cacheline.
>
Still under a cacheline, it seems. kiocb took 48 bytes, and adding a
bio-cache pointer made it 56.

-- 
Kanchan


* Re: [PATCH 1/4] bio: add allocation cache abstraction
  2021-08-10 15:54         ` Kanchan Joshi
@ 2021-08-10 15:58           ` Jens Axboe
  0 siblings, 0 replies; 16+ messages in thread
From: Jens Axboe @ 2021-08-10 15:58 UTC (permalink / raw)
  To: Kanchan Joshi; +Cc: Ming Lei, io-uring, linux-block

On 8/10/21 9:54 AM, Kanchan Joshi wrote:
> On Tue, Aug 10, 2021 at 8:18 PM Jens Axboe <[email protected]> wrote:
>>
>> On 8/10/21 7:53 AM, Jens Axboe wrote:
>>> On 8/10/21 7:15 AM, Ming Lei wrote:
>>>> Hi Jens,
>>>>
>>>> On Mon, Aug 09, 2021 at 03:23:58PM -0600, Jens Axboe wrote:
>>>>> Add a set of helpers that can encapsulate bio allocations, reusing them
>>>>> as needed. Caller must provide the necessary locking, if any is needed.
>>>>> The primary intended use case is polled IO from io_uring, which will not
>>>>> need any external locking.
>>>>>
>>>>> Very simple - keeps a count of bio's in the cache, and maintains a max
>>>>> of 512 with a slack of 64. If we get above max + slack, we drop slack
>>>>> number of bio's.
>>>>>
>>>>> The cache is intended to be per-task, and the user will need to supply
>>>>> the storage for it. As io_uring will be the only user right now, provide
>>>>> a hook that returns the cache there. Stub it out as NULL initially.
>>>>
>>>> Is it possible for user space to submit & poll IO from different io_uring
>>>> tasks?
>>>>
>>>> Then one bio may be allocated from bio cache of the submission task, and
>>>> freed to cache of the poll task?
>>>
>>> Yes that is possible, and yes that would not benefit from this cache
>>> at all. The previous version would work just fine with that, as the
>>> cache is just under the ring lock and hence you can share it between
>>> tasks.
>>>
>>> I wonder if the niftier solution here is to retain the cache in the
>>> ring still, yet have the pointer be per-task. So basically the setup
>>> that this version does, except we store the cache itself in the ring.
>>> I'll give that a whirl, should be a minor change, and it'll work per
>>> ring instead then like before.
>>
>> That won't work, as we'd have to do a ctx lookup (which would defeat the
>> purpose), and we don't even have anything to key off of at that point...
>>
>> The current approach seems like the only viable one, or adding a member
>> to kiocb so we can pass in the cache in question. The latter did work
>> just fine, but I really dislike the fact that it's growing the kiocb to
>> more than a cacheline.
>>
> Still under a cacheline, it seems. kiocb took 48 bytes, and adding a
> bio-cache pointer made it 56.

Huh yes, I think I'm mixing up the fact that we embed kiocb and it takes
req->rw over a cacheline, but I did put a fix on top for that one.

I guess we can ignore that then and just shove it in the kiocb, at the
end.

-- 
Jens Axboe



Thread overview: 16+ messages
2021-08-09 21:23 [PATCHSET 0/4] Enable bio recycling for polled IO Jens Axboe
2021-08-09 21:23 ` [PATCH 1/4] bio: add allocation cache abstraction Jens Axboe
2021-08-10 13:15   ` Ming Lei
2021-08-10 13:53     ` Jens Axboe
2021-08-10 14:24       ` Jens Axboe
2021-08-10 14:48         ` Jens Axboe
2021-08-10 15:35           ` Jens Axboe
2021-08-10 15:54         ` Kanchan Joshi
2021-08-10 15:58           ` Jens Axboe
2021-08-09 21:23 ` [PATCH 2/4] fs: add bio alloc cache kiocb flag Jens Axboe
2021-08-09 21:24 ` [PATCH 3/4] io_uring: wire up bio allocation cache Jens Axboe
2021-08-10 12:25   ` Kanchan Joshi
2021-08-10 13:50     ` Jens Axboe
2021-08-09 21:24 ` [PATCH 4/4] block: enable use of " Jens Axboe
2021-08-10 12:39   ` Kanchan Joshi
2021-08-10 13:56     ` Jens Axboe
