public inbox for [email protected]
* [PATCHSET v4 0/6] Enable bio recycling for polled IO
@ 2021-08-11 19:35 Jens Axboe
  2021-08-11 19:35 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch

Hi,

This is v4 of this patchset, and Yet Another method of achieving the
same goal. This one moves in the direction of my old cpu-alloc-cache
branch, where the caches are just per-cpu. The trouble with per-cpu
caches is that we need to make them specific to polled IO so we can
drop the IRQ safety; otherwise it's not a real win and we're better off
just using the slab allocator smarts. This is combined with Christoph's
idea to make the cache per bio_set, and it retains the kiocb flagging
that lets the IO issuer tell the lower layers whether the cache can be
safely used or not.

Another change from the last version is that we can now greatly
simplify the io_uring side, as we don't need locking for the cache and
async retries are no longer interesting there. This is combined with a
block layer change that clears BIO_PERCPU_CACHE whenever we clear the
HIPRI flag.

The tl;dr here is that we get about a 10% bump in polled performance
with this patchset, as we can recycle bio structures essentially for
free. Outside of that, explanations are in each patch. I've also got an
iomap patch, but I'm keeping this to a single user until there's
agreement on the direction.
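
As a rough sketch of how the pieces fit together (condensed and
illustrative only, not the literal code from the patches):

	/* issuer side (io_uring): only polled IO marks the kiocb */
	kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;

	/* alloc side: pop from the per-cpu cache of the bio_set if the
	 * kiocb is flagged and the cache is enabled, else fall back to
	 * the normal bio_alloc_bioset() path */
	bio = bio_alloc_kiocb(iocb, GFP_KERNEL, nr_pages, &blkdev_dio_pool);

	/* put side: a bio flagged BIO_PERCPU_CACHE goes back onto the
	 * per-cpu free list instead of being freed to the slab */
	bio_put(bio);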

Against for-5.15/io_uring, and can also be found in my
io_uring-bio-cache.4 branch.

 block/bio.c                | 170 +++++++++++++++++++++++++++++++++----
 block/blk-core.c           |   5 +-
 fs/block_dev.c             |   6 +-
 fs/io_uring.c              |   2 +-
 include/linux/bio.h        |  23 +++--
 include/linux/blk_types.h  |   1 +
 include/linux/cpuhotplug.h |   1 +
 include/linux/fs.h         |   2 +
 8 files changed, 185 insertions(+), 25 deletions(-)

-- 
Jens Axboe




* [PATCH 1/6] bio: optimize initialization of a bio
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-12  6:51   ` Christoph Hellwig
  2021-08-11 19:35 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

The memset() used is measurably slower in targeted benchmarks. Get rid
of it and fill in the bio manually, in a separate helper.

Signed-off-by: Jens Axboe <[email protected]>
---
 block/bio.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..0b1025899131 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -238,6 +238,35 @@ static void bio_free(struct bio *bio)
 	}
 }
 
+static inline void __bio_init(struct bio *bio)
+{
+	bio->bi_next = NULL;
+	bio->bi_bdev = NULL;
+	bio->bi_opf = 0;
+	bio->bi_flags = bio->bi_ioprio = bio->bi_write_hint = 0;
+	bio->bi_status = 0;
+	bio->bi_iter.bi_sector = 0;
+	bio->bi_iter.bi_size = 0;
+	bio->bi_iter.bi_idx = 0;
+	bio->bi_iter.bi_bvec_done = 0;
+	bio->bi_end_io = NULL;
+	bio->bi_private = NULL;
+#ifdef CONFIG_BLK_CGROUP
+	bio->bi_blkg = NULL;
+	bio->bi_issue.value = 0;
+#ifdef CONFIG_BLK_CGROUP_IOCOST
+	bio->bi_iocost_cost = 0;
+#endif
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	bio->bi_crypt_context = NULL;
+#endif
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	bio->bi_integrity = NULL;
+#endif
+	bio->bi_vcnt = 0;
+}
+
 /*
  * Users of this function have their own bio allocation. Subsequently,
  * they must remember to pair any call to bio_init() with bio_uninit()
@@ -246,7 +275,7 @@ static void bio_free(struct bio *bio)
 void bio_init(struct bio *bio, struct bio_vec *table,
 	      unsigned short max_vecs)
 {
-	memset(bio, 0, sizeof(*bio));
+	__bio_init(bio);
 	atomic_set(&bio->__bi_remaining, 1);
 	atomic_set(&bio->__bi_cnt, 1);
 
-- 
2.32.0



* [PATCH 2/6] fs: add kiocb alloc cache flag
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
  2021-08-11 19:35 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-12  6:54   ` Christoph Hellwig
  2021-08-11 19:35 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

If this kiocb can safely use the polled bio allocation cache, then this
flag must be set.
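
For example, a polled IO issuer (as wired up for io_uring later in this
series) would set it alongside IOCB_HIPRI:

	kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;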

Signed-off-by: Jens Axboe <[email protected]>
---
 include/linux/fs.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 640574294216..0dcc5de779c9 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -319,6 +319,8 @@ enum rw_hint {
 /* iocb->ki_waitq is valid */
 #define IOCB_WAITQ		(1 << 19)
 #define IOCB_NOIO		(1 << 20)
+/* can use bio alloc cache */
+#define IOCB_ALLOC_CACHE	(1 << 21)
 
 struct kiocb {
 	struct file		*ki_filp;
-- 
2.32.0



* [PATCH 3/6] bio: add allocation cache abstraction
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
  2021-08-11 19:35 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
  2021-08-11 19:35 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-12  7:01   ` Christoph Hellwig
  2021-08-11 19:35 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

Add a per-cpu cache to the bio_set for bio allocations, enabling us to
quickly recycle bios instead of going through the slab allocator. This
cache isn't IRQ safe, and hence is only really suitable for polled IO.

The scheme is very simple - keep a count of bios in the cache, and
maintain a max of 512 with a slack of 64. If we get above max + slack,
drop slack number of bios.
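
Condensed, the put side recycling policy looks like this (sketch of the
code below):

	/* bio_put() of a bio flagged BIO_PERCPU_CACHE */
	bio_uninit(bio);
	bio_list_add_head(&cache->free_list, bio);
	cache->nr++;
	if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)	/* 512 + 64 */
		bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);	/* drop 64 */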

Signed-off-by: Jens Axboe <[email protected]>
---
 block/bio.c                | 139 +++++++++++++++++++++++++++++++++----
 include/linux/bio.h        |  23 ++++--
 include/linux/blk_types.h  |   1 +
 include/linux/cpuhotplug.h |   1 +
 4 files changed, 144 insertions(+), 20 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 0b1025899131..689335c00937 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -25,6 +25,11 @@
 #include "blk.h"
 #include "blk-rq-qos.h"
 
+struct bio_alloc_cache {
+	struct bio_list		free_list;
+	unsigned int		nr;
+};
+
 static struct biovec_slab {
 	int nr_vecs;
 	char *name;
@@ -620,6 +625,69 @@ void guard_bio_eod(struct bio *bio)
 	bio_truncate(bio, maxsector << 9);
 }
 
+#define ALLOC_CACHE_MAX		512
+#define ALLOC_CACHE_SLACK	 64
+
+static void bio_alloc_cache_prune(struct bio_alloc_cache *cache,
+				  unsigned int nr)
+{
+	struct bio *bio;
+	unsigned int i;
+
+	i = 0;
+	while ((bio = bio_list_pop(&cache->free_list)) != NULL) {
+		cache->nr--;
+		bio_free(bio);
+		if (++i == nr)
+			break;
+	}
+}
+
+static int bio_cpu_dead(unsigned int cpu, struct hlist_node *node)
+{
+	struct bio_set *bs;
+
+	bs = hlist_entry_safe(node, struct bio_set, cpuhp_dead);
+	if (bs->cache) {
+		struct bio_alloc_cache *cache = per_cpu_ptr(bs->cache, cpu);
+
+		bio_alloc_cache_prune(cache, -1U);
+	}
+	return 0;
+}
+
+static void bio_alloc_cache_destroy(struct bio_set *bs)
+{
+	int cpu;
+
+	if (!bs->cache)
+		return;
+
+	preempt_disable();
+	cpuhp_state_remove_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
+	for_each_possible_cpu(cpu) {
+		struct bio_alloc_cache *cache;
+
+		cache = per_cpu_ptr(bs->cache, cpu);
+		bio_alloc_cache_prune(cache, -1U);
+	}
+	preempt_enable();
+	free_percpu(bs->cache);
+}
+
+static inline bool __bio_put(struct bio *bio)
+{
+	if (!bio_flagged(bio, BIO_REFFED))
+		return true;
+
+	BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
+
+	/*
+	 * last put frees it
+	 */
+	return atomic_dec_and_test(&bio->__bi_cnt);
+}
+
 /**
  * bio_put - release a reference to a bio
  * @bio:   bio to release reference to
@@ -630,16 +698,21 @@ void guard_bio_eod(struct bio *bio)
  **/
 void bio_put(struct bio *bio)
 {
-	if (!bio_flagged(bio, BIO_REFFED))
-		bio_free(bio);
-	else {
-		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
+	if (unlikely(!__bio_put(bio)))
+		return;
 
-		/*
-		 * last put frees it
-		 */
-		if (atomic_dec_and_test(&bio->__bi_cnt))
-			bio_free(bio);
+	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
+		struct bio_alloc_cache *cache;
+
+		bio_uninit(bio);
+		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
+		bio_list_add_head(&cache->free_list, bio);
+		cache->nr++;
+		if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
+			bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);
+		put_cpu();
+	} else {
+		bio_free(bio);
 	}
 }
 EXPORT_SYMBOL(bio_put);
@@ -1531,6 +1604,7 @@ int biovec_init_pool(mempool_t *pool, int pool_entries)
  */
 void bioset_exit(struct bio_set *bs)
 {
+	bio_alloc_cache_destroy(bs);
 	if (bs->rescue_workqueue)
 		destroy_workqueue(bs->rescue_workqueue);
 	bs->rescue_workqueue = NULL;
@@ -1592,12 +1666,18 @@ int bioset_init(struct bio_set *bs,
 	    biovec_init_pool(&bs->bvec_pool, pool_size))
 		goto bad;
 
-	if (!(flags & BIOSET_NEED_RESCUER))
-		return 0;
-
-	bs->rescue_workqueue = alloc_workqueue("bioset", WQ_MEM_RECLAIM, 0);
-	if (!bs->rescue_workqueue)
-		goto bad;
+	if (flags & BIOSET_NEED_RESCUER) {
+		bs->rescue_workqueue = alloc_workqueue("bioset",
+							WQ_MEM_RECLAIM, 0);
+		if (!bs->rescue_workqueue)
+			goto bad;
+	}
+	if (flags & BIOSET_PERCPU_CACHE) {
+		bs->cache = alloc_percpu(struct bio_alloc_cache);
+		if (!bs->cache)
+			goto bad;
+		cpuhp_state_add_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
+	}
 
 	return 0;
 bad:
@@ -1624,6 +1704,32 @@ int bioset_init_from_src(struct bio_set *bs, struct bio_set *src)
 }
 EXPORT_SYMBOL(bioset_init_from_src);
 
+struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
+			    unsigned short nr_vecs, struct bio_set *bs)
+{
+	struct bio_alloc_cache *cache = NULL;
+	struct bio *bio;
+
+	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
+		goto normal_alloc;
+
+	cache = per_cpu_ptr(bs->cache, get_cpu());
+	bio = bio_list_pop(&cache->free_list);
+	if (bio) {
+		cache->nr--;
+		put_cpu();
+		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
+		bio_set_flag(bio, BIO_PERCPU_CACHE);
+		return bio;
+	}
+	put_cpu();
+normal_alloc:
+	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
+	if (cache)
+		bio_set_flag(bio, BIO_PERCPU_CACHE);
+	return bio;
+}
+
 static int __init init_bio(void)
 {
 	int i;
@@ -1638,6 +1744,9 @@ static int __init init_bio(void)
 				SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
 	}
 
+	cpuhp_setup_state_multi(CPUHP_BIO_DEAD, "block/bio:dead", NULL,
+					bio_cpu_dead);
+
 	if (bioset_init(&fs_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS))
 		panic("bio: can't allocate bios\n");
 
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..5f4a741ee97d 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -401,6 +401,7 @@ static inline struct bio *bio_next_split(struct bio *bio, int sectors,
 enum {
 	BIOSET_NEED_BVECS = BIT(0),
 	BIOSET_NEED_RESCUER = BIT(1),
+	BIOSET_PERCPU_CACHE = BIT(2),
 };
 extern int bioset_init(struct bio_set *, unsigned int, unsigned int, int flags);
 extern void bioset_exit(struct bio_set *);
@@ -409,6 +410,8 @@ extern int bioset_init_from_src(struct bio_set *bs, struct bio_set *src);
 
 struct bio *bio_alloc_bioset(gfp_t gfp, unsigned short nr_iovecs,
 		struct bio_set *bs);
+struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t, unsigned short nr_vecs,
+		struct bio_set *bs);
 struct bio *bio_kmalloc(gfp_t gfp_mask, unsigned short nr_iovecs);
 extern void bio_put(struct bio *);
 
@@ -652,18 +655,22 @@ static inline struct bio *bio_list_peek(struct bio_list *bl)
 	return bl->head;
 }
 
-static inline struct bio *bio_list_pop(struct bio_list *bl)
+static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
 {
-	struct bio *bio = bl->head;
-
-	if (bio) {
+	if (head) {
 		bl->head = bl->head->bi_next;
 		if (!bl->head)
 			bl->tail = NULL;
 
-		bio->bi_next = NULL;
+		head->bi_next = NULL;
 	}
+}
+
+static inline struct bio *bio_list_pop(struct bio_list *bl)
+{
+	struct bio *bio = bl->head;
 
+	bio_list_del_head(bl, bio);
 	return bio;
 }
 
@@ -699,6 +706,12 @@ struct bio_set {
 	struct kmem_cache *bio_slab;
 	unsigned int front_pad;
 
+	/*
+	 * per-cpu bio alloc cache and notifier
+	 */
+	struct bio_alloc_cache __percpu *cache;
+	struct hlist_node cpuhp_dead;
+
 	mempool_t bio_pool;
 	mempool_t bvec_pool;
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 290f9061b29a..f68d4e8c775e 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -301,6 +301,7 @@ enum {
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
 	BIO_REMAPPED,
 	BIO_ZONE_WRITE_LOCKED,	/* Owns a zoned device zone write lock */
+	BIO_PERCPU_CACHE,	/* can participate in per-cpu alloc cache */
 	BIO_FLAG_LAST
 };
 
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index f39b34b13871..fe72c8d6c980 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -46,6 +46,7 @@ enum cpuhp_state {
 	CPUHP_ARM_OMAP_WAKE_DEAD,
 	CPUHP_IRQ_POLL_DEAD,
 	CPUHP_BLOCK_SOFTIRQ_DEAD,
+	CPUHP_BIO_DEAD,
 	CPUHP_ACPI_CPUDRV_DEAD,
 	CPUHP_S390_PFAULT_DEAD,
 	CPUHP_BLK_MQ_DEAD,
-- 
2.32.0



* [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
                   ` (2 preceding siblings ...)
  2021-08-11 19:35 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-11 19:35 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
  2021-08-11 19:35 ` [PATCH 6/6] block: enable use of bio allocation cache Jens Axboe
  5 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

The bio alloc cache relies on the fact that a polled bio will complete
in process context. Clear the cacheable flag if we disable polling for
a given bio.

Signed-off-by: Jens Axboe <[email protected]>
---
 block/blk-core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 04477697ee4b..c130206e9961 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -833,8 +833,11 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		}
 	}
 
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
+		/* can't support alloc cache if we turn off polling */
+		bio_clear_flag(bio, BIO_PERCPU_CACHE);
 		bio->bi_opf &= ~REQ_HIPRI;
+	}
 
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
-- 
2.32.0



* [PATCH 5/6] io_uring: enable use of bio alloc cache
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
                   ` (3 preceding siblings ...)
  2021-08-11 19:35 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-11 19:35 ` [PATCH 6/6] block: enable use of bio allocation cache Jens Axboe
  5 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

Mark polled IO as being safe for dipping into the bio allocation
cache, in case the targeted bio_set has it enabled.

This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.3M IOPS.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index bd3f8529fe6f..00da7bdd2b53 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2667,7 +2667,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		    !kiocb->ki_filp->f_op->iopoll)
 			return -EOPNOTSUPP;
 
-		kiocb->ki_flags |= IOCB_HIPRI;
+		kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
 		kiocb->ki_complete = io_complete_rw_iopoll;
 		req->iopoll_completed = 0;
 	} else {
-- 
2.32.0



* [PATCH 6/6] block: enable use of bio allocation cache
  2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
                   ` (4 preceding siblings ...)
  2021-08-11 19:35 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
  2021-08-12  7:04   ` Christoph Hellwig
  5 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

Initialize the bio_set used for IO with per-cpu bio caching enabled,
and use the new bio_alloc_kiocb() helper to dip into that cache.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/block_dev.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9ef4f1fc2cb0..798bb9d8f533 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -385,7 +385,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	    (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
-	bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
+	bio = bio_alloc_kiocb(iocb, GFP_KERNEL, nr_pages, &blkdev_dio_pool);
 
 	dio = container_of(bio, struct blkdev_dio, bio);
 	dio->is_sync = is_sync = is_sync_kiocb(iocb);
@@ -513,7 +513,9 @@ blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 
 static __init int blkdev_init(void)
 {
-	return bioset_init(&blkdev_dio_pool, 4, offsetof(struct blkdev_dio, bio), BIOSET_NEED_BVECS);
+	return bioset_init(&blkdev_dio_pool, 4,
+				offsetof(struct blkdev_dio, bio),
+				BIOSET_NEED_BVECS|BIOSET_PERCPU_CACHE);
 }
 module_init(blkdev_init);
 
-- 
2.32.0



* Re: [PATCH 1/6] bio: optimize initialization of a bio
  2021-08-11 19:35 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
@ 2021-08-12  6:51   ` Christoph Hellwig
  2021-08-12 15:29     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12  6:51 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, hch

On Wed, Aug 11, 2021 at 01:35:28PM -0600, Jens Axboe wrote:
> The memset() used is measurably slower in targeted benchmarks. Get rid
> of it and fill in the bio manually, in a separate helper.

If you have some numbers it would be great to throw them in here.

> +static inline void __bio_init(struct bio *bio)

Why is this split from bio_init, and what are the criteria for where an
initialization goes?

> +	bio->bi_flags = bio->bi_ioprio = bio->bi_write_hint = 0;

Please keep each initialization on a separate line.


* Re: [PATCH 2/6] fs: add kiocb alloc cache flag
  2021-08-11 19:35 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
@ 2021-08-12  6:54   ` Christoph Hellwig
  2021-08-12 14:52     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12  6:54 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, hch

On Wed, Aug 11, 2021 at 01:35:29PM -0600, Jens Axboe wrote:
> If this kiocb can safely use the polled bio allocation cache, then this
> flag must be set.

Looks fine, but it might be worth defining the semantics a little bit
better.


* Re: [PATCH 3/6] bio: add allocation cache abstraction
  2021-08-11 19:35 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-12  7:01   ` Christoph Hellwig
  2021-08-12 15:08     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12  7:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, hch

On Wed, Aug 11, 2021 at 01:35:30PM -0600, Jens Axboe wrote:
> +	struct bio *bio;
> +	unsigned int i;
> +
> +	i = 0;

Initialize at declaration time?

> +static inline bool __bio_put(struct bio *bio)
> +{
> +	if (!bio_flagged(bio, BIO_REFFED))
> +		return true;
> +
> +	BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
> +
> +	/*
> +	 * last put frees it
> +	 */
> +	return atomic_dec_and_test(&bio->__bi_cnt);
> +}

Please avoid this helper; we can trivially do the check inside bio_put:

	if (bio_flagged(bio, BIO_REFFED)) {
		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
		if (!atomic_dec_and_test(&bio->__bi_cnt))
			return;
	}

> -			bio_free(bio);
> +	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
> +		struct bio_alloc_cache *cache;
> +
> +		bio_uninit(bio);
> +		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
> +		bio_list_add_head(&cache->free_list, bio);
> +		cache->nr++;
> +		if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)

Folding the increment as a prefix here would make the increment and test
semantics a little more obvious.
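
That is, something like (untested):

	if (++cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
		bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);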

> +struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
> +			    unsigned short nr_vecs, struct bio_set *bs)
> +{
> +	struct bio_alloc_cache *cache = NULL;
> +	struct bio *bio;
> +
> +	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
> +		goto normal_alloc;
> +
> +	cache = per_cpu_ptr(bs->cache, get_cpu());
> +	bio = bio_list_pop(&cache->free_list);
> +	if (bio) {
> +		cache->nr--;
> +		put_cpu();
> +		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
> +		return bio;
> +	}
> +	put_cpu();
> +normal_alloc:
> +	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
> +	if (cache)
> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
> +	return bio;

The goto here is pretty obfuscating and adds an extra branch to the fast
path.

Also I don't think we need the gfp argument here at all - it should
always be GFP_KERNEL.  In fact I plan to kill these arguments from much
of the block layer as the places that need NOIO or NOFS semantics can
and should move to set that in the per-thread context.
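
For callers that really do need NOIO or NOFS semantics, that would mean
using the scoped memalloc_noio_save()/memalloc_nofs_save() helpers,
roughly:

	unsigned int noio_flags = memalloc_noio_save();

	/* GFP_KERNEL allocations in this scope behave as GFP_NOIO */
	bio = bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);

	memalloc_noio_restore(noio_flags);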

> -static inline struct bio *bio_list_pop(struct bio_list *bl)
> +static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
>  {
> -	struct bio *bio = bl->head;
> -
> -	if (bio) {
> +	if (head) {
>  		bl->head = bl->head->bi_next;
>  		if (!bl->head)
>  			bl->tail = NULL;
>  
> -		bio->bi_next = NULL;
> +		head->bi_next = NULL;
>  	}
> +}
> +
> +static inline struct bio *bio_list_pop(struct bio_list *bl)
> +{
> +	struct bio *bio = bl->head;
>  
> +	bio_list_del_head(bl, bio);
>  	return bio;
>  }

No need for this change.

>
>  
> @@ -699,6 +706,12 @@ struct bio_set {
>  	struct kmem_cache *bio_slab;
>  	unsigned int front_pad;
>  
> +	/*
> +	 * per-cpu bio alloc cache and notifier
> +	 */
> +	struct bio_alloc_cache __percpu *cache;
> +	struct hlist_node cpuhp_dead;
> +

I'd keep the hotplug list entry at the end instead of bloating the
cache line used in the fast path.
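
i.e. roughly:

	struct bio_set {
		struct kmem_cache *bio_slab;
		unsigned int front_pad;

		/* per-cpu alloc cache stays with the hot fields */
		struct bio_alloc_cache __percpu *cache;

		mempool_t bio_pool;
		mempool_t bvec_pool;
		...
		/* hotplug entry moved out of the fast path cache line */
		struct hlist_node cpuhp_dead;
	};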


* Re: [PATCH 6/6] block: enable use of bio allocation cache
  2021-08-11 19:35 ` [PATCH 6/6] block: enable use of bio allocation cache Jens Axboe
@ 2021-08-12  7:04   ` Christoph Hellwig
  2021-08-12 14:52     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12  7:04 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, hch

On Wed, Aug 11, 2021 at 01:35:33PM -0600, Jens Axboe wrote:
> Initialize the bio_set used for IO with per-cpu bio caching enabled,
> and use the new bio_alloc_kiocb() helper to dip into that cache.
> 
> Signed-off-by: Jens Axboe <[email protected]>

This seems to be pretty much my patch as-is..


* Re: [PATCH 6/6] block: enable use of bio allocation cache
  2021-08-12  7:04   ` Christoph Hellwig
@ 2021-08-12 14:52     ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 14:52 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-block

On 8/12/21 1:04 AM, Christoph Hellwig wrote:
> On Wed, Aug 11, 2021 at 01:35:33PM -0600, Jens Axboe wrote:
>> Initialize the bio_set used for IO with per-cpu bio caching enabled,
>> and use the new bio_alloc_kiocb() helper to dip into that cache.
>>
>> Signed-off-by: Jens Axboe <[email protected]>
> 
> This seems to be pretty much my patch as-is..

It is - sorry, I'll fix the attribution.

-- 
Jens Axboe



* Re: [PATCH 2/6] fs: add kiocb alloc cache flag
  2021-08-12  6:54   ` Christoph Hellwig
@ 2021-08-12 14:52     ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 14:52 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-block

On 8/12/21 12:54 AM, Christoph Hellwig wrote:
> On Wed, Aug 11, 2021 at 01:35:29PM -0600, Jens Axboe wrote:
>> If this kiocb can safely use the polled bio allocation cache, then this
>> flag must be set.
> 
> Looks fine, but it might be worth defining the semantics a little bit
> better.

Sure, I'll expand that explanation a bit.

-- 
Jens Axboe



* Re: [PATCH 3/6] bio: add allocation cache abstraction
  2021-08-12  7:01   ` Christoph Hellwig
@ 2021-08-12 15:08     ` Jens Axboe
  2021-08-12 15:18       ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 15:08 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-block

On 8/12/21 1:01 AM, Christoph Hellwig wrote:
> On Wed, Aug 11, 2021 at 01:35:30PM -0600, Jens Axboe wrote:
>> +	struct bio *bio;
>> +	unsigned int i;
>> +
>> +	i = 0;
> 
> Initialize at declaration time?

Sure, done.

>> +static inline bool __bio_put(struct bio *bio)
>> +{
>> +	if (!bio_flagged(bio, BIO_REFFED))
>> +		return true;
>> +
>> +	BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
>> +
>> +	/*
>> +	 * last put frees it
>> +	 */
>> +	return atomic_dec_and_test(&bio->__bi_cnt);
>> +}
> 
> Please avoid this helper; we can trivially do the check inside bio_put:
> 
> 	if (bio_flagged(bio, BIO_REFFED)) {
> 		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
> 		if (!atomic_dec_and_test(&bio->__bi_cnt))
> 			return;
> 	}

Done

>> -			bio_free(bio);
>> +	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
>> +		struct bio_alloc_cache *cache;
>> +
>> +		bio_uninit(bio);
>> +		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
>> +		bio_list_add_head(&cache->free_list, bio);
>> +		cache->nr++;
>> +		if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
> 
> Folding the increment as a prefix here would make the increment and test
> semantics a little more obvious.

I don't really care that deeply about it, but I generally prefer keeping
them separate as it makes it easier to read (for me). But I'll change it.

>> +struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
>> +			    unsigned short nr_vecs, struct bio_set *bs)
>> +{
>> +	struct bio_alloc_cache *cache = NULL;
>> +	struct bio *bio;
>> +
>> +	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
>> +		goto normal_alloc;
>> +
>> +	cache = per_cpu_ptr(bs->cache, get_cpu());
>> +	bio = bio_list_pop(&cache->free_list);
>> +	if (bio) {
>> +		cache->nr--;
>> +		put_cpu();
>> +		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>> +		return bio;
>> +	}
>> +	put_cpu();
>> +normal_alloc:
>> +	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
>> +	if (cache)
>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>> +	return bio;
> 
> The goto here is pretty obfuscating and adds an extra branch to the fast
> path.

I don't agree, and it's not the fast path - the fast path is popping
a bio off the list, not hitting the allocator.

> Also I don't think we need the gfp argument here at all - it should
> always be GFP_KERNEL.  In fact I plan to kill these arguments from much
> of the block layer as the places that need NOIO or NOFS semantics can
> and should move to set that in the per-thread context.

Yeah, I actually kept it on purpose for future users, but let's just
kill it as both potential use cases I have use GFP_KERNEL anyway.

>> -static inline struct bio *bio_list_pop(struct bio_list *bl)
>> +static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
>>  {
>> -	struct bio *bio = bl->head;
>> -
>> -	if (bio) {
>> +	if (head) {
>>  		bl->head = bl->head->bi_next;
>>  		if (!bl->head)
>>  			bl->tail = NULL;
>>  
>> -		bio->bi_next = NULL;
>> +		head->bi_next = NULL;
>>  	}
>> +}
>> +
>> +static inline struct bio *bio_list_pop(struct bio_list *bl)
>> +{
>> +	struct bio *bio = bl->head;
>>  
>> +	bio_list_del_head(bl, bio);
>>  	return bio;
>>  }
> 
> No need for this change.

Leftover from earlier series, killed it now.

>> @@ -699,6 +706,12 @@ struct bio_set {
>>  	struct kmem_cache *bio_slab;
>>  	unsigned int front_pad;
>>  
>> +	/*
>> +	 * per-cpu bio alloc cache and notifier
>> +	 */
>> +	struct bio_alloc_cache __percpu *cache;
>> +	struct hlist_node cpuhp_dead;
>> +
> 
> I'd keep the hotplug list entry at the end instead of bloating the
> cache line used in the fast path.

Good point, moved to the end.

-- 
Jens Axboe



* Re: [PATCH 3/6] bio: add allocation cache abstraction
  2021-08-12 15:08     ` Jens Axboe
@ 2021-08-12 15:18       ` Christoph Hellwig
  2021-08-12 15:26         ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12 15:18 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, io-uring, linux-block

On Thu, Aug 12, 2021 at 09:08:29AM -0600, Jens Axboe wrote:
> >> +	cache = per_cpu_ptr(bs->cache, get_cpu());
> >> +	bio = bio_list_pop(&cache->free_list);
> >> +	if (bio) {
> >> +		cache->nr--;
> >> +		put_cpu();
> >> +		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
> >> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
> >> +		return bio;
> >> +	}
> >> +	put_cpu();
> >> +normal_alloc:
> >> +	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
> >> +	if (cache)
> >> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
> >> +	return bio;
> > 
> > The goto here is pretty obfuscating and adds an extra branch to the fast
> > path.
> 
> I don't agree, and it's not the fast path - the fast path is popping
> a bio off the list, not hitting the allocator.

Oh, I see you special case the list pop return now.  Still seems much
easier to follow to avoid the goto, the cache initialization and the
conditional in the no bio found in the list case (see patch below).

diff --git a/block/bio.c b/block/bio.c
index 689335c00937..b42621cecbef 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1707,11 +1707,11 @@ EXPORT_SYMBOL(bioset_init_from_src);
 struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
 			    unsigned short nr_vecs, struct bio_set *bs)
 {
-	struct bio_alloc_cache *cache = NULL;
+	struct bio_alloc_cache *cache;
 	struct bio *bio;
 
 	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
-		goto normal_alloc;
+		return bio_alloc_bioset(gfp, nr_vecs, bs);
 
 	cache = per_cpu_ptr(bs->cache, get_cpu());
 	bio = bio_list_pop(&cache->free_list);
@@ -1723,10 +1723,8 @@ struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
 		return bio;
 	}
 	put_cpu();
-normal_alloc:
 	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
-	if (cache)
-		bio_set_flag(bio, BIO_PERCPU_CACHE);
+	bio_set_flag(bio, BIO_PERCPU_CACHE);
 	return bio;
 }
 


* Re: [PATCH 3/6] bio: add allocation cache abstraction
  2021-08-12 15:18       ` Christoph Hellwig
@ 2021-08-12 15:26         ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 15:26 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-block

On 8/12/21 9:18 AM, Christoph Hellwig wrote:
> On Thu, Aug 12, 2021 at 09:08:29AM -0600, Jens Axboe wrote:
>>>> +	cache = per_cpu_ptr(bs->cache, get_cpu());
>>>> +	bio = bio_list_pop(&cache->free_list);
>>>> +	if (bio) {
>>>> +		cache->nr--;
>>>> +		put_cpu();
>>>> +		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
>>>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>>>> +		return bio;
>>>> +	}
>>>> +	put_cpu();
>>>> +normal_alloc:
>>>> +	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
>>>> +	if (cache)
>>>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>>>> +	return bio;
>>>
>>> The goto here is pretty obfuscating and adds an extra branch to the fast
>>> path.
>>
>> I don't agree, and it's not the fast path - the fast path is popping
>> a bio off the list, not hitting the allocator.
> 
> Oh, I see you special case the list pop return now.  Still seems much
> easier to follow to avoid the goto, the cache initialization and the
> conditional in the no bio found in the list case (see patch below).
> 
> diff --git a/block/bio.c b/block/bio.c
> index 689335c00937..b42621cecbef 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1707,11 +1707,11 @@ EXPORT_SYMBOL(bioset_init_from_src);
>  struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
>  			    unsigned short nr_vecs, struct bio_set *bs)
>  {
> -	struct bio_alloc_cache *cache = NULL;
> +	struct bio_alloc_cache *cache;
>  	struct bio *bio;
>  
>  	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
> -		goto normal_alloc;
> +		return bio_alloc_bioset(gfp, nr_vecs, bs);
>  
>  	cache = per_cpu_ptr(bs->cache, get_cpu());
>  	bio = bio_list_pop(&cache->free_list);
> @@ -1723,10 +1723,8 @@ struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
>  		return bio;
>  	}
>  	put_cpu();
> -normal_alloc:
>  	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
> -	if (cache)
> -		bio_set_flag(bio, BIO_PERCPU_CACHE);
> +	bio_set_flag(bio, BIO_PERCPU_CACHE);
>  	return bio;
>  }

Sure, if that'll shut it down, I'll make the edit :-)

-- 
Jens Axboe



* Re: [PATCH 1/6] bio: optimize initialization of a bio
  2021-08-12  6:51   ` Christoph Hellwig
@ 2021-08-12 15:29     ` Jens Axboe
  0 siblings, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 15:29 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-block

On 8/12/21 12:51 AM, Christoph Hellwig wrote:
> On Wed, Aug 11, 2021 at 01:35:28PM -0600, Jens Axboe wrote:
>> The memset() used is measurably slower in targeted benchmarks. Get rid
>> of it and fill in the bio manually, in a separate helper.
> 
> If you have some numbers it would be great to throw them in here.

It's about 1% of the overhead of the alloc after the cache, which
comes later in the series.

Percent│    return __underlying_memset(p, c, size);
       │      lea    0x8(%r8),%rdi
       │    bio_alloc_kiocb():
  2.18 │      cmove  %rax,%r9
       │    memset():
       │      mov    %r8,%rcx
       │      and    $0xfffffffffffffff8,%rdi
       │      movq   $0x0,(%r8)
       │      sub    %rdi,%rcx
       │      add    $0x60,%ecx
       │      shr    $0x3,%ecx
 55.02 │      rep    stos %rax,%es:(%rdi)

This is on AMD and might look different on Intel, but the manual clear
seems like a nice win on both. As a minor detail, it avoids things like
re-setting bio->bi_pool for cached entries, as it never changes.


>> +static inline void __bio_init(struct bio *bio)
> 
> Why is this split from bio_init, and what are the criteria for where an
> initialization goes?

Got rid of the helper.

>> +	bio->bi_flags = bio->bi_ioprio = bio->bi_write_hint = 0;
> 
> Please keep each initialization on a separate line.

Done

-- 
Jens Axboe



* [PATCH 1/6] bio: optimize initialization of a bio
  2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
  2021-08-12 16:16   ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC (permalink / raw)
  To: io-uring; +Cc: linux-block, hch, Jens Axboe

The memset() used is measurably slower in targeted benchmarks, wasting
about 1% of the total runtime, or 50% of the (later) hot path cached
bio alloc. Get rid of it and fill in the bio manually.

Signed-off-by: Jens Axboe <[email protected]>
---
 block/bio.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..9bf98b877aba 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -246,7 +246,34 @@ static void bio_free(struct bio *bio)
 void bio_init(struct bio *bio, struct bio_vec *table,
 	      unsigned short max_vecs)
 {
-	memset(bio, 0, sizeof(*bio));
+	bio->bi_next = NULL;
+	bio->bi_bdev = NULL;
+	bio->bi_opf = 0;
+	bio->bi_flags = 0;
+	bio->bi_ioprio = 0;
+	bio->bi_write_hint = 0;
+	bio->bi_status = 0;
+	bio->bi_iter.bi_sector = 0;
+	bio->bi_iter.bi_size = 0;
+	bio->bi_iter.bi_idx = 0;
+	bio->bi_iter.bi_bvec_done = 0;
+	bio->bi_end_io = NULL;
+	bio->bi_private = NULL;
+#ifdef CONFIG_BLK_CGROUP
+	bio->bi_blkg = NULL;
+	bio->bi_issue.value = 0;
+#ifdef CONFIG_BLK_CGROUP_IOCOST
+	bio->bi_iocost_cost = 0;
+#endif
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	bio->bi_crypt_context = NULL;
+#endif
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	bio->bi_integrity = NULL;
+#endif
+	bio->bi_vcnt = 0;
+
 	atomic_set(&bio->__bi_remaining, 1);
 	atomic_set(&bio->__bi_cnt, 1);
 
-- 
2.32.0



* Re: [PATCH 1/6] bio: optimize initialization of a bio
  2021-08-12 15:41 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
@ 2021-08-12 16:16   ` Christoph Hellwig
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:16 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-block, hch

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

