* [PATCH 5/6] io_uring: enable use of bio alloc cache
2021-08-11 19:35 [PATCHSET v4 0/6] Enable bio recycling for polled IO Jens Axboe
@ 2021-08-11 19:35 ` Jens Axboe
0 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-11 19:35 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
Mark polled IO as being safe for dipping into the bio allocation
cache, in case the targeted bio_set has it enabled.
This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.3M IOPS.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index bd3f8529fe6f..00da7bdd2b53 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2667,7 +2667,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
!kiocb->ki_filp->f_op->iopoll)
return -EOPNOTSUPP;
- kiocb->ki_flags |= IOCB_HIPRI;
+ kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
kiocb->ki_complete = io_complete_rw_iopoll;
req->iopoll_completed = 0;
} else {
--
2.32.0
* [PATCHSET v5 0/6] Enable bio recycling for polled IO
@ 2021-08-12 15:41 Jens Axboe
2021-08-12 15:41 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
` (5 more replies)
0 siblings, 6 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch
Hi,
For v4, see posting here:
https://lore.kernel.org/io-uring/[email protected]/
3.5M+ IOPS per core:
axboe@amd ~/g/fio (master)> sudo taskset -c 0 t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -B1 /dev/nvme3n1
i 8, argc 9
Added file /dev/nvme3n1 (submitter 0)
sq_ring ptr = 0x0x7fdab1a8d000
sqes ptr = 0x0x7fdab1a8b000
cq_ring ptr = 0x0x7fdab1a89000
polled=1, fixedbufs=1, register_files=1, buffered=0 QD=128, sq_ring=128, cq_ring=256
submitter=1757
IOPS=3520608, IOS/call=32/31, inflight=47 (47)
IOPS=3514432, IOS/call=32/32, inflight=32 (32)
IOPS=3513440, IOS/call=32/31, inflight=128 (128)
IOPS=3507616, IOS/call=32/32, inflight=32 (32)
IOPS=3505984, IOS/call=32/32, inflight=32 (32)
IOPS=3511328, IOS/call=32/31, inflight=64 (64)
[snip]
Changes can also be found in my io_uring-bio-cache.5 branch, and sit
on top of for-5.15/io_uring.
block/bio.c | 164 +++++++++++++++++++++++++++++++++----
block/blk-core.c | 5 +-
fs/block_dev.c | 6 +-
fs/io_uring.c | 2 +-
include/linux/bio.h | 13 +++
include/linux/blk_types.h | 1 +
include/linux/cpuhotplug.h | 1 +
include/linux/fs.h | 2 +
8 files changed, 175 insertions(+), 19 deletions(-)
Changes since v4:
- Kill __bio_init() helper
- Kill __bio_put() helper
- Cleanup bio_alloc_kiocb()
- Expand commit messages
- Various little tweaks
- Add kerneldoc for bio_alloc_kiocb()
- Fix attribution on last patch
- Remove bio.h list manipulation leftovers
- Move cpuhp_dead notifier to end of bio_set
- Rebase on for-5.15/io_uring
--
Jens Axboe
* [PATCH 1/6] bio: optimize initialization of a bio
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
2021-08-12 16:16 ` Christoph Hellwig
2021-08-12 15:41 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
` (4 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
The memset() used is measurably slower in targeted benchmarks, wasting
about 1% of the total runtime, or 50% of the (later) hot path cached
bio alloc. Get rid of it and fill in the bio manually.
Signed-off-by: Jens Axboe <[email protected]>
---
block/bio.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..9bf98b877aba 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -246,7 +246,34 @@ static void bio_free(struct bio *bio)
void bio_init(struct bio *bio, struct bio_vec *table,
unsigned short max_vecs)
{
- memset(bio, 0, sizeof(*bio));
+ bio->bi_next = NULL;
+ bio->bi_bdev = NULL;
+ bio->bi_opf = 0;
+ bio->bi_flags = 0;
+ bio->bi_ioprio = 0;
+ bio->bi_write_hint = 0;
+ bio->bi_status = 0;
+ bio->bi_iter.bi_sector = 0;
+ bio->bi_iter.bi_size = 0;
+ bio->bi_iter.bi_idx = 0;
+ bio->bi_iter.bi_bvec_done = 0;
+ bio->bi_end_io = NULL;
+ bio->bi_private = NULL;
+#ifdef CONFIG_BLK_CGROUP
+ bio->bi_blkg = NULL;
+ bio->bi_issue.value = 0;
+#ifdef CONFIG_BLK_CGROUP_IOCOST
+ bio->bi_iocost_cost = 0;
+#endif
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ bio->bi_crypt_context = NULL;
+#endif
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ bio->bi_integrity = NULL;
+#endif
+ bio->bi_vcnt = 0;
+
atomic_set(&bio->__bi_remaining, 1);
atomic_set(&bio->__bi_cnt, 1);
--
2.32.0
* [PATCH 2/6] fs: add kiocb alloc cache flag
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
2021-08-12 15:41 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
2021-08-12 16:20 ` Christoph Hellwig
2021-08-12 15:41 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
` (3 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
If this kiocb can safely use the polled bio allocation cache, then this
flag must be set. Generally this can be set for polled IO, where we will
not see IRQ completions of the request.
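As a quick illustration (hypothetical caller shown here; the actual
io_uring hookup is patch 5/6 of this series), a submitter would only
set the flag together with IOCB_HIPRI:

	/*
	 * Sketch of preparing a polled kiocb. IOCB_ALLOC_CACHE is only
	 * safe alongside IOCB_HIPRI, since the cache is not IRQ safe
	 * and relies on completions running in process context.
	 */
	static void prep_polled_kiocb(struct kiocb *kiocb)
	{
		kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
	}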
Signed-off-by: Jens Axboe <[email protected]>
---
include/linux/fs.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 640574294216..0dcc5de779c9 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -319,6 +319,8 @@ enum rw_hint {
/* iocb->ki_waitq is valid */
#define IOCB_WAITQ (1 << 19)
#define IOCB_NOIO (1 << 20)
+/* can use bio alloc cache */
+#define IOCB_ALLOC_CACHE (1 << 21)
struct kiocb {
struct file *ki_filp;
--
2.32.0
* [PATCH 3/6] bio: add allocation cache abstraction
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
2021-08-12 15:41 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
2021-08-12 15:41 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
2021-08-12 16:32 ` Christoph Hellwig
2021-08-12 15:41 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
` (2 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
Add a per-cpu bio_set cache for bio allocations, enabling us to quickly
recycle them instead of going through the slab allocator. This cache
isn't IRQ safe, and hence is only really suitable for polled IO.
Very simple - keeps a count of bios in the cache, and maintains a max
of 512 with a slack of 64. If we get above max + slack, we drop the
slack number of bios.
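For illustration, a minimal usage sketch (the bio_set name is made up;
the real conversion of blkdev_dio_pool is patch 6/6 of this series):

	/* opt the bio_set into the per-cpu cache at init time */
	ret = bioset_init(&my_bio_set, BIO_POOL_SIZE, 0,
			  BIOSET_NEED_BVECS | BIOSET_PERCPU_CACHE);

	/*
	 * Allocate through bio_alloc_kiocb(). The cache is only used if
	 * the kiocb has IOCB_ALLOC_CACHE set and the bio fits in the
	 * inline vecs, otherwise this falls back to bio_alloc_bioset().
	 */
	bio = bio_alloc_kiocb(iocb, nr_vecs, &my_bio_set);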
Signed-off-by: Jens Axboe <[email protected]>
---
block/bio.c | 135 +++++++++++++++++++++++++++++++++----
include/linux/bio.h | 13 ++++
include/linux/blk_types.h | 1 +
include/linux/cpuhotplug.h | 1 +
4 files changed, 136 insertions(+), 14 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 9bf98b877aba..0fa2990018de 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -25,6 +25,11 @@
#include "blk.h"
#include "blk-rq-qos.h"
+struct bio_alloc_cache {
+ struct bio_list free_list;
+ unsigned int nr;
+};
+
static struct biovec_slab {
int nr_vecs;
char *name;
@@ -618,6 +623,55 @@ void guard_bio_eod(struct bio *bio)
bio_truncate(bio, maxsector << 9);
}
+#define ALLOC_CACHE_MAX 512
+#define ALLOC_CACHE_SLACK 64
+
+static void bio_alloc_cache_prune(struct bio_alloc_cache *cache,
+ unsigned int nr)
+{
+ unsigned int i = 0;
+ struct bio *bio;
+
+ while ((bio = bio_list_pop(&cache->free_list)) != NULL) {
+ cache->nr--;
+ bio_free(bio);
+ if (++i == nr)
+ break;
+ }
+}
+
+static int bio_cpu_dead(unsigned int cpu, struct hlist_node *node)
+{
+ struct bio_set *bs;
+
+ bs = hlist_entry_safe(node, struct bio_set, cpuhp_dead);
+ if (bs->cache) {
+ struct bio_alloc_cache *cache = per_cpu_ptr(bs->cache, cpu);
+
+ bio_alloc_cache_prune(cache, -1U);
+ }
+ return 0;
+}
+
+static void bio_alloc_cache_destroy(struct bio_set *bs)
+{
+ int cpu;
+
+ if (!bs->cache)
+ return;
+
+ preempt_disable();
+ cpuhp_state_remove_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
+ for_each_possible_cpu(cpu) {
+ struct bio_alloc_cache *cache;
+
+ cache = per_cpu_ptr(bs->cache, cpu);
+ bio_alloc_cache_prune(cache, -1U);
+ }
+ preempt_enable();
+ free_percpu(bs->cache);
+}
+
/**
* bio_put - release a reference to a bio
* @bio: bio to release reference to
@@ -628,16 +682,23 @@ void guard_bio_eod(struct bio *bio)
**/
void bio_put(struct bio *bio)
{
- if (!bio_flagged(bio, BIO_REFFED))
- bio_free(bio);
- else {
+ if (unlikely(bio_flagged(bio, BIO_REFFED))) {
BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
-
- /*
- * last put frees it
- */
- if (atomic_dec_and_test(&bio->__bi_cnt))
- bio_free(bio);
+ if (!atomic_dec_and_test(&bio->__bi_cnt))
+ return;
+ }
+
+ if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
+ struct bio_alloc_cache *cache;
+
+ bio_uninit(bio);
+ cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
+ bio_list_add_head(&cache->free_list, bio);
+ if (++cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
+ bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);
+ put_cpu();
+ } else {
+ bio_free(bio);
}
}
EXPORT_SYMBOL(bio_put);
@@ -1529,6 +1590,7 @@ int biovec_init_pool(mempool_t *pool, int pool_entries)
*/
void bioset_exit(struct bio_set *bs)
{
+ bio_alloc_cache_destroy(bs);
if (bs->rescue_workqueue)
destroy_workqueue(bs->rescue_workqueue);
bs->rescue_workqueue = NULL;
@@ -1590,12 +1652,18 @@ int bioset_init(struct bio_set *bs,
biovec_init_pool(&bs->bvec_pool, pool_size))
goto bad;
- if (!(flags & BIOSET_NEED_RESCUER))
- return 0;
-
- bs->rescue_workqueue = alloc_workqueue("bioset", WQ_MEM_RECLAIM, 0);
- if (!bs->rescue_workqueue)
- goto bad;
+ if (flags & BIOSET_NEED_RESCUER) {
+ bs->rescue_workqueue = alloc_workqueue("bioset",
+ WQ_MEM_RECLAIM, 0);
+ if (!bs->rescue_workqueue)
+ goto bad;
+ }
+ if (flags & BIOSET_PERCPU_CACHE) {
+ bs->cache = alloc_percpu(struct bio_alloc_cache);
+ if (!bs->cache)
+ goto bad;
+ cpuhp_state_add_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
+ }
return 0;
bad:
@@ -1622,6 +1690,42 @@ int bioset_init_from_src(struct bio_set *bs, struct bio_set *src)
}
EXPORT_SYMBOL(bioset_init_from_src);
+/**
+ * bio_alloc_kiocb - Allocate a bio from bio_set based on kiocb
+ * @kiocb: kiocb describing the IO
+ * @bs: bio_set to allocate from
+ *
+ * Description:
+ * Like @bio_alloc_bioset, but pass in the kiocb. The kiocb is only
+ * used to check if we should dip into the per-cpu bio_set allocation
+ * cache. The allocation uses GFP_KERNEL internally.
+ *
+ */
+struct bio *bio_alloc_kiocb(struct kiocb *kiocb, unsigned short nr_vecs,
+ struct bio_set *bs)
+{
+ struct bio_alloc_cache *cache;
+ struct bio *bio;
+
+ if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
+ return bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);
+
+ cache = per_cpu_ptr(bs->cache, get_cpu());
+ bio = bio_list_pop(&cache->free_list);
+ if (bio) {
+ cache->nr--;
+ put_cpu();
+ bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
+ bio_set_flag(bio, BIO_PERCPU_CACHE);
+ return bio;
+ }
+ put_cpu();
+ bio = bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);
+ bio_set_flag(bio, BIO_PERCPU_CACHE);
+ return bio;
+}
+EXPORT_SYMBOL_GPL(bio_alloc_kiocb);
+
static int __init init_bio(void)
{
int i;
@@ -1636,6 +1740,9 @@ static int __init init_bio(void)
SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
}
+ cpuhp_setup_state_multi(CPUHP_BIO_DEAD, "block/bio:dead", NULL,
+ bio_cpu_dead);
+
if (bioset_init(&fs_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS))
panic("bio: can't allocate bios\n");
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..89ad28213b1d 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -401,6 +401,7 @@ static inline struct bio *bio_next_split(struct bio *bio, int sectors,
enum {
BIOSET_NEED_BVECS = BIT(0),
BIOSET_NEED_RESCUER = BIT(1),
+ BIOSET_PERCPU_CACHE = BIT(2),
};
extern int bioset_init(struct bio_set *, unsigned int, unsigned int, int flags);
extern void bioset_exit(struct bio_set *);
@@ -409,6 +410,8 @@ extern int bioset_init_from_src(struct bio_set *bs, struct bio_set *src);
struct bio *bio_alloc_bioset(gfp_t gfp, unsigned short nr_iovecs,
struct bio_set *bs);
+struct bio *bio_alloc_kiocb(struct kiocb *kiocb, unsigned short nr_vecs,
+ struct bio_set *bs);
struct bio *bio_kmalloc(gfp_t gfp_mask, unsigned short nr_iovecs);
extern void bio_put(struct bio *);
@@ -699,6 +702,11 @@ struct bio_set {
struct kmem_cache *bio_slab;
unsigned int front_pad;
+ /*
+ * per-cpu bio alloc cache
+ */
+ struct bio_alloc_cache __percpu *cache;
+
mempool_t bio_pool;
mempool_t bvec_pool;
#if defined(CONFIG_BLK_DEV_INTEGRITY)
@@ -715,6 +723,11 @@ struct bio_set {
struct bio_list rescue_list;
struct work_struct rescue_work;
struct workqueue_struct *rescue_workqueue;
+
+ /*
+ * Hot un-plug notifier for the per-cpu cache, if used
+ */
+ struct hlist_node cpuhp_dead;
};
static inline bool bioset_initialized(struct bio_set *bs)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 290f9061b29a..f68d4e8c775e 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -301,6 +301,7 @@ enum {
BIO_TRACKED, /* set if bio goes through the rq_qos path */
BIO_REMAPPED,
BIO_ZONE_WRITE_LOCKED, /* Owns a zoned device zone write lock */
+ BIO_PERCPU_CACHE, /* can participate in per-cpu alloc cache */
BIO_FLAG_LAST
};
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index f39b34b13871..fe72c8d6c980 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -46,6 +46,7 @@ enum cpuhp_state {
CPUHP_ARM_OMAP_WAKE_DEAD,
CPUHP_IRQ_POLL_DEAD,
CPUHP_BLOCK_SOFTIRQ_DEAD,
+ CPUHP_BIO_DEAD,
CPUHP_ACPI_CPUDRV_DEAD,
CPUHP_S390_PFAULT_DEAD,
CPUHP_BLK_MQ_DEAD,
--
2.32.0
* [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
` (2 preceding siblings ...)
2021-08-12 15:41 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
2021-08-12 16:33 ` Christoph Hellwig
2021-08-12 17:31 ` Keith Busch
2021-08-12 15:41 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
2021-08-12 15:41 ` [PATCH 6/6] block: use the percpu bio cache in __blkdev_direct_IO Jens Axboe
5 siblings, 2 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
The bio alloc cache relies on the fact that a polled bio will complete
in process context; clear the cacheable flag if we disable polling for
a given bio.
Signed-off-by: Jens Axboe <[email protected]>
---
block/blk-core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 04477697ee4b..c130206e9961 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -833,8 +833,11 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
}
}
- if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+ if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
+ /* can't support alloc cache if we turn off polling */
+ bio_clear_flag(bio, BIO_PERCPU_CACHE);
bio->bi_opf &= ~REQ_HIPRI;
+ }
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
--
2.32.0
* [PATCH 5/6] io_uring: enable use of bio alloc cache
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
` (3 preceding siblings ...)
2021-08-12 15:41 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
2021-08-12 16:39 ` Christoph Hellwig
2021-08-12 15:41 ` [PATCH 6/6] block: use the percpu bio cache in __blkdev_direct_IO Jens Axboe
5 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Jens Axboe
Mark polled IO as being safe for dipping into the bio allocation
cache, in case the targeted bio_set has it enabled.
This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.3M IOPS.
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6c65c90131cb..ea387b0741b8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2736,7 +2736,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
!kiocb->ki_filp->f_op->iopoll)
return -EOPNOTSUPP;
- kiocb->ki_flags |= IOCB_HIPRI;
+ kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
kiocb->ki_complete = io_complete_rw_iopoll;
req->iopoll_completed = 0;
} else {
--
2.32.0
* [PATCH 6/6] block: use the percpu bio cache in __blkdev_direct_IO
2021-08-12 15:41 [PATCHSET v5 0/6] Enable bio recycling for polled IO Jens Axboe
` (4 preceding siblings ...)
2021-08-12 15:41 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
@ 2021-08-12 15:41 ` Jens Axboe
5 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 15:41 UTC
To: io-uring; +Cc: linux-block, hch, Christoph Hellwig, Jens Axboe
From: Christoph Hellwig <[email protected]>
Use bio_alloc_kiocb to dip into the percpu cache of bios when the
caller asks for it.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/block_dev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9ef4f1fc2cb0..3c7fb7106713 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -385,7 +385,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
(bdev_logical_block_size(bdev) - 1))
return -EINVAL;
- bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
+ bio = bio_alloc_kiocb(iocb, nr_pages, &blkdev_dio_pool);
dio = container_of(bio, struct blkdev_dio, bio);
dio->is_sync = is_sync = is_sync_kiocb(iocb);
@@ -513,7 +513,9 @@ blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
static __init int blkdev_init(void)
{
- return bioset_init(&blkdev_dio_pool, 4, offsetof(struct blkdev_dio, bio), BIOSET_NEED_BVECS);
+ return bioset_init(&blkdev_dio_pool, 4,
+ offsetof(struct blkdev_dio, bio),
+ BIOSET_NEED_BVECS|BIOSET_PERCPU_CACHE);
}
module_init(blkdev_init);
--
2.32.0
* Re: [PATCH 1/6] bio: optimize initialization of a bio
2021-08-12 15:41 ` [PATCH 1/6] bio: optimize initialization of a bio Jens Axboe
@ 2021-08-12 16:16 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:16 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH 2/6] fs: add kiocb alloc cache flag
2021-08-12 15:41 ` [PATCH 2/6] fs: add kiocb alloc cache flag Jens Axboe
@ 2021-08-12 16:20 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:20 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Thu, Aug 12, 2021 at 09:41:45AM -0600, Jens Axboe wrote:
> If this kiocb can safely use the polled bio allocation cache, then this
> flag must be set. Generally this can be set for polled IO, where we will
> not see IRQ completions of the request.
>
> Signed-off-by: Jens Axboe <[email protected]>
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH 3/6] bio: add allocation cache abstraction
2021-08-12 15:41 ` [PATCH 3/6] bio: add allocation cache abstraction Jens Axboe
@ 2021-08-12 16:32 ` Christoph Hellwig
2021-08-12 16:39 ` Jens Axboe
0 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:32 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, Thomas Gleixner
[adding Thomas for a cpu hotplug question]
> +static void bio_alloc_cache_destroy(struct bio_set *bs)
> +{
> + int cpu;
> +
> + if (!bs->cache)
> + return;
> +
> + preempt_disable();
> + cpuhp_state_remove_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
> + for_each_possible_cpu(cpu) {
> + struct bio_alloc_cache *cache;
> +
> + cache = per_cpu_ptr(bs->cache, cpu);
> + bio_alloc_cache_prune(cache, -1U);
> + }
> + preempt_enable();
If I understand the cpu hotplug state machine correctly, we should not
get any new cpu down callbacks after cpuhp_state_remove_instance_nocalls()
has returned, so what do we need the preempt disable here for?
> + /*
> + * Hot un-plug notifier for the per-cpu cache, if used
> + */
> + struct hlist_node cpuhp_dead;
Nit, even if we don't need the cpu up notification, the node actually
provides both. So I'd reword the comment and drop the _dead from the
member name.
Otherwise looks good:
Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-12 15:41 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
@ 2021-08-12 16:33 ` Christoph Hellwig
2021-08-12 17:31 ` Keith Busch
1 sibling, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:33 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Thu, Aug 12, 2021 at 09:41:47AM -0600, Jens Axboe wrote:
> The bio alloc cache relies on the fact that a polled bio will complete
> in process context, clear the cacheable flag if we disable polling
> for a given bio.
>
> Signed-off-by: Jens Axboe <[email protected]>
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
* Re: [PATCH 5/6] io_uring: enable use of bio alloc cache
2021-08-12 15:41 ` [PATCH 5/6] io_uring: enable use of bio alloc cache Jens Axboe
@ 2021-08-12 16:39 ` Christoph Hellwig
2021-08-12 16:46 ` Jens Axboe
0 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2021-08-12 16:39 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Thu, Aug 12, 2021 at 09:41:48AM -0600, Jens Axboe wrote:
> Mark polled IO as being safe for dipping into the bio allocation
> cache, in case the targeted bio_set has it enabled.
>
> This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
> ~3.3M IOPS.
Looks good,
Reviewed-by: Christoph Hellwig <[email protected]>
Didn't the cover letter say 3.5M+ IOPS, though?
* Re: [PATCH 3/6] bio: add allocation cache abstraction
2021-08-12 16:32 ` Christoph Hellwig
@ 2021-08-12 16:39 ` Jens Axboe
0 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 16:39 UTC
To: Christoph Hellwig; +Cc: io-uring, linux-block, Thomas Gleixner
On 8/12/21 10:32 AM, Christoph Hellwig wrote:
> [adding Thomas for a cpu hotplug question]
>
>> +static void bio_alloc_cache_destroy(struct bio_set *bs)
>> +{
>> + int cpu;
>> +
>> + if (!bs->cache)
>> + return;
>> +
>> + preempt_disable();
>> + cpuhp_state_remove_instance_nocalls(CPUHP_BIO_DEAD, &bs->cpuhp_dead);
>> + for_each_possible_cpu(cpu) {
>> + struct bio_alloc_cache *cache;
>> +
>> + cache = per_cpu_ptr(bs->cache, cpu);
>> + bio_alloc_cache_prune(cache, -1U);
>> + }
>> + preempt_enable();
>
> If I understand the cpu hotplug state machine correctly, we should not
> get any new cpu down callbacks after cpuhp_state_remove_instance_nocalls()
> has returned, so what do we need the preempt disable here for?
I don't think we strictly need it. I can kill it.
>> + /*
>> + * Hot un-plug notifier for the per-cpu cache, if used
>> + */
>> + struct hlist_node cpuhp_dead;
>
> Nit, even if we don't need the cpu up notification, the node actually
> provides both. So I'd reword the comment and drop the _dead from the
> member name.
Right, but we only sign up for the down call.
--
Jens Axboe
* Re: [PATCH 5/6] io_uring: enable use of bio alloc cache
2021-08-12 16:39 ` Christoph Hellwig
@ 2021-08-12 16:46 ` Jens Axboe
0 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 16:46 UTC
To: Christoph Hellwig; +Cc: io-uring, linux-block
On 8/12/21 10:39 AM, Christoph Hellwig wrote:
> On Thu, Aug 12, 2021 at 09:41:48AM -0600, Jens Axboe wrote:
>> Mark polled IO as being safe for dipping into the bio allocation
>> cache, in case the targeted bio_set has it enabled.
>>
>> This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
>> ~3.3M IOPS.
>
> Looks good,
>
> Reviewed-by: Christoph Hellwig <[email protected]>
>
> Didn't the cover letter say 3.5M+ IOPS, though?
It does indeed; we've had some recent improvements, so the range is now
more like 3.2M -> 3.5M IOPS with the cache. Didn't update it, as it's
still roughly the same 10% bump.
--
Jens Axboe
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-12 15:41 ` [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported Jens Axboe
2021-08-12 16:33 ` Christoph Hellwig
@ 2021-08-12 17:31 ` Keith Busch
2021-08-12 17:41 ` Jens Axboe
1 sibling, 1 reply; 21+ messages in thread
From: Keith Busch @ 2021-08-12 17:31 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Thu, Aug 12, 2021 at 09:41:47AM -0600, Jens Axboe wrote:
> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
> + /* can't support alloc cache if we turn off polling */
> + bio_clear_flag(bio, BIO_PERCPU_CACHE);
> bio->bi_opf &= ~REQ_HIPRI;
> + }
It looks like you should also clear BIO_PERCPU_CACHE if this bio gets
split in blk_bio_segment_split().
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-12 17:31 ` Keith Busch
@ 2021-08-12 17:41 ` Jens Axboe
2021-08-13 20:17 ` Keith Busch
0 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-12 17:41 UTC
To: Keith Busch; +Cc: io-uring, linux-block, hch
On 8/12/21 11:31 AM, Keith Busch wrote:
> On Thu, Aug 12, 2021 at 09:41:47AM -0600, Jens Axboe wrote:
>> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
>> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
>> + /* can't support alloc cache if we turn off polling */
>> + bio_clear_flag(bio, BIO_PERCPU_CACHE);
>> bio->bi_opf &= ~REQ_HIPRI;
>> + }
>
> It looks like you should also clear BIO_PERCPU_CACHE if this bio gets
> split in blk_bio_segment_split().
Indeed. Wonder if we should make that a small helper, as any clear of
REQ_HIPRI should clear BIO_PERCPU_CACHE as well.
diff --git a/block/blk-core.c b/block/blk-core.c
index 7e852242f4cc..d2722ecd4d9b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -821,11 +821,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
}
}
- if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
- /* can't support alloc cache if we turn off polling */
- bio_clear_flag(bio, BIO_PERCPU_CACHE);
- bio->bi_opf &= ~REQ_HIPRI;
- }
+ if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+ bio_clear_hipri(bio);
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
diff --git a/block/blk-merge.c b/block/blk-merge.c
index f8707ff7e2fc..985ca1116c32 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -285,7 +285,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
* iopoll in direct IO routine. Given performance gain of iopoll for
* big IO can be trival, disable iopoll when split needed.
*/
- bio->bi_opf &= ~REQ_HIPRI;
+ bio_clear_hipri(bio);
return bio_split(bio, sectors, GFP_NOIO, bs);
}
diff --git a/block/blk.h b/block/blk.h
index db6f82bbb683..7dba254b45f2 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -367,4 +367,11 @@ extern struct device_attribute dev_attr_events;
extern struct device_attribute dev_attr_events_async;
extern struct device_attribute dev_attr_events_poll_msecs;
+static inline void bio_clear_hipri(struct bio *bio)
+{
+ /* can't support alloc cache if we turn off polling */
+ bio_clear_flag(bio, BIO_PERCPU_CACHE);
+ bio->bi_opf &= ~REQ_HIPRI;
+}
+
#endif /* BLK_INTERNAL_H */
--
Jens Axboe
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-12 17:41 ` Jens Axboe
@ 2021-08-13 20:17 ` Keith Busch
2021-08-13 20:19 ` Jens Axboe
0 siblings, 1 reply; 21+ messages in thread
From: Keith Busch @ 2021-08-13 20:17 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Thu, Aug 12, 2021 at 11:41:58AM -0600, Jens Axboe wrote:
> Indeed. Wonder if we should make that a small helper, as any clear of
> REQ_HIPRI should clear BIO_PERCPU_CACHE as well.
>
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 7e852242f4cc..d2722ecd4d9b 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -821,11 +821,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
> }
> }
>
> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
> - /* can't support alloc cache if we turn off polling */
> - bio_clear_flag(bio, BIO_PERCPU_CACHE);
> - bio->bi_opf &= ~REQ_HIPRI;
> - }
> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
> + bio_clear_hipri(bio);
Since BIO_PERCPU_CACHE doesn't work without REQ_HIPRI, should this check
look more like this?
if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
bio->bi_opf &= ~REQ_HIPRI;
if (!(bio->bi_opf & REQ_HIPRI))
bio_clear_flag(bio, BIO_PERCPU_CACHE);
I realise the only BIO_PERCPU_CACHE user in this series never sets it
without REQ_HIPRI, but it looks like a problem waiting to happen if
nothing enforces this pairing: someone could set the CACHE flag on a
QUEUE_FLAG_POLL enabled queue without setting HIPRI and get the wrong
bio_put() action.
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-13 20:17 ` Keith Busch
@ 2021-08-13 20:19 ` Jens Axboe
2021-08-13 20:31 ` Keith Busch
0 siblings, 1 reply; 21+ messages in thread
From: Jens Axboe @ 2021-08-13 20:19 UTC
To: Keith Busch; +Cc: io-uring, linux-block, hch
On 8/13/21 2:17 PM, Keith Busch wrote:
> On Thu, Aug 12, 2021 at 11:41:58AM -0600, Jens Axboe wrote:
>> Indeed. Wonder if we should make that a small helper, as any clear of
>> REQ_HIPRI should clear BIO_PERCPU_CACHE as well.
>>
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index 7e852242f4cc..d2722ecd4d9b 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -821,11 +821,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
>> }
>> }
>>
>> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
>> - /* can't support alloc cache if we turn off polling */
>> - bio_clear_flag(bio, BIO_PERCPU_CACHE);
>> - bio->bi_opf &= ~REQ_HIPRI;
>> - }
>> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
>> + bio_clear_hipri(bio);
>
> Since BIO_PERCPU_CACHE doesn't work without REQ_HIPRI, should this check
> look more like this?
>
> if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
> bio->bi_opf &= ~REQ_HIPRI;
> if (!(bio->bi_opf & REQ_HIPRI))
> bio_clear_flag(bio, BIO_PERCPU_CACHE);
>
> I realise the only BIO_PERCPU_CACHE user in this series never sets it
> without REQ_HIPRI, but it looks like a problem waiting to happen if
> nothing enforces this pairing: someone could set the CACHE flag on a
> QUEUE_FLAG_POLL enabled queue without setting HIPRI and get the wrong
> bio_put() action.
I'd rather turn that into a WARN_ON or similar. But probably better to
do that on the freeing side, honestly. That'll be the most reliable way,
but a shame to add cycles to the hot path...
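Something like this in the bio_put() cache branch, as a rough sketch
(untested):

	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
		struct bio_alloc_cache *cache;

		/* catch callers that set the cache flag without HIPRI */
		WARN_ON_ONCE(!(bio->bi_opf & REQ_HIPRI));

		bio_uninit(bio);
		...
	}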
--
Jens Axboe
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-13 20:19 ` Jens Axboe
@ 2021-08-13 20:31 ` Keith Busch
2021-08-13 20:33 ` Jens Axboe
0 siblings, 1 reply; 21+ messages in thread
From: Keith Busch @ 2021-08-13 20:31 UTC
To: Jens Axboe; +Cc: io-uring, linux-block, hch
On Fri, Aug 13, 2021 at 02:19:11PM -0600, Jens Axboe wrote:
> On 8/13/21 2:17 PM, Keith Busch wrote:
> > On Thu, Aug 12, 2021 at 11:41:58AM -0600, Jens Axboe wrote:
> >> Indeed. Wonder if we should make that a small helper, as any clear of
> >> REQ_HIPRI should clear BIO_PERCPU_CACHE as well.
> >>
> >>
> >> diff --git a/block/blk-core.c b/block/blk-core.c
> >> index 7e852242f4cc..d2722ecd4d9b 100644
> >> --- a/block/blk-core.c
> >> +++ b/block/blk-core.c
> >> @@ -821,11 +821,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
> >> }
> >> }
> >>
> >> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
> >> - /* can't support alloc cache if we turn off polling */
> >> - bio_clear_flag(bio, BIO_PERCPU_CACHE);
> >> - bio->bi_opf &= ~REQ_HIPRI;
> >> - }
> >> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
> >> + bio_clear_hipri(bio);
> >
> > Since BIO_PERCPU_CACHE doesn't work without REQ_HIPRI, should this check
> > look more like this?
> >
> > if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
> > bio->bi_opf &= ~REQ_HIPRI;
> > if (!(bio->bi_opf & REQ_HIPRI))
> > bio_clear_flag(bio, BIO_PERCPU_CACHE);
> >
> > I realise the only BIO_PERCPU_CACHE user in this series never sets it
> > without REQ_HIPRI, but it looks like a problem waiting to happen if
> > nothing enforces this pairing: someone could set the CACHE flag on a
> > QUEUE_FLAG_POLL enabled queue without setting HIPRI and get the wrong
> > bio_put() action.
>
> I'd rather turn that into a WARN_ON or similar. But probably better to
> do that on the freeing side, honestly. That'll be the most reliable way,
> but a shame to add cycles to the hot path...
Yeah, it is a coding error if that happened, so a WARN sounds okay. I
also don't like adding these kinds of checks, so please feel free to not
include it if you think the usage is clear enough.
* Re: [PATCH 4/6] block: clear BIO_PERCPU_CACHE flag if polling isn't supported
2021-08-13 20:31 ` Keith Busch
@ 2021-08-13 20:33 ` Jens Axboe
0 siblings, 0 replies; 21+ messages in thread
From: Jens Axboe @ 2021-08-13 20:33 UTC
To: Keith Busch; +Cc: io-uring, linux-block, hch
On 8/13/21 2:31 PM, Keith Busch wrote:
> On Fri, Aug 13, 2021 at 02:19:11PM -0600, Jens Axboe wrote:
>> On 8/13/21 2:17 PM, Keith Busch wrote:
>>> On Thu, Aug 12, 2021 at 11:41:58AM -0600, Jens Axboe wrote:
>>>> Indeed. Wonder if we should make that a small helper, as any clear of
>>>> REQ_HIPRI should clear BIO_PERCPU_CACHE as well.
>>>>
>>>>
>>>> diff --git a/block/blk-core.c b/block/blk-core.c
>>>> index 7e852242f4cc..d2722ecd4d9b 100644
>>>> --- a/block/blk-core.c
>>>> +++ b/block/blk-core.c
>>>> @@ -821,11 +821,8 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
>>>> }
>>>> }
>>>>
>>>> - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) {
>>>> - /* can't support alloc cache if we turn off polling */
>>>> - bio_clear_flag(bio, BIO_PERCPU_CACHE);
>>>> - bio->bi_opf &= ~REQ_HIPRI;
>>>> - }
>>>> + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
>>>> + bio_clear_hipri(bio);
>>>
> >> Since BIO_PERCPU_CACHE doesn't work without REQ_HIPRI, should this check
>>> look more like this?
>>>
>>> if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
>>> bio->bi_opf &= ~REQ_HIPRI;
>>> if (!(bio->bi_opf & REQ_HIPRI))
>>> bio_clear_flag(bio, BIO_PERCPU_CACHE);
>>>
>>> I realise the only BIO_PERCPU_CACHE user in this series never sets it
>>> without REQ_HIPRI, but it looks like a problem waiting to happen if
>>> nothing enforces this pairing: someone could set the CACHE flag on a
>>> QUEUE_FLAG_POLL enabled queue without setting HIPRI and get the wrong
>>> bio_put() action.
>>
>> I'd rather turn that into a WARN_ON or similar. But probably better to
>> do that on the freeing side, honestly. That'll be the most reliable way,
>> but a shame to add cycles to the hot path...
>
> Yeah, it is a coding error if that happened, so a WARN sounds okay. I
> also don't like adding these kinds of checks, so please feel free to not
> include it if you think the usage is clear enough.
Just have to watch for new additions of IOCB_ALLOC_CACHE, which
thankfully shouldn't be too bad.
--
Jens Axboe