From: Jens Axboe <[email protected]>
To: Christoph Hellwig <[email protected]>
Cc: [email protected], [email protected]
Subject: Re: [PATCH 3/6] bio: add allocation cache abstraction
Date: Thu, 12 Aug 2021 09:08:29 -0600 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 8/12/21 1:01 AM, Christoph Hellwig wrote:
> On Wed, Aug 11, 2021 at 01:35:30PM -0600, Jens Axboe wrote:
>> +	struct bio *bio;
>> +	unsigned int i;
>> +
>> +	i = 0;
>
> Initialize at declaration time?
Sure, done.
>> +static inline bool __bio_put(struct bio *bio)
>> +{
>> +	if (!bio_flagged(bio, BIO_REFFED))
>> +		return true;
>> +
>> +	BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
>> +
>> +	/*
>> +	 * last put frees it
>> +	 */
>> +	return atomic_dec_and_test(&bio->__bi_cnt);
>> +}
>
> Please avoid this helper, we can trivially do the check inside of bio_put:
>
> 	if (bio_flagged(bio, BIO_REFFED)) {
> 		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
> 		if (!atomic_dec_and_test(&bio->__bi_cnt))
> 			return;
> 	}
Done
>> -	bio_free(bio);
>> +	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
>> +		struct bio_alloc_cache *cache;
>> +
>> +		bio_uninit(bio);
>> +		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
>> +		bio_list_add_head(&cache->free_list, bio);
>> +		cache->nr++;
>> +		if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
>
> Folding the increment as a prefix here would make the increment and test
> semantics a little more obvious.
I don't really care that deeply about it, but I generally prefer keeping
them separate as it makes it easier to read (for me). But I'll change it.
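FWIW, with your check folded into bio_put() and the increment made a
prefix, the tail of bio_put() comes out roughly like this (just a
sketch - the pruning helper called past the threshold isn't part of the
hunk quoted above, so its name here is only illustrative):

void bio_put(struct bio *bio)
{
	if (bio_flagged(bio, BIO_REFFED)) {
		BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
		if (!atomic_dec_and_test(&bio->__bi_cnt))
			return;
	}

	if (bio_flagged(bio, BIO_PERCPU_CACHE)) {
		struct bio_alloc_cache *cache;

		/* return the bio to this CPU's cache instead of freeing it */
		bio_uninit(bio);
		cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
		bio_list_add_head(&cache->free_list, bio);
		/* prefix increment folds the bump into the threshold test */
		if (++cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
			bio_alloc_cache_prune(cache, ALLOC_CACHE_MAX); /* illustrative */
		put_cpu();
	} else {
		bio_free(bio);
	}
}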
>> +struct bio *bio_alloc_kiocb(struct kiocb *kiocb, gfp_t gfp,
>> +			    unsigned short nr_vecs, struct bio_set *bs)
>> +{
>> +	struct bio_alloc_cache *cache = NULL;
>> +	struct bio *bio;
>> +
>> +	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
>> +		goto normal_alloc;
>> +
>> +	cache = per_cpu_ptr(bs->cache, get_cpu());
>> +	bio = bio_list_pop(&cache->free_list);
>> +	if (bio) {
>> +		cache->nr--;
>> +		put_cpu();
>> +		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>> +		return bio;
>> +	}
>> +	put_cpu();
>> +normal_alloc:
>> +	bio = bio_alloc_bioset(gfp, nr_vecs, bs);
>> +	if (cache)
>> +		bio_set_flag(bio, BIO_PERCPU_CACHE);
>> +	return bio;
>
> The goto here is pretty obfuscating and adds an extra branch to the fast
> path.
I don't agree, and it's not the fast path - the fast path is popping a
bio off the list, not hitting the allocator.
> Also I don't think we need the gfp argument here at all - it should
> always be GFP_KERNEL. In fact I plan to kill these arguments from much
> of the block layer as the places that need NOIO or NOFS semantics can
> and should move to set that in the per-thread context.
Yeah, I actually kept it on purpose for future users, but let's just
kill it - both of the potential use cases I have use GFP_KERNEL anyway.
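So it'd shrink to something like this (a sketch: same body as the hunk
above, just with the gfp argument dropped and GFP_KERNEL used directly):

struct bio *bio_alloc_kiocb(struct kiocb *kiocb, unsigned short nr_vecs,
			    struct bio_set *bs)
{
	struct bio_alloc_cache *cache = NULL;
	struct bio *bio;

	/* only small, cache-flagged requests go through the per-cpu cache */
	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
		goto normal_alloc;

	cache = per_cpu_ptr(bs->cache, get_cpu());
	bio = bio_list_pop(&cache->free_list);
	if (bio) {
		cache->nr--;
		put_cpu();
		bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
		bio_set_flag(bio, BIO_PERCPU_CACHE);
		return bio;
	}
	put_cpu();
normal_alloc:
	bio = bio_alloc_bioset(GFP_KERNEL, nr_vecs, bs);
	if (cache)
		bio_set_flag(bio, BIO_PERCPU_CACHE);
	return bio;
}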
>> -static inline struct bio *bio_list_pop(struct bio_list *bl)
>> +static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
>>  {
>> -	struct bio *bio = bl->head;
>> -
>> -	if (bio) {
>> +	if (head) {
>>  		bl->head = bl->head->bi_next;
>>  		if (!bl->head)
>>  			bl->tail = NULL;
>>
>> -		bio->bi_next = NULL;
>> +		head->bi_next = NULL;
>>  	}
>> +}
>> +
>> +static inline struct bio *bio_list_pop(struct bio_list *bl)
>> +{
>> +	struct bio *bio = bl->head;
>>
>> +	bio_list_del_head(bl, bio);
>>  	return bio;
>>  }
>
> No need for this change.
Leftover from earlier series, killed it now.
>> @@ -699,6 +706,12 @@ struct bio_set {
>>  	struct kmem_cache *bio_slab;
>>  	unsigned int front_pad;
>>
>> +	/*
>> +	 * per-cpu bio alloc cache and notifier
>> +	 */
>> +	struct bio_alloc_cache __percpu *cache;
>> +	struct hlist_node cpuhp_dead;
>> +
>
> I'd keep the hotplug list entry at the end instead of bloating the
> cache line used in the fast path.
Good point, moved to the end.
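To illustrate, the struct would then look roughly like this (a sketch -
the members between the cache pointer and the hotplug node are just the
existing bio_set fields, quoted from memory):

struct bio_set {
	struct kmem_cache *bio_slab;
	unsigned int front_pad;

	/*
	 * per-cpu bio alloc cache
	 */
	struct bio_alloc_cache __percpu *cache;

	mempool_t bio_pool;
	mempool_t bvec_pool;
#if defined(CONFIG_BLK_DEV_INTEGRITY)
	mempool_t bio_integrity_pool;
	mempool_t bvec_integrity_pool;
#endif

	unsigned int back_pad;
	/*
	 * Deadlock avoidance for stacking block drivers: see comments in
	 * bio_alloc_bioset() for details
	 */
	spinlock_t		rescue_lock;
	struct bio_list		rescue_list;
	struct work_struct	rescue_work;
	struct workqueue_struct	*rescue_workqueue;

	/*
	 * cpu hotplug notifier for the per-cpu cache, kept at the tail so
	 * it stays off the cache line used in the allocation fast path
	 */
	struct hlist_node cpuhp_dead;
};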
--
Jens Axboe