From: Jens Axboe <[email protected]>
To: Gabriel Krisman Bertazi <[email protected]>
Cc: Breno Leitao <[email protected]>,
	[email protected], [email protected],
	[email protected], [email protected], [email protected],
	[email protected]
Subject: Re: [PATCH v3 1/2] io_uring: Move from hlist to io_wq_work_node
Date: Fri, 24 Feb 2023 12:41:58 -0700
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 2/24/23 11:32 AM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <[email protected]> writes:
> 
>> On 2/23/23 12:02 PM, Gabriel Krisman Bertazi wrote:
>>> Breno Leitao <[email protected]> writes:
>>>
>>>> Having cache entries linked using the hlist format brings no benefit,
>>>> and also requires an unnecessary extra pointer per cache entry.
>>>>
>>>> Use the internal io_wq_work_node singly linked list for the internal
>>>> alloc caches (async_msghdr and async_poll) instead.
>>>>
>>>> This is required to be able to use KASAN on cache entries, since with
>>>> a singly linked list we do not need to touch unused (and poisoned)
>>>> cache entries when adding more entries to the list.
>>>>
>>>
>>> Looking at this patch, I wonder if it could go in the opposite direction
>>> instead, and drop io_wq_work_node entirely in favor of list_head. :)
>>>
>>> Do we gain anything beyond avoiding the back pointer by keeping a
>>> custom list implementation, instead of using the interface available
>>> in list.h, which developers already know how to use and which has
>>> extra features like poisoning and debug checks?
>>
>> list_head is twice as big, that's the main motivation. This impacts
>> memory usage (obviously), but also cache behavior when adding/removing
>> entries.
> 
> Right. But this is true all around the kernel.  Many (most?) places
> that use list_head don't even need to touch list_head->prev.  And
> list_head is usually embedded in larger structures where the cost of
> the extra pointer is insignificant.  I suspect the memory footprint
> shouldn't really be the problem here.

I may be in the minority here in caring deeply about even little details
in terms of memory footprint and how many cachelines we touch... E.g. if
we can embed 8 bytes rather than 16, then why not? Particularly for
cases where we may have a lot of these structures.

But it's of course always a tradeoff.
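
For reference, these are the two link types being compared (struct
list_head as defined in include/linux/types.h, struct io_wq_work_node
from the io_uring headers); sizes assume a 64-bit kernel:

struct list_head {                      /* 16 bytes */
        struct list_head *next, *prev;
};

struct io_wq_work_node {                /* 8 bytes */
        struct io_wq_work_node *next;
};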

> This specific patch is extending io_wq_work_node into io_cache_entry,
> where the increased size would not matter.  In fact, for the cached
> structures, the cache layout and memory footprint don't even seem to
> change, as io_cache_entry already sits in a union larger than itself
> that doesn't cross cachelines (io_async_msghdr, async_poll).

True, for the caching case, the member size doesn't matter. At least not
immediately. Sometimes things are shuffled around and optimized further,
and then you may need to find 8 bytes to avoid bloating the struct.
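
To put the union argument in code: a union is sized by its largest
member, so the link type makes no difference as long as something bigger
shares the union. A minimal sketch, loosely modeled on io_async_msghdr
(struct io_cache_entry is as it looks after this patch; the other field
names here are illustrative, not the exact io_uring definitions):

struct io_cache_entry {
        struct io_wq_work_node node;
};

struct example_async_msghdr {
        union {
                /* the iovec array dominates the union: 8 * 16 = 128 bytes */
                struct iovec            fast_iov[8];
                /* the cache link only needs 8 of those bytes */
                struct io_cache_entry   cache;
        };
        /* ... remaining members ... */
};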

> The other structures currently embedding struct io_wq_work_node are
> io_kiocb (216 bytes long, per request) and io_ring_ctx (1472 bytes long,
> per ring).  So it is not like we are saving a lot of memory with a
> singly linked list.  A more compact cacheline still makes sense, but I
> think the only case (if any) where there might be any gain is io_kiocb?

Yeah, the ring is already pretty big. It is still organized in cachelines
for the bits that matter, so it's nice to keep those sections as small as
possible. Maybe bumping the size will waste an extra cacheline. Or, more
commonly, later additions end up spilling into the next cacheline rather
than still fitting.
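
A toy example of that effect (purely illustrative, not an actual
io_uring layout): a struct whose payload plus link exactly fills one
64-byte cacheline spills into a second one as soon as the link grows
from 8 to 16 bytes:

/* 56 bytes of payload + 8-byte link = 64 bytes: one cacheline */
struct compact_req {
        char                    payload[56];
        struct io_wq_work_node  link;
};

/* 56 bytes of payload + 16-byte list_head = 72 bytes: two cachelines */
struct bloated_req {
        char                    payload[56];
        struct list_head        link;
};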

> I don't strongly oppose this patch, of course. But I think it'd be worth
> killing io_uring/slist.h entirely in the future instead of adding more
> users.  I intend to give that approach a try, if there's a way to do it
> while keeping io_kiocb at its current size.

At least it's consistent within io_uring, which also means something.
I'd be fine with taking a look at such a patch, but let's please keep it
outside the scope of this change.
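
For anyone following along, the KASAN argument from the patch
description boils down to the difference between the two add paths.
hlist_add_head() (include/linux/list.h) has to write into the entry
that is currently at the head, while wq_stack_add_head()
(io_uring/slist.h) only touches the new node and the list head:

/*
 * hlist: adding a node writes first->pprev, i.e. it dereferences the
 * old head entry -- which, for the alloc cache, is an unused and
 * KASAN-poisoned object.
 */
static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
        struct hlist_node *first = h->first;

        WRITE_ONCE(n->next, first);
        if (first)
                WRITE_ONCE(first->pprev, &n->next);
        WRITE_ONCE(h->first, n);
        WRITE_ONCE(n->pprev, &h->first);
}

/*
 * Singly linked stack: adding only writes the new node and the head,
 * so the poisoned entries already sitting in the cache are never
 * dereferenced.
 */
static inline void wq_stack_add_head(struct io_wq_work_node *node,
                                     struct io_wq_work_node *stack)
{
        node->next = stack->next;
        stack->next = node;
}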

-- 
Jens Axboe


