public inbox for [email protected]
 help / color / mirror / Atom feed
From: Hao Xu <[email protected]>
To: Pavel Begunkov <[email protected]>,
	Jens Axboe <[email protected]>,
	[email protected]
Subject: Re: [PATCH 2/2] io_uring: switch cancel_hash to use per list spinlock
Date: Mon, 30 May 2022 14:52:47 +0800	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 5/30/22 08:18, Pavel Begunkov wrote:
> On 5/30/22 00:34, Jens Axboe wrote:
>> On 5/29/22 4:50 PM, Pavel Begunkov wrote:
>>> On 5/29/22 19:40, Jens Axboe wrote:
>>>> On 5/29/22 12:07 PM, Hao Xu wrote:
>>>>> On 5/30/22 00:25, Jens Axboe wrote:
>>>>>> On 5/29/22 10:20 AM, Hao Xu wrote:
>>>>>>> From: Hao Xu <[email protected]>
>>>>>>>
>>>>>>> From: Hao Xu <[email protected]>
>>>>>>>
>>>>>>> Use per list lock for cancel_hash, this removes some completion lock
>>>>>>> invocation and remove contension between different cancel_hash 
>>>>>>> entries
>>>>>>
>>>>>> Interesting, do you have any numbers on this?
>>>>>
>>>>> Just Theoretically for now, I'll do some tests tomorrow. This is
>>>>> actually RFC, forgot to change the subject.
>>>>>
>>>>>>
>>>>>> Also, I'd make a hash bucket struct:
>>>>>>
>>>>>> struct io_hash_bucket {
>>>>>>       spinlock_t lock;
>>>>>>       struct hlist_head list;
>>>>>> };
>>>>>>
>>>>>> rather than two separate structs, that'll have nicer memory 
>>>>>> locality too
>>>>>> and should further improve it. Could be done as a prep patch with the
>>>>>> old locking in place, making the end patch doing the per-bucket lock
>>>>>> simpler as well.
>>>>>
>>>>> Sure, if the test number make sense, I'll send v2. I'll test the
>>>>> hlist_bl list as well(the comment of it says it is much slower than
>>>>> normal spin_lock, but we may not care the efficiency of poll
>>>>> cancellation very much?).
>>>>
>>>> I don't think the bit spinlocks are going to be useful, we should
>>>> stick with a spinlock for this. They are indeed slower and generally 
>>>> not
>>>> used for that reason. For a use case where you need a ton of locks and
>>>> saving the 4 bytes for a spinlock would make sense (or maybe not
>>>> changing some struct?), maybe they have a purpose. But not for this.
>>>
>>> We can put the cancel hashes under uring_lock and completely kill
>>> the hash spinlocking (2 lock/unlock pairs per single-shot). The code
>>> below won't even compile and missing cancellation bits, I'll pick it
>>> up in a week.
>>>
>>> Even better would be to have two hash tables, and auto-magically apply
>>> the feature to SINGLE_SUBMITTER, SQPOLL (both will have uring_lock held)
>>> and apoll (need uring_lock after anyway).
>>
>> My hope was that it'd take us closer to being able to use more granular
>> locking for hashing in general. I don't care too much about the
>> cancelation, but the normal hash locking would be useful to do.
>>
>> However, for cancelations, under uring_lock would indeed be preferable
>> to doing per-bucket locks there. Guess I'll wait and see what your final
>> patch looks like, not sure why it'd be a ctx conditional?
> 
> It replaces 2 spin lock/unlock with one io_tw_lock() in the completion
> path, which is done once per tw batch and grabbed anyway if
> there is no contention (see handle_tw_list()).
> 
> It could be unconditional, but I'd say for those 3 cases we have
> non-existing chance to regress perf/latency, but I can think of
> some cases where it might screw latencies, all share io_uring
> b/w threads.
> 
> Should benefit the cancellation path as well, but I don't care
> about it as well.
> 
>> What about io_poll_remove_all()?
> 
> As mentioned, it's not handled in the diff, but easily doable,
> it should just traverse both hash tables.
> 

Two hash tables looks good to me. If I don't get you wrong, one table
under uring_lock, the other one for normal handling(like per bucket
locking)?


  reply	other threads:[~2022-05-30  6:52 UTC|newest]

Thread overview: 15+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2022-05-29 16:19 [PATCH 0/2] cancel_hash per entry lock Hao Xu
2022-05-29 16:19 ` [PATCH 1/2] io_uring: add an argument for io_poll_disarm() Hao Xu
2022-05-29 16:20 ` [PATCH 2/2] io_uring: switch cancel_hash to use per list spinlock Hao Xu
2022-05-29 16:25   ` Jens Axboe
2022-05-29 18:07     ` Hao Xu
2022-05-29 18:40       ` Jens Axboe
2022-05-29 22:50         ` Pavel Begunkov
2022-05-29 23:34           ` Jens Axboe
2022-05-30  0:18             ` Pavel Begunkov
2022-05-30  6:52               ` Hao Xu [this message]
2022-05-30  9:35                 ` Pavel Begunkov
2022-05-30  6:38           ` Hao Xu
2022-05-30  6:59             ` Hao Xu
2022-05-30  9:39             ` Pavel Begunkov
2022-05-30 13:33     ` Hao Xu

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    [email protected] \
    [email protected] \
    [email protected] \
    [email protected] \
    [email protected] \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox