From: Hao Xu <[email protected]>
To: Pavel Begunkov <[email protected]>,
Jens Axboe <[email protected]>,
[email protected]
Subject: Re: [PATCH 2/2] io_uring: switch cancel_hash to use per list spinlock
Date: Mon, 30 May 2022 14:38:13 +0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 5/30/22 06:50, Pavel Begunkov wrote:
> On 5/29/22 19:40, Jens Axboe wrote:
>> On 5/29/22 12:07 PM, Hao Xu wrote:
>>> On 5/30/22 00:25, Jens Axboe wrote:
>>>> On 5/29/22 10:20 AM, Hao Xu wrote:
>>>>> From: Hao Xu <[email protected]>
>>>>>
>>>>> Use a per-list lock for cancel_hash. This removes some completion_lock
>>>>> invocations and removes contention between different cancel_hash entries.
>>>>
>>>> Interesting, do you have any numbers on this?
>>>
>>> Just theoretical for now; I'll do some tests tomorrow. This is
>>> actually an RFC, I forgot to change the subject.
>>>
>>>>
>>>> Also, I'd make a hash bucket struct:
>>>>
>>>> struct io_hash_bucket {
>>>>         spinlock_t        lock;
>>>>         struct hlist_head list;
>>>> };
>>>>
>>>> rather than two separate structs; that'll have nicer memory locality
>>>> too and should further improve it. Could be done as a prep patch with
>>>> the old locking in place, making the end patch that does the
>>>> per-bucket locking simpler as well.
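To make sure I read the suggestion right: with the combined bucket struct,
insertion would look roughly like the sketch below. Treat it as a sketch
only -- the table size, the static array and the user_data hashing are
placeholders here; in the real code the table would hang off the ctx, with
each bucket's lock initialized via spin_lock_init() at ring setup.

struct io_hash_bucket {
        spinlock_t        lock;
        struct hlist_head list;
};

#define IO_CANCEL_HASH_BITS 7 /* placeholder size */
static struct io_hash_bucket cancel_hash[1 << IO_CANCEL_HASH_BITS];

static void io_poll_req_insert(struct io_kiocb *req)
{
        struct io_hash_bucket *hb =
                &cancel_hash[hash_long(req->cqe.user_data, IO_CANCEL_HASH_BITS)];

        /* per-bucket lock instead of ctx->completion_lock */
        spin_lock(&hb->lock);
        hlist_add_head(&req->hash_node, &hb->list);
        spin_unlock(&hb->lock);
}

Compared to two parallel arrays, one cache line can then hold both the
lock and the list head it protects.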
>>>
>>> Sure, if the test numbers make sense, I'll send a v2. I'll test the
>>> hlist_bl list as well (its comment says it is much slower than a
>>> normal spin_lock, but maybe we don't care much about the efficiency
>>> of poll cancellation?).
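For reference, the hlist_bl variant I was thinking of would be shaped
roughly like this (again only a sketch: io_kiocb has no hlist_bl_node
field today, so hash_node_bl is made up, and the sizing reuses the
placeholder above). The draw is that the lock is bit 0 of the head
pointer itself, so each bucket stays a single word:

#include <linux/list_bl.h>

static struct hlist_bl_head cancel_hash_bl[1 << IO_CANCEL_HASH_BITS];

static void io_poll_req_insert_bl(struct io_kiocb *req)
{
        struct hlist_bl_head *head =
                &cancel_hash_bl[hash_long(req->cqe.user_data, IO_CANCEL_HASH_BITS)];

        hlist_bl_lock(head);    /* bit_spin_lock() on bit 0 of the head pointer */
        hlist_bl_add_head(&req->hash_node_bl, head);
        hlist_bl_unlock(head);
}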
>>
>> I don't think the bit spinlocks are going to be useful; we should
>> stick with a spinlock for this. They are indeed slower and generally
>> not used for that reason. They may have a purpose for a use case that
>> needs a ton of locks, where saving the 4 bytes of a spinlock matters
>> (or where some struct can't grow), but not for this.
>
> We can put the cancel hashes under uring_lock and completely kill
> the hash spinlocking (2 lock/unlock pairs per single-shot). The code
> below won't even compile and is missing the cancellation bits; I'll
> pick it up in a week.
>
> Even better would be to have two hash tables, and auto-magically apply
> the feature to SINGLE_SUBMITTER and SQPOLL (both will have uring_lock
> held) and to apoll (which needs uring_lock afterwards anyway).
>
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 6be21967959d..191fa7f31610 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -7120,12 +7120,20 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
>          }
>
>          io_poll_remove_entries(req);
> -        spin_lock(&ctx->completion_lock);
> -        hash_del(&req->hash_node);
> -        __io_req_complete_post(req, req->cqe.res, 0);
> -        io_commit_cqring(ctx);
> -        spin_unlock(&ctx->completion_lock);
> -        io_cqring_ev_posted(ctx);
> +
> +        if (ctx->flags & IORING_MUTEX_HASH) {
> +                io_tw_lock(ctx, locked);
> +                hash_del(&req->hash_node);
> +                io_req_complete_state(req, req->cqe.res, 0);
> +                io_req_add_compl_list(req);
> +        } else {
> +                spin_lock(&ctx->completion_lock);
> +                hash_del(&req->hash_node);
> +                __io_req_complete_post(req, req->cqe.res, 0);
> +                io_commit_cqring(ctx);
> +                spin_unlock(&ctx->completion_lock);
> +                io_cqring_ev_posted(ctx);
> +        }
>  }
>
>  static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
> @@ -7138,9 +7146,14 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
>                  return;
>
>          io_poll_remove_entries(req);
> -        spin_lock(&ctx->completion_lock);
> -        hash_del(&req->hash_node);
> -        spin_unlock(&ctx->completion_lock);
> +        if (ctx->flags & IORING_MUTEX_HASH) {
> +                io_tw_lock(ctx, locked);
> +                hash_del(&req->hash_node);
> +        } else {
> +                spin_lock(&ctx->completion_lock);
> +                hash_del(&req->hash_node);
> +                spin_unlock(&ctx->completion_lock);
> +        }
>
>          if (!ret)
>                  io_req_task_submit(req, locked);
> @@ -7332,9 +7345,13 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
>                  return 0;
>          }
>
> -        spin_lock(&ctx->completion_lock);
> -        io_poll_req_insert(req);
> -        spin_unlock(&ctx->completion_lock);
> +        if (ctx->flags & IORING_MUTEX_HASH) {
Does IORING_MUTEX_HASH exclude IOSQE_ASYNC as well? IOSQE_ASYNC is
uncommon, but users may still set it.
> +                io_poll_req_insert(req);
> +        } else {
> +                spin_lock(&ctx->completion_lock);
> +                io_poll_req_insert(req);
> +                spin_unlock(&ctx->completion_lock);
> +        }
>
>          if (mask) {
>                  /* can't multishot if failed, just queue the event we've got */
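For comparison, with the per-bucket spinlocks this series proposes, the
cancellation side would only ever touch one bucket, roughly as below
(same placeholder table as in the earlier sketch; request lifetime
handling is left out for brevity):

static struct io_kiocb *io_poll_find_and_unhash(u64 user_data)
{
        struct io_hash_bucket *hb =
                &cancel_hash[hash_long(user_data, IO_CANCEL_HASH_BITS)];
        struct io_kiocb *req;

        spin_lock(&hb->lock);
        hlist_for_each_entry(req, &hb->list, hash_node) {
                if (req->cqe.user_data == user_data) {
                        /* unhash under the same bucket lock that guarded insert */
                        hlist_del_init(&req->hash_node);
                        spin_unlock(&hb->lock);
                        return req;
                }
        }
        spin_unlock(&hb->lock);
        return NULL;
}

The contention win comes from never taking ctx->completion_lock here at all.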
Thread overview: 15+ messages
2022-05-29 16:19 [PATCH 0/2] cancel_hash per entry lock Hao Xu
2022-05-29 16:19 ` [PATCH 1/2] io_uring: add an argument for io_poll_disarm() Hao Xu
2022-05-29 16:20 ` [PATCH 2/2] io_uring: switch cancel_hash to use per list spinlock Hao Xu
2022-05-29 16:25 ` Jens Axboe
2022-05-29 18:07 ` Hao Xu
2022-05-29 18:40 ` Jens Axboe
2022-05-29 22:50 ` Pavel Begunkov
2022-05-29 23:34 ` Jens Axboe
2022-05-30 0:18 ` Pavel Begunkov
2022-05-30 6:52 ` Hao Xu
2022-05-30 9:35 ` Pavel Begunkov
2022-05-30 6:38 ` Hao Xu [this message]
2022-05-30 6:59 ` Hao Xu
2022-05-30 9:39 ` Pavel Begunkov
2022-05-30 13:33 ` Hao Xu