From: Hao Xu <[email protected]>
To: Pavel Begunkov <[email protected]>, Jens Axboe <[email protected]>
Cc: [email protected], Joseph Qi <[email protected]>
Subject: Re: [PATCH 2/2] io_uring: don't hold uring_lock when calling io_run_task_work*
Date: Sat, 6 Feb 2021 19:34:21 +0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 2021/2/5 6:18 PM, Pavel Begunkov wrote:
> On 05/02/2021 09:57, Hao Xu wrote:
>> On 2021/2/4 11:26 PM, Pavel Begunkov wrote:
>>> On 04/02/2021 11:17, Pavel Begunkov wrote:
>>>> On 04/02/2021 03:25, Hao Xu wrote:
>>>>> On 2021/2/4 12:45 AM, Pavel Begunkov wrote:
>>>>>> On 03/02/2021 16:35, Pavel Begunkov wrote:
>>>>>>> On 03/02/2021 14:57, Hao Xu wrote:
>>>>>>>> This is caused by calling io_run_task_work_sig() to do work under
>>>>>>>> uring_lock while the caller, io_sqe_files_unregister(), already holds
>>>>>>>> uring_lock. We need to check whether uring_lock is held by us when
>>>>>>>> unlocking around io_run_task_work_sig(), since there are code paths
>>>>>>>> that reach that place without uring_lock held.
>>>>>>>
>>>>>>> 1. We don't want to allow parallel io_sqe_files_unregister()s;
>>>>>>> that's synchronised by uring_lock atm. Otherwise it's buggy.
>>>>> Here "since there are code paths down to that place without uring_lock held" I mean code path of io_ring_ctx_free().
>>>>
>>>> I guess this refers to patch 1/2, but let me outline the problem again:
>>>> if you have two userspace threads sharing a ring, they can both call
>>>> the files_unregister syscall in parallel. That's a potential double
>>>> percpu_ref_kill(&data->refs), or even worse.
>>>>
>>>> Same for 2, but racing for the table and refs.
>>>
>>> There are a couple of thoughts on this:
>>>
>>> 1. I don't like waiting without holding the lock in general, because
>>> someone can submit more reqs in between, indefinitely postponing
>>> the files_unregister.
>> Thanks, Pavel.
>> I thought about this issue before, until I saw this in __io_uring_register:
>>
>>    if (io_register_op_must_quiesce(opcode)) {
>>            percpu_ref_kill(&ctx->refs);
> 
> It is different because of this kill; it will prevent submissions.
> 
I saw the percpu_ref_is_dying(&ctx->refs) check in the sq thread but not
in io_uring_enter(), so I guess there could be another thread doing
io_uring_enter() and submitting sqes.
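
For reference, this is the kind of gate I mean, paraphrased from my reading
of the sq thread loop (treat the exact form as an assumption):

	/* sq thread side (paraphrased): stop touching a dying ctx */
	if (percpu_ref_is_dying(&ctx->refs))
		continue;

	/*
	 * io_uring_enter() has no equivalent check, so a second thread can
	 * keep entering and submitting sqes while we wait for the refs.
	 */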
>>
>>            /*
>>             * Drop uring mutex before waiting for references to exit. If
>>             * another thread is currently inside io_uring_enter() it might
>>             * need to grab the uring_lock to make progress. If we hold it
>>             * here across the drain wait, then we can deadlock. It's safe
>>             * to drop the mutex here, since no new references will come in
>>             * after we've killed the percpu ref.
>>             */
>>            mutex_unlock(&ctx->uring_lock);
>>            do {
>>                    ret = wait_for_completion_interruptible(&ctx->ref_comp);
>>                    if (!ret)
>>                            break;
>>                    ret = io_run_task_work_sig();
>>                    if (ret < 0)
>>                            break;
>>            } while (1);
>>
>>            mutex_lock(&ctx->uring_lock);
>>
>>            if (ret) {
>>                    percpu_ref_resurrect(&ctx->refs);
>>                    goto out_quiesce;
>>            }
>>    }
>>
>> So now I guess the postponement issue also exists in the above code, since
>> there could be another thread submitting reqs to the shared ctx (i.e. the shared uring fd).
>>
>>> 2. I wouldn't want to add checks for that in the submission path.
>>>
>>> So, a solution I'm thinking about is to wait under the lock; if we need
>>> to run task_works -- briefly drop the lock, run the task_works and then
>>> do the whole unregister all over again. Keep an eye on the refs, e.g. we
>>> probably need to resurrect them.
>>>
>>> Because the current task is busy, nobody submits new requests on
>>> its behalf, so there can't be an infinite number of in-task_work
>>> reqs; eventually it will just go wait/sleep forever (if not
>>> signalled) under the mutex, so we get a kind of upper bound on
>>> time.
Sorry Pavel, I don't quite understand "so we get a kind of upper bound
on time". :(
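
If I read the rest of it correctly, the retry loop you propose looks roughly
like this (my own sketch; the field names are guesses borrowed from the
quiesce code quoted above, not your actual plan):

	for (;;) {
		ret = wait_for_completion_interruptible(&data->done);
		if (!ret)
			break;		/* refs dropped to zero, go unregister */
		/* task_work may need uring_lock to progress: drop it briefly */
		mutex_unlock(&ctx->uring_lock);
		ret = io_run_task_work_sig();
		mutex_lock(&ctx->uring_lock);
		if (ret < 0) {
			/* signalled: undo the kill and bail out */
			percpu_ref_resurrect(&data->refs);
			return ret;
		}
		/* the lock was dropped, so re-check state and start over */
	}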
>>>
>> Do you mean sleeping with a timeout rather than just sleeping? I think this works; I'll work on it and think through the details.
> 
> Without a timeout -- it will be awakened when new task_works come in,
> but Jens knows better.
So can we just put an unlock/lock pair around io_run_task_work_sig()
to address issue 2?
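
Concretely, the minimal pattern I have in mind (sketch only, mirroring the
quiesce code from __io_uring_register() quoted above):

	/* briefly drop the lock so task_work that needs it can run */
	mutex_unlock(&ctx->uring_lock);
	ret = io_run_task_work_sig();
	mutex_lock(&ctx->uring_lock);
	/* the lock was dropped: re-check the ctx state before continuing */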
> 
>> But before addressing this issue, should I first send a patch that just fixes the deadlock?
> 
> Do you mean the deadlock that 2/2 was trying to fix? Or something else? The
> thread is all about fixing it, but doing it right. Not sure there is a need
> for a faster but incomplete solution, if that's what you meant.
> 

