From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Cc: Dylan Yudaken <[email protected]>
Subject: Re: [PATCH 1/1] io_uring: optimise locking for local tw with submit_wait
Date: Thu, 6 Oct 2022 15:11:46 -0600 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 10/6/22 3:09 PM, Pavel Begunkov wrote:
> On 10/6/22 21:59, Jens Axboe wrote:
>> On 10/6/22 2:42 PM, Pavel Begunkov wrote:
>>> Running local task_work requires taking uring_lock; for submit + wait we
>>> can try to run them right after submit, while we still hold the lock, and
>>> save one lock/unlock pair. The optimisation was implemented in the first
>>> local tw patches but got dropped for simplicity.
>>>
>>> Suggested-by: Dylan Yudaken <[email protected]>
>>> Signed-off-by: Pavel Begunkov <[email protected]>
>>> ---
>>>  io_uring/io_uring.c | 12 ++++++++++--
>>>  io_uring/io_uring.h |  7 +++++++
>>>  2 files changed, 17 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>> index 355fc1f3083d..b092473eca1d 100644
>>> --- a/io_uring/io_uring.c
>>> +++ b/io_uring/io_uring.c
>>> @@ -3224,8 +3224,16 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>>>  			mutex_unlock(&ctx->uring_lock);
>>>  			goto out;
>>>  		}
>>> -		if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
>>> -			goto iopoll_locked;
>>> +		if (flags & IORING_ENTER_GETEVENTS) {
>>> +			if (ctx->syscall_iopoll)
>>> +				goto iopoll_locked;
>>> +			/*
>>> +			 * Ignore errors, we'll soon call io_cqring_wait() and
>>> +			 * it should handle ownership problems if any.
>>> +			 */
>>> +			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
>>> +				(void)io_run_local_work_locked(ctx);
>>> +		}
>>>  		mutex_unlock(&ctx->uring_lock);
>>>  	}
>>> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
>>> index e733d31f31d2..8504bc1f3839 100644
>>> --- a/io_uring/io_uring.h
>>> +++ b/io_uring/io_uring.h
>>> @@ -275,6 +275,13 @@ static inline int io_run_task_work_ctx(struct io_ring_ctx *ctx)
>>>  	return ret;
>>>  }
>>>
>>> +static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
>>> +{
>>> +	if (llist_empty(&ctx->work_llist))
>>> +		return 0;
>>> +	return __io_run_local_work(ctx, true);
>>> +}
>>
>> Do you have pending patches that also use this? If not, maybe we
>> should just keep it in io_uring.c? If you do, then this looks fine
>> to me rather than needing to shuffle it later.
>
> No, I don't. I'd argue it's better as a helper because at least it
> hides the always-confusing bool argument, and we'd also need to replace
> a similar one in io_iopoll_check(). And we can stick __must_hold there
> for even more clarity. But ultimately I don't care much.
I really don't feel that strongly about it either; let's just keep
it the way it is.
--
Jens Axboe
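
For reference, here is roughly what the helper from the patch would look
like with the __must_hold annotation Pavel mentions; this is a sketch for
illustration only, not the code that was merged:

static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	/* nothing queued, avoid the call entirely */
	if (llist_empty(&ctx->work_llist))
		return 0;
	/* true == caller already holds uring_lock */
	return __io_run_local_work(ctx, true);
}

__must_hold() is a sparse annotation (defined in
include/linux/compiler_types.h): it documents that the caller must already
hold ctx->uring_lock and lets static analysis flag call sites that don't,
which addresses the "confusing bool argument" concern raised above.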