From: Jens Axboe <[email protected]>
To: Peter Zijlstra <[email protected]>
Cc: "Carter Li 李通洲" <[email protected]>,
"Pavel Begunkov" <[email protected]>,
io-uring <[email protected]>
Subject: Re: [ISSUE] The time cost of IOSQE_IO_LINK
Date: Fri, 14 Feb 2020 08:47:56 -0700
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 2/14/20 8:32 AM, Peter Zijlstra wrote:
> On Thu, Feb 13, 2020 at 10:03:54PM -0700, Jens Axboe wrote:
>
>> CC'ing peterz for some cluebat knowledge. Peter, is there a nice way to
>> currently do something like this? Only thing I'm currently aware of is
>> the preempt in/out notifiers, but they don't quite provide what I need,
>> since I need to pass some data (a request) as well.
>
> Whee, nothing quite like this around I think.
Probably not ;-)
>> The full detail on what I'm trying here is:
>>
>> io_uring can have linked requests. One obvious use case for that is to
>> queue a POLLIN on a socket, and then link a read/recv to that. When the
>> poll completes, we want to run the read/recv. io_uring hooks into the
>> waitqueue wakeup handler to finish the poll request, and since we're
>> deep in waitqueue wakeup code, it queues the linked read/recv for
>> execution via an async thread. This is not optimal, obviously, as it
>> relies on a switch to a new thread to perform this read. This hack
>> queues a backlog to the task itself, and runs it when it's scheduled in.
>> Probably want to do the same for sched out as well, currently I just
>> hack that in the io_uring wait part...
>
> I'll definitely need to think more about this, but a few comments on the
> below.
>
>> +static void __io_uring_task_handler(struct list_head *list)
>> +{
>> +	struct io_kiocb *req;
>> +
>> +	while (!list_empty(list)) {
>> +		req = list_first_entry(list, struct io_kiocb, list);
>> +		list_del(&req->list);
>> +
>> +		__io_queue_sqe(req, NULL);
>> +	}
>> +}
>> +
>> +void io_uring_task_handler(struct task_struct *tsk)
>> +{
>> +	LIST_HEAD(list);
>> +
>> +	raw_spin_lock_irq(&tsk->uring_lock);
>> +	if (!list_empty(&tsk->uring_work))
>> +		list_splice_init(&tsk->uring_work, &list);
>> +	raw_spin_unlock_irq(&tsk->uring_lock);
>> +
>> +	__io_uring_task_handler(&list);
>> +}
>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index fc1dfc007604..b60f081cac17 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2717,6 +2717,11 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
>>  	INIT_HLIST_HEAD(&p->preempt_notifiers);
>>  #endif
>>
>> +#ifdef CONFIG_IO_URING
>> +	INIT_LIST_HEAD(&p->uring_work);
>> +	raw_spin_lock_init(&p->uring_lock);
>> +#endif
>> +
>>  #ifdef CONFIG_COMPACTION
>>  	p->capture_control = NULL;
>>  #endif
>> @@ -3069,6 +3074,20 @@ fire_sched_out_preempt_notifiers(struct task_struct *curr,
>>
>>  #endif /* CONFIG_PREEMPT_NOTIFIERS */
>>
>> +#ifdef CONFIG_IO_URING
>> +extern void io_uring_task_handler(struct task_struct *tsk);
>> +
>> +static inline void io_uring_handler(struct task_struct *tsk)
>> +{
>> +	if (!list_empty(&tsk->uring_work))
>> +		io_uring_task_handler(tsk);
>> +}
>> +#else /* !CONFIG_IO_URING */
>> +static inline void io_uring_handler(struct task_struct *tsk)
>> +{
>> +}
>> +#endif
>> +
>>  static inline void prepare_task(struct task_struct *next)
>>  {
>>  #ifdef CONFIG_SMP
>> @@ -3322,6 +3341,8 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
>>  	balance_callback(rq);
>>  	preempt_enable();
>>
>> +	io_uring_handler(current);
>> +
>>  	if (current->set_child_tid)
>>  		put_user(task_pid_vnr(current), current->set_child_tid);
>>
>
> I suspect you meant to put that in finish_task_switch() which is the
> tail end of every schedule(), schedule_tail() is the tail end of
> clone().
>
> Or maybe you meant to put it in (and rename) sched_update_worker() which
> is after every schedule() but in a preemptible context -- much saner
> since you don't want to go add an unbounded amount of work in a
> non-preemptible context.
>
> At which point you already have your callback: io_wq_worker_running(),
> or is this for any random task?
Let me try and clarify - this isn't for the worker tasks, it's for any
task that is using io_uring. In fact, it's specifically not for the
worker threads, just the task itself.
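For context, the userspace pattern driving this is just a poll linked to
a recv, roughly like this (liburing-ish sketch, the ring/fd/buffer names
are made up):

	struct io_uring_sqe *sqe;

	/* first SQE: wait for the socket to become readable */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_poll_add(sqe, sock_fd, POLLIN);
	sqe->flags |= IOSQE_IO_LINK;

	/* second SQE: linked recv, only runs once the poll completes */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_recv(sqe, sock_fd, buf, sizeof(buf), 0);

	io_uring_submit(&ring);

When the POLLIN fires we're deep in the waitqueue wakeup handler, which
is where the linked recv currently gets punted to an async thread.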
I basically want the handler to be called when:
1) The task is scheduled in. The poll will complete and stuff some items
on that task list, and I want the task to process them as it wakes up.
2) The task is going to sleep, and we don't want to leave entries around
while the task is sleeping.
3) I need it to be called from "normal" context, with ints enabled,
preempt enabled, etc.
sched_update_worker() (with a rename) looks ideal for #1, and the
context is sane for me. Just need a good spot to put the hook call for
schedule out. I think this:
	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
		preempt_disable();
		if (tsk->flags & PF_WQ_WORKER)
			wq_worker_sleeping(tsk);
		else
			io_wq_worker_sleeping(tsk);
		preempt_enable_no_resched();
	}
just needs to go into another helper, and then I can call the io_uring
hook there, outside of the preempt-disabled section.
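Roughly like this (completely untested, and the helper name is just a
placeholder):

	static void sched_submit_worker_sleeping(struct task_struct *tsk)
	{
		if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
			preempt_disable();
			if (tsk->flags & PF_WQ_WORKER)
				wq_worker_sleeping(tsk);
			else
				io_wq_worker_sleeping(tsk);
			preempt_enable_no_resched();
		}
	}

	static inline void sched_submit_work(struct task_struct *tsk)
	{
		if (!tsk->state)
			return;

		/*
		 * Flush the io_uring backlog while we're still in normal,
		 * preemptible context with interrupts enabled.
		 */
		io_uring_handler(tsk);

		sched_submit_worker_sleeping(tsk);

		if (tsk_is_pi_blocked(tsk))
			return;

		if (blk_needs_flush_plug(tsk))
			blk_schedule_flush_plug(tsk);
	}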
I'm sure there are daemons lurking here, but I'll test and see how it
goes...
--
Jens Axboe