From: Jann Horn <[email protected]>
To: Jens Axboe <[email protected]>
Cc: io-uring <[email protected]>,
Glauber Costa <[email protected]>,
Peter Zijlstra <[email protected]>,
Pavel Begunkov <[email protected]>
Subject: Re: [PATCH 7/9] io_uring: add per-task callback handler
Date: Thu, 20 Feb 2020 23:02:16 +0100
Message-ID: <CAG48ez1sQi7ntGnLxyo9X_642-wr55+Kn662XyyEYGLyi0iLwQ@mail.gmail.com>
In-Reply-To: <[email protected]>
On Thu, Feb 20, 2020 at 9:32 PM Jens Axboe <[email protected]> wrote:
>
> For poll requests, it's not uncommon to link a read (or write) after
> the poll to execute immediately after the file is marked as ready.
> Since the poll completion is called inside the waitqueue wake up handler,
> we have to punt that linked request to async context. This slows down
> the processing, and actually means it's faster to not use a link for this
> use case.
>
> We also run into problems if the completion_lock is contended, as we're
> doing a different lock ordering than the issue side is. Hence we have
> to do trylock for completion, and if that fails, go async. Poll removal
> needs to go async as well, for the same reason.
>
> eventfd notification needs special casing as well, to avoid
> stack-blowing recursion or deadlocks.
>
> These are all deficiencies that were inherited from the aio poll
> implementation, but I think we can do better. When a poll completes,
> simply queue it up in the task poll list. When the task completes the
> list, we can run dependent links inline as well. This means we never
> have to go async, and we can remove a bunch of code associated with
> that, and optimizations to try and make that run faster. The diffstat
> speaks for itself.
[...]
> -static void io_poll_trigger_evfd(struct io_wq_work **workptr)
> +static void io_poll_task_func(struct callback_head *cb)
> {
> - struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
> + struct io_kiocb *req = container_of(cb, struct io_kiocb, sched_work);
> + struct io_kiocb *nxt = NULL;
>
[...]
> + io_poll_task_handler(req, &nxt);
> + if (nxt)
> + __io_queue_sqe(nxt, NULL);
This can now get here from anywhere that calls schedule(), right?
Which means this might almost double the required kernel stack size,
if one codepath calls schedule() while near the bottom of the stack
and another codepath goes from here through the VFS and again uses a
big amount of stack space? This is a somewhat ugly suggestion, but I
wonder whether it'd make sense to check whether we've consumed more
than 25% of the stack, or something like that, and if so, directly
punt the request.
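Roughly something like this, maybe (completely untested, helper name
made up, assuming an arch that provides current_stack_pointer and a
downward-growing stack; needs <linux/sched/task_stack.h> for
task_stack_page()):

static bool io_stack_mostly_used(void)
{
	unsigned long sp = current_stack_pointer;
	unsigned long base = (unsigned long)task_stack_page(current);

	/* true once more than a quarter of THREAD_SIZE is in use */
	return base + THREAD_SIZE - sp > THREAD_SIZE / 4;
}

and then in io_poll_task_func(), punt instead of issuing inline when
the stack is already deep (io_queue_async_work(), or whatever the
right punt helper is here):

	if (nxt) {
		if (io_stack_mostly_used())
			io_queue_async_work(nxt);
		else
			__io_queue_sqe(nxt, NULL);
	}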
Also, can we recursively hit this point? Even if __io_queue_sqe()
doesn't *want* to block, the code it calls into might still block on a
mutex or something like that, at which point the mutex code would call
into schedule(), which would then again hit sched_out_update() and get
here, right? As far as I can tell, this could cause unbounded
recursion.
(On modern kernels with CONFIG_VMAP_STACK=y, running out of stack
space on a task stack is "just" a plain kernel oops instead of nasty
memory corruption, but we still should really try to avoid it.)
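If the recursion is real, maybe a per-task "already inside a
sched_work callback" flag could cut it off; something like this,
where ->in_sched_work is an invented task_struct field, not something
that exists today:

static void io_poll_task_func(struct callback_head *cb)
{
	struct io_kiocb *req = container_of(cb, struct io_kiocb, sched_work);
	struct io_kiocb *nxt = NULL;

	io_poll_task_handler(req, &nxt);
	if (!nxt)
		return;

	/*
	 * If a sched_work callback further up the stack is already
	 * issuing inline, punt instead of recursing through
	 * __io_queue_sqe() -> schedule() -> sched_work again.
	 */
	if (current->in_sched_work) {
		io_queue_async_work(nxt);
		return;
	}
	current->in_sched_work = 1;
	__io_queue_sqe(nxt, NULL);
	current->in_sched_work = 0;
}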
> }
[...]
> @@ -3646,46 +3596,11 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>
> list_del_init(&poll->wait.entry);
>
[...]
> + tsk = req->task;
> + req->result = mask;
> + init_task_work(&req->sched_work, io_poll_task_func);
> + sched_work_add(tsk, &req->sched_work);
Doesn't this have to check the return value?
> + wake_up_process(tsk);
> return 1;
> }
>
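For the return value question above, I would have expected something
roughly like this (assuming sched_work_add() fails the same way
task_work_add() does when the target task is already exiting; the
fallback below is just a guess):

	init_task_work(&req->sched_work, io_poll_task_func);
	if (sched_work_add(tsk, &req->sched_work)) {
		/*
		 * The task is exiting and will never run sched_work;
		 * fall back to async context (or complete the poll
		 * request right here) so the request isn't leaked.
		 */
		io_queue_async_work(req);
		return 1;
	}
	wake_up_process(tsk);
	return 1;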
> @@ -3733,6 +3648,9 @@ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
>
> events = READ_ONCE(sqe->poll_events);
> poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;
> +
> + /* task will wait for requests on exit, don't need a ref */
> + req->task = current;
Can we get here in SQPOLL mode?
> return 0;
> }
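To make the SQPOLL question concrete: with IORING_SETUP_SQPOLL,
"current" at prep time is the sqpoll kernel thread rather than the
task that owns the ring, so if that matters for the sched_work
handling, maybe something like this is needed (just a sketch, the
-EINVAL is a placeholder, I don't know what the right fallback is):

	/*
	 * Sketch only: with SQPOLL, "current" is ctx's sq thread, not
	 * the userspace submitter, so req->task would point at the
	 * kernel thread. If that's not intended, reject or punt here.
	 */
	if (req->ctx->flags & IORING_SETUP_SQPOLL)
		return -EINVAL;
	req->task = current;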
Thread overview: 53+ messages
2020-02-20 20:31 [PATCHSET 0/9] io_uring: use polled async retry Jens Axboe
2020-02-20 20:31 ` [PATCH 1/9] io_uring: consider any io_read/write -EAGAIN as final Jens Axboe
2020-02-20 20:31 ` [PATCH 2/9] io_uring: io_accept() should hold on to submit reference on retry Jens Axboe
2020-02-20 20:31 ` [PATCH 3/9] sched: move io-wq/workqueue worker sched in/out into helpers Jens Axboe
2020-02-20 20:31 ` [PATCH 4/9] task_work_run: don't take ->pi_lock unconditionally Jens Axboe
2020-02-20 20:31 ` [PATCH 5/9] kernel: abstract out task work helpers Jens Axboe
2020-02-20 21:07 ` Peter Zijlstra
2020-02-20 21:08 ` Jens Axboe
2020-02-20 20:31 ` [PATCH 6/9] sched: add a sched_work list Jens Axboe
2020-02-20 21:17 ` Peter Zijlstra
2020-02-20 21:53 ` Jens Axboe
2020-02-20 22:02 ` Jens Axboe
2020-02-20 20:31 ` [PATCH 7/9] io_uring: add per-task callback handler Jens Axboe
2020-02-20 22:02 ` Jann Horn [this message]
2020-02-20 22:14 ` Jens Axboe
2020-02-20 22:18 ` Jens Axboe
2020-02-20 22:25 ` Jann Horn
2020-02-20 22:23 ` Jens Axboe
2020-02-20 22:38 ` Jann Horn
2020-02-20 22:56 ` Jens Axboe
2020-02-20 22:58 ` Jann Horn
2020-02-20 23:02 ` Jens Axboe
2020-02-20 22:23 ` Jann Horn
2020-02-20 23:00 ` Jens Axboe
2020-02-20 23:12 ` Jann Horn
2020-02-20 23:22 ` Jens Axboe
2020-02-21 1:29 ` Jann Horn
2020-02-21 17:32 ` Jens Axboe
2020-02-21 19:24 ` Jann Horn
2020-02-21 20:18 ` Jens Axboe
2020-02-20 22:56 ` Jann Horn
2020-02-21 10:47 ` Peter Zijlstra
2020-02-21 14:49 ` Jens Axboe
2020-02-21 15:02 ` Jann Horn
2020-02-21 16:12 ` Peter Zijlstra
2020-02-21 16:23 ` Peter Zijlstra
2020-02-21 20:13 ` Jens Axboe
2020-02-21 13:51 ` Pavel Begunkov
2020-02-21 14:50 ` Jens Axboe
2020-02-21 18:30 ` Pavel Begunkov
2020-02-21 19:10 ` Jens Axboe
2020-02-21 19:22 ` Pavel Begunkov
2020-02-23 6:00 ` Jens Axboe
2020-02-23 6:26 ` Jens Axboe
2020-02-23 11:02 ` Pavel Begunkov
2020-02-23 14:49 ` Jens Axboe
2020-02-23 14:58 ` Jens Axboe
2020-02-23 15:07 ` Jens Axboe
2020-02-23 18:04 ` Pavel Begunkov
2020-02-23 18:06 ` Jens Axboe
2020-02-23 17:55 ` Pavel Begunkov
2020-02-20 20:31 ` [PATCH 8/9] io_uring: mark requests that we can do poll async in io_op_defs Jens Axboe
2020-02-20 20:31 ` [PATCH 9/9] io_uring: use poll driven retry for files that support it Jens Axboe