From: Kumar Kartikeya Dwivedi <[email protected]>
To: Alexei Starovoitov <[email protected]>
Cc: [email protected], Jens Axboe <[email protected]>,
Pavel Begunkov <[email protected]>,
[email protected], Alexei Starovoitov <[email protected]>,
Daniel Borkmann <[email protected]>,
Andrii Nakryiko <[email protected]>,
Pavel Emelyanov <[email protected]>,
Alexander Mihalicyn <[email protected]>,
Andrei Vagin <[email protected]>,
[email protected], [email protected]
Subject: Re: [PATCH bpf-next v1 1/8] io_uring: Implement eBPF iterator for registered buffers
Date: Fri, 19 Nov 2021 09:45:23 +0530 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On Fri, Nov 19, 2021 at 03:32:26AM IST, Alexei Starovoitov wrote:
> On Tue, Nov 16, 2021 at 11:12:30AM +0530, Kumar Kartikeya Dwivedi wrote:
> > This change adds eBPF iterator for buffers registered in io_uring ctx.
> > It gives access to the ctx, the index of the registered buffer, and a
> > pointer to the io_uring_ubuf itself. This allows the iterator to save
> > info related to buffers added to an io_uring instance, that isn't easy
> > to export using the fdinfo interface (like exact struct page composing
> > the registered buffer).
> >
> > The primary use case this enables is checkpoint/restore support.
> >
> > Note that we need to use mutex_trylock when the file is read from, in
> > seq_start functions, as the order of lock taken is opposite of what it
> > would be when io_uring operation reads the same file. We take
> > seq_file->lock, then ctx->uring_lock, while io_uring would first take
> > ctx->uring_lock and then seq_file->lock for the same ctx.
> >
> > This can lead to a deadlock scenario described below:
> >
> >        CPU 0                           CPU 1
> >
> >  vfs_read
> >  mutex_lock(&seq_file->lock)     io_read
> >                                  mutex_lock(&ctx->uring_lock)
> >  mutex_lock(&ctx->uring_lock)    # switched to mutex_trylock
> >                                  mutex_lock(&seq_file->lock)
> >
> > The trylock also protects the case where io_uring tries to read from
> > iterator attached to itself (same ctx), where the order of locks would
> > be:
> >  io_uring_enter
> >    mutex_lock(&ctx->uring_lock)      <-----------.
> >    io_read                                       |
> >      seq_read                                    |
> >        mutex_lock(&seq_file->lock)               |
> >        mutex_lock(&ctx->uring_lock)   # deadlock-`
> >
> > In both these cases (recursive read and contended uring_lock), -EDEADLK
> > is returned to userspace.
> >
> > In the future, this iterator will be extended to directly support
> > iteration of bvec Flexible Array Member, so that when there is no
> > corresponding VMA that maps to the registered buffer (e.g. if VMA is
> > destroyed after pinning pages), we are able to reconstruct the
> > registration on restore by dumping the page contents and then replaying
> > them into a temporary mapping used for registration later. All this is
> > out of scope for the current series however, but builds upon this
> > iterator.
>
> From BPF infra perspective these new iterators fit very well and
> I don't see any issues maintaining this interface while kernel keeps
> changing, but this commit log and shallowness of the selftests
> makes me question feasibility of this approach in particular with io_uring.
> Is it even possible to scan all internal bits of io_uring and reconstruct
> it later? The bpf iter is only the read part. Don't you need the write part
> for CRIU ? Even for reads only... io_uring has complex inner state.
Yes, the inner state is complex and often entangled with other task
attributes, like credentials, mm, file descriptors, other io_uring fds (wq_fd,
etc.), but for now these iterators are a good starting point for implementing
the majority of the missing features in CRIU userspace. These iterators
(including the task* ones) and procfs allow us to collect enough state to
correlate various resources and form relationships (e.g. which fd or buffer of
which task(s) is registered, which io_uring was used for wq_fd registration,
which eventfd was registered, etc.). Thanks to access to the io_ring_ctx, and
the iterator being a tracing prog, we can do the usual pointer access to read
some of that data.
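As a sketch of how such an iterator program could consume this data (the
section name and iterator-context fields follow this series' proposed, unmerged
interface, and the ubuf fields assume the 5.15-era struct io_mapped_ubuf
layout, so treat all of them as assumptions):

```c
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Assumed iterator context from this series: the attached io_ring_ctx,
 * the registered buffer's index, and the io_mapped_ubuf being visited. */
SEC("iter/io_uring_buf")
int dump_registered_buffers(struct bpf_iter__io_uring_buf *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct io_mapped_ubuf *ubuf = ctx->ubuf;

	/* ubuf is NULL on the final call that ends the iteration. */
	if (!ubuf)
		return 0;

	/* Dump start address and length so userspace (CRIU) can
	 * correlate the registration with the task's VMAs. */
	BPF_SEQ_PRINTF(seq, "index=%lu addr=0x%llx len=%llu\n",
		       ctx->index, ubuf->ubuf, ubuf->ubuf_end - ubuf->ubuf);
	return 0;
}
```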
> Like bpf itself which cannot be realistically CRIU-ed.
> I don't think we can merge this in pieces. We need to wait until there is
> full working CRIU framework that uses these new iterators.
I don't intend to add a 'write' framework. The usual process with CRIU is to
gather enough state to reconstruct the kernel resource by repeating steps
similar to those it might have performed during its lifetime, i.e.
approximating a trace of execution leading to the same task state, and then
replaying that on restore.
While a 'write' framework with first-class kernel support for
checkpoint/restore would make CRIU's life a lot simpler, there has usually
been a lot of pushback against interfaces like that, hence the approach has
been to add features that can extract the relevant information out of the
kernel, and do the rest of the work in userspace.
E.g. if we find that fd 1 and 2 are registered in the io_uring, and a buffer
at 0xabcd with len 128, then at restore time we first restore fd 1 and 2
(which can be anything), and after that restore is complete, continue
restoration of the io_uring by executing the file registration step for both
[0]. Depending on whether the mapping for 0xabcd existed in the task mm, we
restore the buffer once the mm of the task has been restored, or otherwise map
a temporary buffer, replay the data into it, register it, and destroy the
mapping. This would be similar to what might have happened in the task itself.
The correct order of events to do the restore is managed by CRIU itself.
It doesn't have to be exactly the same steps, just enough for the task to not
notice a difference or break its assumptions, leading to the same end result
of task and resource state.
Even the task of determining which fdtable fds 1 and 2 belong to is
non-trivial and inefficient right now. You can look at [0] for a similar case
that was solved for epoll; it works, but we can do better (BPF didn't exist at
that time, so it was the best we could do back then).
Also, this work is part of GSoC. There is already code waiting for this to
fill in the missing pieces [1]. If you want me to add a sample/selftest that
demonstrates/tests how this can be used to reconstruct a task's io_uring, I
can certainly do that. We've already spent a few months contemplating a few
approaches, and this turned out to be the best/most powerful one. At one point
I had to scrap some of my earlier patches completely because they couldn't
work with descriptorless io_uring. Iterators seem like the best solution so
far that can adapt gracefully to feature additions in something under
development as heavy as io_uring.
[0]: https://github.com/checkpoint-restore/criu/commit/cfa3f405d522334076fc4d687bd077bee3186ccf#diff-d2cfa5a05213c854d539de003a23a286311ae81431026d3d50b0068c0cb5a852
[1]: https://github.com/checkpoint-restore/criu/pull/1597
--
Kartikeya
Thread overview: 32+ messages
2021-11-16 5:42 [PATCH bpf-next v1 0/8] Introduce BPF iterators for io_uring and epoll Kumar Kartikeya Dwivedi
2021-11-16 5:42 ` [PATCH bpf-next v1 1/8] io_uring: Implement eBPF iterator for registered buffers Kumar Kartikeya Dwivedi
2021-11-18 17:21 ` Yonghong Song
2021-11-18 18:28 ` Kumar Kartikeya Dwivedi
2021-11-18 19:13 ` Yonghong Song
2021-11-18 22:02 ` Alexei Starovoitov
2021-11-19 4:15 ` Kumar Kartikeya Dwivedi [this message]
2021-11-19 4:44 ` Kumar Kartikeya Dwivedi
2021-11-19 4:56 ` Alexei Starovoitov
2021-11-19 5:16 ` Kumar Kartikeya Dwivedi
2021-11-19 5:24 ` Alexei Starovoitov
2021-11-19 6:12 ` Kumar Kartikeya Dwivedi
2021-12-03 15:52 ` Pavel Begunkov
2021-12-03 23:16 ` Kumar Kartikeya Dwivedi
2021-11-16 5:42 ` [PATCH bpf-next v1 2/8] bpf: Add bpf_page_to_pfn helper Kumar Kartikeya Dwivedi
2021-11-17 12:35 ` kernel test robot
2021-11-17 13:39 ` kernel test robot
2021-11-18 17:27 ` Yonghong Song
2021-11-18 18:30 ` Kumar Kartikeya Dwivedi
2021-11-18 19:18 ` Yonghong Song
2021-11-18 19:22 ` Kumar Kartikeya Dwivedi
2021-11-16 5:42 ` [PATCH bpf-next v1 3/8] io_uring: Implement eBPF iterator for registered files Kumar Kartikeya Dwivedi
2021-11-18 17:33 ` Yonghong Song
2021-11-16 5:42 ` [PATCH bpf-next v1 4/8] epoll: Implement eBPF iterator for registered items Kumar Kartikeya Dwivedi
2021-11-18 17:50 ` Yonghong Song
2021-11-16 5:42 ` [PATCH bpf-next v1 5/8] selftests/bpf: Add test for io_uring BPF iterators Kumar Kartikeya Dwivedi
2021-11-18 17:54 ` Yonghong Song
2021-11-18 18:33 ` Kumar Kartikeya Dwivedi
2021-11-18 19:21 ` Yonghong Song
2021-11-16 5:42 ` [PATCH bpf-next v1 6/8] selftests/bpf: Add test for epoll BPF iterator Kumar Kartikeya Dwivedi
2021-11-16 5:42 ` [PATCH bpf-next v1 7/8] selftests/bpf: Test partial reads for io_uring, epoll iterators Kumar Kartikeya Dwivedi
2021-11-16 5:42 ` [PATCH bpf-next v1 8/8] selftests/bpf: Fix btf_dump test for bpf_iter_link_info Kumar Kartikeya Dwivedi