public inbox for [email protected]
From: Pavel Begunkov <[email protected]>
To: Christian Dietrich <[email protected]>,
	io-uring <[email protected]>
Cc: Horst Schirmeier <[email protected]>,
	"Franz-B. Tuneke" <[email protected]>
Subject: Re: [RFC] Programming model for io_uring + eBPF
Date: Sat, 1 May 2021 10:49:48 +0100	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 4/29/21 2:27 PM, Christian Dietrich wrote:
> Pavel Begunkov <[email protected]> [23. April 2021]:
> 
>> Yeah, absolutely. I don't see much profit in registering them
>> dynamically, so for now they will need to be loaded and attached
>> in advance. Or it can be done in a more dynamic fashion, doesn't
>> really matter.
>>
>> btw, bpf splits compilation and attach steps, adds some flexibility.
> 
> So, I'm currently working on rebasing your work onto the tag
> 'for-5.13/io_uring-2021-04-27'. If you already have some branch for
> this, just let me know to save the work.

I'll hack up v2 next week and patch some bugs I see, so I'd suggest
waiting a bit. I'll post an example together with it, because I've been
using plain bpf assembly for testing.

>> It should look similar to the userspace side: fill a 64B chunk of
>> memory, where the exact program is specified by an index, the same
>> one that is used during attach/registration.
> 
> When looking at the current implementation, we can only perform the
> attachment once and there is no "append eBPF". While this is probably
> OK for code, for eBPF maps we will need some kind of "append eBPF map".

No need to register maps, a map is passed as an fd. Looks like I have
never posted an example, but you can even compile fds into bpf programs
at runtime.

fd = ...;
bpf_prog[] = { ..., READ_MAP(fd), ...};
compile_bpf(bpf_prog);
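
For illustration, here is a rough sketch of what that can look like
with raw instructions today. It assumes the insn helper macros from the
kernel's tools/include/linux/filter.h and libbpf's bpf_prog_load(); the
program body is an arbitrary example doing a map lookup, and the prog
type is a stand-in, since the io_uring one isn't upstream.

#include <linux/bpf.h>
#include <linux/filter.h>	/* BPF_* insn macros (tools/include) */
#include <bpf/bpf.h>		/* libbpf: bpf_prog_load() */

static int load_prog_with_map(int map_fd)
{
	struct bpf_insn prog[] = {
		/* the map fd is baked into the instruction stream */
		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),	/* key = 0 */
		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),	/* r2 = &key */
		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};

	return bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL",
			     prog, sizeof(prog) / sizeof(prog[0]), NULL);
}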

I was thinking about the opposite: allowing to optionally register them
to save on atomic ops. Not in the next patch iteration, though.

Regarding programs, they can be updated by unregister+register (slow).
They could also be made more dynamic, as with registered files, but I
don't see much profit in doing so; open to use cases, though.

>> and the context fd is just another field in the SQE. On the space --
>> it depends. Some opcodes pass more info than others, and even for
>> those we still have 16 bytes unused. For bpf I don't expect passing
>> much in the SQE, so it should be ok.
> 
> So besides an eBPF program ID, we would also pass an ID for an eBPF
> map in the SQE.

Yes, at least that's how I envision it. But it's a good question what
else we want to pass. E.g. some arbitrary id (u64) that is forwarded to
the program; it can be a pointer to somewhere or just a piece of
current state.
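
As a purely hypothetical sketch (none of these opcode/field
assignments exist, they only illustrate what the spare SQE space could
carry):

sqe->opcode    = IORING_OP_BPF;   /* hypothetical BPF request opcode */
sqe->off       = prog_index;      /* index of the registered program */
sqe->fd        = ctx_map_fd;      /* context eBPF map, passed as an fd */
sqe->user_data = cookie;          /* arbitrary u64 forwarded to the prog */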
 
> One thought that came to my mind: why do we have to register the eBPF
> programs and maps? We could also just pass the FDs for those objects
> in the SQE. As long as there is no other state, it could be the
> userspace's choice to either attach it or pass it every time. For
> other FDs we already support both modes, right?

It doesn't register maps (see above). As for not registering/attaching
programs in advance -- interesting idea, I need to double check it with
the bpf guys.

>>> - My proposed serialization promise
>>
>> It can be an optional feature, but 1) it may become a bottleneck at
>> some point, 2) users use several rings, e.g. per-thread, so they
>> might need to reimplement serialisation anyway.
> 
> If we make it possible to pass some FD to a synchronization object
> (e.g. a semaphore), this might do the trick to support both modes at
> the interface.
> 
>>> - Exposing synchronization primitives to the eBPF program. I don't think
>>>   that we can argue for semaphores in an eBPF program.
>>
>> I remember a discussion about sleepable bpf; we need to look at what
>> has happened with it.
> 
> But surely this would hurt a lot, as we would have to manage not only
> eBPF programs but also eBPF processes. While this is surely possible,
> I don't know if it is really suitable for a high-performance interface
> like io_uring. But I don't know the current state.
> 
>>
>>> With the serialization promise, we at least avoid the need to
>>> synchronize callbacks with callbacks. However, synchronization between
>>> user space and callback is still a problem.
>>
>> Need to look up up-to-date BPF capabilities, but it can also be
>> spinlocks, for both: bpf-userspace sync, and between bpf programs
>> https://lwn.net/ml/netdev/[email protected]/
> 
> Using spinlocks between kernel and userspace just feels wrong, very
> wrong. But it might be an alternate route to synchronization.

Right, probably not the way to go -- it can't be anything other than
try_lock() or blocking -- but it's still interesting for me to look
into in general.
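
For reference, a sketch of the bpf_spin_lock facility from that LWN
thread: a lock embedded in a map value and shared between bpf programs,
which userspace can also take via the BPF_F_LOCK flag to the map
update/lookup syscalls. The map layout and program type here are just
for illustration.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct elem {
	struct bpf_spin_lock lock;
	__u64 counter;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct elem);
} state SEC(".maps");

SEC("socket")
int bump(void *ctx)
{
	__u32 key = 0;
	struct elem *e = bpf_map_lookup_elem(&state, &key);

	if (!e)
		return 0;
	bpf_spin_lock(&e->lock);	/* serialises with other progs */
	e->counter++;
	bpf_spin_unlock(&e->lock);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";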

>> With a bit of work nothing forbids making them userspace visible,
>> that's just the next step of the idea. In the end I want to have no
>> difference between CQs, so everyone can reap from anywhere, and it's
>> up to the user to use/synchronise them properly.
> 
> I like the notion of orthogonality with this route. Perhaps we don't
> need to have user-invisible CQs; it can be enough to address the CQ of
> another uring in my SQE as the sink for the resulting CQE.

Not going to happen, cross references are always a huge headache.

The idea is to add several CQs to a single ring, regardless of BPF,
and for any request of any type to specify sqe->cq_idx, and the CQE
goes there. BPF can then benefit from it, sending its requests to the
right CQ, not necessarily to a single one. It will be the
responsibility of userspace to make use of the CQs correctly, e.g.
allocating a CQ per BPF program and so avoiding any sync, or doing
something more complex like cross-posting to several CQs.
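
A hypothetical userspace-side sketch of that idea (neither the nr_cq
parameter nor the cq_idx field exist in mainline liburing; those names
are made up):

#include <liburing.h>

struct io_uring ring;
struct io_uring_params p = { .nr_cq = 2 };	/* hypothetical */
char buf[4096];

io_uring_queue_init_params(8, &ring, &p);

struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);	/* fd: open file */
sqe->cq_idx = 1;	/* hypothetical field: post the CQE to CQ #1 */
io_uring_submit(&ring);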

 
> A downside of that idea would be that the user has to set up another
> ring with SQ and CQ, but only the CQ is used.
> 
>> [...]
> 
>> The CQ is specified by an index in the SQE, in each SQE. So either as
>> you say, or just specify the index of the main CQ in that previous
>> linked request in the first place.
> 
> From looking at the code: this is not yet the case, right?

Right, not yet

>>> How do I indicate in the first SQE into which CQ the result should
>>> be written?
> 
>> Yes, it adds a bit of complexity, but without it you can only get the
>> last CQE:
>>
>> 1) it's not flexible enough and cuts off different potential scenarios
>>
>> 2) it's not performance efficient -- the overhead of running a bpf
>> request after each I/O request can be too large.
>>
>> 3) it does require mandatory linking if you want to get a result.
>> Without it we can submit a single BPF request and let it run multiple
>> times, e.g. waiting on a CQ, but linking would much limit the options
>>
>> 4) it's bodgy from the implementation perspective
> 
> When taking a step back, this is nearly an io_uring_enter(minwait=N)
> SQE

Think of it as a separate request type (it won't literally be separate,
but still...) waiting on a CQ, similar to io_uring_enter(minwait=N),
except it doesn't block.

> with an attached eBPF callback, right? At that point, we have nearly
> come full circle.

What do you mean?

>>> Are we able to encode all of that into a single SQE that also holds
>>> an eBPF function pointer and (potentially) a pointer to a context map?
>>
>> yes, but it can be just a separate linked request...
> 
> So, let's make a little collection of the (potential) information that
> our poor SQE has to hold. Thereby, FDs should be registrable and
> addressable by an index.
> 
> - FD to eBPF program

yes (index, not fd), but as you mentioned we should check

> - FD to eBPF map

we pass the fd, but don't register it (not by default)

> - FD to synchronization object during the execution

it can be in the context, in theory. We need to check the available
synchronisation means

> - FD to foreign CQ for waiting on N CQEs

It's a generic field in the SQE, so it's not bpf specific. And a BPF
program submitting SQEs is free to specify any CQ index in them.

e.g. pseudo-code:

my_bpf_program(ctx) {
   /* completion of the read goes to CQ 1 */
   sqe = prep_read(ctx, ...);
   sqe->cq_idx = 1;

   /* completion of the recv goes to CQ 42 */
   sqe = prep_recv(ctx, ...);
   sqe->cq_idx = 42;
}

> 
> Those are a lot of references to other objects for which we would have
> to extend the registration interface.

So, it's only bpf programs that it registers.

>> Right. And it should know what it's doing anyway in most cases. All
>> the more complex dispatching / state machines can be implemented
>> pretty well via the context.
> 
> You convinced me that an eBPF map as a context is the more canonical
> way of doing it while achieving the same degree of flexibility.
> 
>> I believe there was something for accessing userspace memory; we need
>> to look it up.
> 
> Either way, from a researcher's perspective, we can just allow it and
> see how it performs.

-- 
Pavel Begunkov


Thread overview: 17+ messages
     [not found] <[email protected]>
     [not found] ` <[email protected]>
     [not found]   ` <[email protected]>
2021-04-16 15:49     ` [RFC] Programming model for io_uring + eBPF Pavel Begunkov
2021-04-20 16:35       ` Christian Dietrich
2021-04-23 15:34         ` Pavel Begunkov
2021-04-29 13:27           ` Christian Dietrich
2021-05-01  9:49             ` Pavel Begunkov [this message]
2021-05-05 12:57               ` Christian Dietrich
2021-05-05 16:13                 ` Christian Dietrich
2021-05-07 15:13                   ` Pavel Begunkov
2021-05-12 11:20                     ` Christian Dietrich
2021-05-18 14:39                       ` Pavel Begunkov
2021-05-19 16:55                         ` Christian Dietrich
2021-05-20 11:14                           ` Pavel Begunkov
2021-05-20 15:01                             ` Christian Dietrich
2021-05-21 10:27                               ` Pavel Begunkov
2021-05-27 11:12                                 ` Christian Dietrich
2021-06-02 10:47                                   ` Pavel Begunkov
2021-05-07 15:10                 ` Pavel Begunkov
