From: Jens Axboe <[email protected]>
To: Gabriel Krisman Bertazi <[email protected]>
Cc: [email protected], [email protected]
Subject: Re: [PATCH 5/6] io_uring: add support for futex wake and wait
Date: Mon, 12 Jun 2023 19:09:41 -0600 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 6/12/23 5:00 PM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <[email protected]> writes:
>
>> On 6/12/23 10:06 AM, Gabriel Krisman Bertazi wrote:
>>> Jens Axboe <[email protected]> writes:
>>>
>>>> Add support for FUTEX_WAKE/WAIT primitives.
>>>
>>> This is great. I was so sure io_uring had this support already for some
>>> reason. I might have dreamed it.
>>
>> I think you did :-)
>
> Premonitory! Still, there should be better things to dream about than
> kernel code.
I dunno, if it's io_uring related I'm a supporter.
>>> Even with an asynchronous model, it might make sense to halt execution
>>> of further queued operations until futex completes. I think
>>> IOSQE_IO_DRAIN is a barrier only against the submission part, so it
> wouldn't help. Is there a way to ensure this ordering?
>>
>> You'd use link for that - link whatever depends on the wake to the futex
>> wait. Or just queue it up once you reap the wait completion, when that
>> is posted because we got woken.
>
> The challenge of linked requests, in my opinion, is that once a link
> chain starts, everything needs to be linked together, and a single error
> fails everything, which is ok when operations are related, but
> not so much when doing IO to different files from the same ring.
Not quite sure if you're misunderstanding links, or just have a
different use case in mind. You can certainly have several independent
chains of links.
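For the case you mention, it'd look roughly like the below - an untested
sketch, assuming a liburing prep helper along the lines of
io_uring_prep_futex_wait() for this opcode (name and signature assumed,
modeled on the existing prep helpers):

#include <liburing.h>
#include <linux/futex.h>
#include <stdint.h>

static void queue_wait_then_read(struct io_uring *ring, uint32_t *futex,
                                 int fd, void *buf, unsigned int len)
{
        struct io_uring_sqe *sqe;

        /* Futex wait with expected value 0, marked as head of a link chain */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_futex_wait(sqe, futex, 0, FUTEX_BITSET_MATCH_ANY,
                                 FUTEX2_SIZE_U32, 0);
        sqe->flags |= IOSQE_IO_LINK;
        sqe->user_data = 1;

        /* Only started once the wait completes successfully */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, fd, buf, len, 0);
        sqe->user_data = 2;

        io_uring_submit(ring);
}

A second, unrelated chain submitted on the same ring is not affected if
this one fails - ordering only applies within a chain.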
>>>> Cancelations are supported, both from the application point-of-view,
>>>> and to be able to cancel pending waits if the ring exits before
>>>> all events have occurred.
>>>>
>>>> This is just the barebones wait/wake support. Features to be added
>>>> later:
>>>
>>> One item high on my wishlist would be the futexv semantics (wait on any
>>> of a set of futexes). It cannot be implemented by issuing several
>>> FUTEX_WAIT.
>>
>> Yep, I do think that one is interesting enough to consider upfront.
>> Unfortunately the internal implementation of that does not look that
>> great, though I'm sure we can make that work, but it would likely
>> require some futexv refactoring to make it work. I can take a look at
>> it.
>
> No disagreement here. To be fair, the main challenge was making the new
> interface compatible with a futex being waited on/woken through the
> original interface. At some point, we had a really nice design for a
> single object, but we spent two years bikeshedding over the interface
> and ended up merging something pretty similar to the proposal from two
> years prior.
It turned out not to be too bad - here's a poc:
https://git.kernel.dk/cgit/linux/commit/?h=io_uring-futex&id=421b12df4ed0bb25c53afe496370bc2b70b04e15
It needs a bit of splitting and cleaning; notably, I think I need to redo
the futex_q->wake_data bit to make it cleaner for both the current use
case and the async use case. With that, everything can just use
futex_queue(), and the only difference really is that the sync variants
will do timer setup upfront and then sleep at the bottom, whereas the
async part just calls the meat of the function.
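From the application side, a vectored wait would then just be a single
SQE covering the whole set. Purely a sketch, assuming a prep helper like
io_uring_prep_futex_waitv() and assuming the CQE result reports which
futex in the vector got woken:

#include <liburing.h>
#include <linux/futex.h>
#include <errno.h>
#include <stdint.h>

static int wait_any(struct io_uring *ring, uint32_t **futexes, unsigned int nr)
{
        struct futex_waitv fw[FUTEX_WAITV_MAX];
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        unsigned int i;
        int ret;

        if (!nr || nr > FUTEX_WAITV_MAX)
                return -EINVAL;

        for (i = 0; i < nr; i++) {
                fw[i].val = 0;          /* expected value of each futex */
                fw[i].uaddr = (uint64_t)(uintptr_t)futexes[i];
                fw[i].flags = FUTEX2_SIZE_U32;
                fw[i].__reserved = 0;
        }

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_futex_waitv(sqe, fw, nr, 0);
        io_uring_submit(ring);

        /* fw[] must stay valid until the request completes */
        ret = io_uring_wait_cqe(ring, &cqe);
        if (ret < 0)
                return ret;
        ret = cqe->res;                 /* index of the woken futex, or -errno */
        io_uring_cqe_seen(ring, cqe);
        return ret;
}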
>> You could obviously do futexv with this patchset, just posting N futex
>> waits and canceling N-1 when you get woken by one. That's of course
>> not very pretty or nice to use, but design-wise it would totally work,
>> as you don't actually block on these with io_uring.
>
> Yes, but at that point, I guess it'd make more sense to implement the
> same semantics by polling over a set of eventfds or having a single
> futex and doing dispatch in userspace.
Oh yeah, would not recommend the above approach. Just saying that you
COULD do that if you really wanted to, which is not something you could
do with futex before waitv. But kind of moot now that there's at least a
prototype.
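For completeness, the "N waits, cancel N-1" emulation would look something
like the below (again assuming the prep helpers from this series, and
using async cancel keyed on user_data to remove the leftover waits):

#include <liburing.h>
#include <linux/futex.h>
#include <stdint.h>

static uint64_t wait_any_emulated(struct io_uring *ring, uint32_t **futexes,
                                  unsigned int nr)
{
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        uint64_t woken;
        unsigned int i;

        /* One wait per futex, each tagged with a distinct user_data */
        for (i = 0; i < nr; i++) {
                sqe = io_uring_get_sqe(ring);
                io_uring_prep_futex_wait(sqe, futexes[i], 0,
                                         FUTEX_BITSET_MATCH_ANY,
                                         FUTEX2_SIZE_U32, 0);
                sqe->user_data = i + 1;
        }
        io_uring_submit(ring);

        /* First completion is the wake we care about */
        io_uring_wait_cqe(ring, &cqe);
        woken = cqe->user_data;
        io_uring_cqe_seen(ring, cqe);

        /* Cancel the remaining N-1 pending waits by user_data */
        for (i = 0; i < nr; i++) {
                if (i + 1 == woken)
                        continue;
                sqe = io_uring_get_sqe(ring);
                io_uring_prep_cancel64(sqe, i + 1, 0);
                sqe->user_data = 0;     /* tag so the caller can skip these */
        }
        io_uring_submit(ring);
        return woken - 1;               /* index of the woken futex */
}

Note the canceled waits still post CQEs with -ECANCELED, so the caller
has to reap those too - another reason the waitv route is nicer.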
--
Jens Axboe
Thread overview:
2023-06-09 18:31 [PATCHSET RFC 0/6] Add io_uring support for futex wait/wake Jens Axboe
2023-06-09 18:31 ` [PATCH 1/6] futex: abstract out futex_op_to_flags() helper Jens Axboe
2023-06-09 18:31 ` [PATCH 2/6] futex: factor out the futex wake handling Jens Axboe
2023-06-09 18:31 ` [PATCH 3/6] futex: assign default futex_q->wait_data at insertion time Jens Axboe
2023-06-09 18:31 ` [PATCH 4/6] futex: add futex wait variant that takes a futex_q directly Jens Axboe
2023-06-09 18:31 ` [PATCH 5/6] io_uring: add support for futex wake and wait Jens Axboe
2023-06-12 16:06 ` Gabriel Krisman Bertazi
2023-06-12 20:37 ` Jens Axboe
2023-06-12 23:00 ` Gabriel Krisman Bertazi
2023-06-13 1:09 ` Jens Axboe [this message]
2023-06-13 2:55 ` io_uring link semantics (was [PATCH 5/6] io_uring: add support for futex wake and wait) Gabriel Krisman Bertazi
2023-06-23 19:04 ` [PATCH 5/6] io_uring: add support for futex wake and wait Andres Freund
2023-06-23 19:07 ` Jens Axboe
2023-06-23 19:34 ` Andres Freund
2023-06-23 19:46 ` Jens Axboe
2023-06-09 18:31 ` [PATCH 6/6] io_uring/futex: enable use of the allocation caches for futex_q Jens Axboe