From: Jens Axboe <[email protected]>
To: Pavel Begunkov <[email protected]>, [email protected]
Subject: Re: [PATCH RFC v2] io_uring: limit inflight IO
Date: Sat, 9 Nov 2019 07:16:14 -0700 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 11/9/19 5:28 AM, Pavel Begunkov wrote:
> On 11/9/2019 2:01 PM, Pavel Begunkov wrote:
>> On 09/11/2019 00:10, Jens Axboe wrote:
>>> Here's a modified version for discussion. Instead of sizing this on the
>>> specific ring, just size it based on the max allowable CQ ring size. I
>>> think this should be safer, and won't risk breaking existing use cases
>>> out there.
>>>
>>> The reasoning here is that we already limit the ring sizes globally;
>>> they are under ulimit memlock protection. With this on top, we have
>>> some sort of safeguard for the system as a whole as well, whereas
>>> before we had none. Even a small ring size can keep queuing IO.
>>>
>>> Let me know what you guys think...
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 29ea1106132d..0d8c3b1612af 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -737,6 +737,25 @@ static struct io_kiocb *io_get_fallback_req(struct io_ring_ctx *ctx)
>>> return NULL;
>>> }
>>>
>>> +static bool io_req_over_limit(struct io_ring_ctx *ctx)
>>> +{
>>> + unsigned inflight;
>>> +
>>> + /*
>>> + * This doesn't need to be super precise, so only check every once
>>> + * in a while.
>>> + */
>>> + if (ctx->cached_sq_head & ctx->sq_mask)
>>> + return false;
>>> +
>>> + /*
>>> + * Use 2x the max CQ ring size
>>> + */
>>> + inflight = ctx->cached_sq_head -
>>> + (ctx->cached_cq_tail + atomic_read(&ctx->cached_cq_overflow));
>>> + return inflight >= 2 * IORING_MAX_CQ_ENTRIES;
>>> +}
>>
>> ctx->cached_cq_tail is protected by ctx->completion_lock and can be
>> changed asynchronously. That's an unsynchronised access, so it
>> formally (probably) breaks the memory model.
>>
>> A false value shouldn't be a problem here, but anyway.
>>
>
> Took a glance, and it seems access to cached_cq_tail is already
> unsynchronised in other places too. Am I missing something?
It doesn't really matter for cases that don't need a stable value,
like this one right here. It's an integer, so it's not like we'll
ever see a fractured value, at most it'll be a bit stale.
--
Jens Axboe
Thread overview: 4 messages
2019-11-08 21:10 [PATCH RFC v2] io_uring: limit inflight IO Jens Axboe
2019-11-09 11:01 ` Pavel Begunkov
2019-11-09 12:28 ` Pavel Begunkov
2019-11-09 14:16 ` Jens Axboe [this message]