From: Josef <[email protected]>
To: Pavel Begunkov <[email protected]>
Cc: Jens Axboe <[email protected]>,
Norman Maurer <[email protected]>,
Dmitry Kadashev <[email protected]>,
io-uring <[email protected]>
Subject: Re: "Cannot allocate memory" on ring creation (not RLIMIT_MEMLOCK)
Date: Sun, 20 Dec 2020 16:56:58 +0100 [thread overview]
Message-ID: <CAAss7+oFAS9rs-6Wkz3=FQX4x0TpFY1WiMZpK66MofFgMhTaqw@mail.gmail.com> (raw)
In-Reply-To: <[email protected]>
> I'd really appreciate it if you could try one more. I want to know why
> the final cleanup doesn't cope with it.

Yeah, sure. Which kernel version? This patch doesn't seem to apply to
io_uring-5.11 or io_uring-5.10.
On Sun, 20 Dec 2020 at 15:22, Pavel Begunkov <[email protected]> wrote:
>
> On 20/12/2020 13:00, Pavel Begunkov wrote:
> > On 20/12/2020 07:13, Josef wrote:
> >>> Guys, do you share rings between processes? Explicitly, like sending
> >>> an io_uring fd over a socket, or implicitly, e.g. sharing fd tables
> >>> (threads), or cloning with a copied fd table (and so taking a ref
> >>> to a ring).
> >>
> >> No, in Netty we don't share rings between processes [for reference,
> >> the explicit fd-passing case is sketched in a P.S. at the end of
> >> this mail].
> >>
> >>> In other words, if you kill all your io_uring applications, does it
> >>> go back to normal?
> >>
> >> Not at all; the io-wq worker thread is still running, and I literally
> >> have to restart the VM to get back to normal (as far as I know it's
> >> not possible to kill kernel threads, right?).
> >>
> >>> Josef, can you test the patch below instead? Following Jens' idea,
> >>> it cancels more aggressively when a task is killed or exits. It's
> >>> based on [1] but would probably apply fine to for-next.
> >>
> >> It works! I ran several tests with an eventfd read op with the async
> >> flag (IOSQE_ASYNC) enabled [a sketch of this kind of test follows the
> >> quoted patch below]. Thanks a lot :) you guys are awesome :)
> >
> > Thanks for testing and confirming! Either we forgot something in
> > io_ring_ctx_wait_and_kill() and it just can't cancel some requests,
> > or we have a dependency that prevents release from happening.
> >
> > BTW, apparently that patch causes hangs for unrelated but known
> > reasons, so better not to use it; we'll merge something more stable.
>
> I'd really appreciate it if you could try one more. I want to know why
> the final cleanup doesn't cope with it.
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 941fe9b64fd9..d38fc819648e 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -8614,6 +8614,10 @@ static int io_remove_personalities(int id, void *p, void *data)
>          return 0;
>  }
>
> +static void io_cancel_defer_files(struct io_ring_ctx *ctx,
> +                                  struct task_struct *task,
> +                                  struct files_struct *files);
> +
>  static void io_ring_exit_work(struct work_struct *work)
>  {
>          struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
> @@ -8627,6 +8631,8 @@ static void io_ring_exit_work(struct work_struct *work)
>           */
>          do {
>                  io_iopoll_try_reap_events(ctx);
> +                io_poll_remove_all(ctx, NULL, NULL);
> +                io_kill_timeouts(ctx, NULL, NULL);
>          } while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
>          io_ring_ctx_free(ctx);
>  }
> @@ -8641,6 +8647,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
>                  io_cqring_overflow_flush(ctx, true, NULL, NULL);
>          mutex_unlock(&ctx->uring_lock);
>
> +        io_cancel_defer_files(ctx, NULL, NULL);
>          io_kill_timeouts(ctx, NULL, NULL);
>          io_poll_remove_all(ctx, NULL, NULL);
>
> --
> Pavel Begunkov
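
(As mentioned above, here is roughly the kind of test I ran: queue an
eventfd read with IOSQE_ASYNC so the request is punted to an io-wq
worker, then exit with it still pending. This is a minimal sketch using
liburing, not the actual Netty test; the queue depth and the lack of
cleanup are arbitrary. Build with: gcc test.c -o test -luring)

#include <liburing.h>
#include <sys/eventfd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        uint64_t val;
        int efd;

        if (io_uring_queue_init(8, &ring, 0) < 0) {
                fprintf(stderr, "io_uring_queue_init failed\n");
                return 1;
        }
        efd = eventfd(0, 0);    /* counter is 0, so the read blocks */
        if (efd < 0) {
                perror("eventfd");
                return 1;
        }
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, efd, &val, sizeof(val), 0);
        sqe->flags |= IOSQE_ASYNC;      /* force punt to an io-wq worker */
        io_uring_submit(&ring);

        /* exit without completing or cancelling the pending read */
        return 0;
}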
--
Josef
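
P.S. On the sharing question further up: we don't do this in Netty, but
for completeness, the "explicit" case Pavel describes means handing the
ring fd to another process over a UNIX domain socket with SCM_RIGHTS.
A minimal sketch; send_ring_fd is a made-up name and error handling is
omitted:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass ring_fd to the peer of the connected UNIX socket 'sock'. */
static int send_ring_fd(int sock, int ring_fd)
{
        char dummy = 'F';       /* must send at least one byte of data */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
                struct cmsghdr align;   /* ensures correct alignment */
                char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = { 0 };
        struct cmsghdr *cmsg;

        memset(&u, 0, sizeof(u));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;   /* the kernel installs a dup of
                                           the fd in the receiver */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &ring_fd, sizeof(int));

        return sendmsg(sock, &msg, 0);
}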
Thread overview: 52+ messages
2020-12-17 8:19 "Cannot allocate memory" on ring creation (not RLIMIT_MEMLOCK) Dmitry Kadashev
2020-12-17 8:26 ` Norman Maurer
2020-12-17 8:36 ` Dmitry Kadashev
2020-12-17 8:40 ` Dmitry Kadashev
2020-12-17 10:38 ` Josef
2020-12-17 11:10 ` Dmitry Kadashev
2020-12-17 13:43 ` Victor Stewart
2020-12-18 9:20 ` Dmitry Kadashev
2020-12-18 17:22 ` Jens Axboe
2020-12-18 15:26 ` Jens Axboe
2020-12-18 17:21 ` Josef
2020-12-18 17:23 ` Jens Axboe
2020-12-19 2:49 ` Josef
2020-12-19 16:13 ` Jens Axboe
2020-12-19 16:29 ` Jens Axboe
2020-12-19 17:11 ` Jens Axboe
2020-12-19 17:34 ` Norman Maurer
2020-12-19 17:38 ` Jens Axboe
2020-12-19 20:51 ` Josef
2020-12-19 21:54 ` Jens Axboe
2020-12-19 23:13 ` Jens Axboe
2020-12-19 23:42 ` Josef
2020-12-19 23:42 ` Pavel Begunkov
2020-12-20 0:25 ` Jens Axboe
2020-12-20 0:55 ` Pavel Begunkov
2020-12-21 10:35 ` Dmitry Kadashev
2020-12-21 10:49 ` Dmitry Kadashev
2020-12-21 11:00 ` Dmitry Kadashev
2020-12-21 15:36 ` Pavel Begunkov
2020-12-22 3:35 ` Pavel Begunkov
2020-12-22 4:07 ` Pavel Begunkov
2020-12-22 11:04 ` Dmitry Kadashev
2020-12-22 11:06 ` Dmitry Kadashev
2020-12-22 13:13 ` Dmitry Kadashev
2020-12-22 16:33 ` Pavel Begunkov
2020-12-23 8:39 ` Dmitry Kadashev
2020-12-23 9:38 ` Dmitry Kadashev
2020-12-23 11:48 ` Dmitry Kadashev
2020-12-23 12:27 ` Pavel Begunkov
2020-12-20 1:57 ` Pavel Begunkov
2020-12-20 7:13 ` Josef
2020-12-20 13:00 ` Pavel Begunkov
2020-12-20 14:19 ` Pavel Begunkov
2020-12-20 15:56 ` Josef [this message]
2020-12-20 15:58 ` Pavel Begunkov
2020-12-20 16:14 ` Jens Axboe
2020-12-20 16:59 ` Josef
2020-12-20 18:23 ` Josef
2020-12-20 18:41 ` Pavel Begunkov
2020-12-21 8:22 ` Josef
2020-12-21 15:30 ` Pavel Begunkov
2020-12-21 10:31 ` Dmitry Kadashev