public inbox for [email protected]
From: Stefan Metzmacher <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Cc: [email protected], [email protected],
	[email protected], [email protected]
Subject: Re: [PATCH 0/6] Allow signals for IO threads
Date: Fri, 26 Mar 2021 16:04:06 +0100	[thread overview]
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>


Am 26.03.21 um 15:53 schrieb Jens Axboe:
> On 3/26/21 8:45 AM, Stefan Metzmacher wrote:
>> Am 26.03.21 um 15:43 schrieb Stefan Metzmacher:
>>> Am 26.03.21 um 15:38 schrieb Jens Axboe:
>>>> On 3/26/21 7:59 AM, Jens Axboe wrote:
>>>>> On 3/26/21 7:54 AM, Jens Axboe wrote:
>>>>>>> The KILL after STOP deadlock still exists.
>>>>>>
>>>>>> In which tree? Sounds like you're still on the old one with that
>>>>>> incremental you sent, which wasn't complete.
>>>>>>
>>>>>>> Does io_wq_manager() exit without cleaning up on SIGKILL?
>>>>>>
>>>>>> No, it should clean up in all cases. I'll try your stop + kill; I just
>>>>>> tested both of them separately and didn't observe anything. I also ran
>>>>>> your io_uring-cp example (and found a bug in the example, fixed and
>>>>>> pushed), fwiw.
>>>>>
>>>>> I can reproduce this one! I'll take a closer look.
>>>>
>>>> OK, that one is actually pretty straightforward - we rely on cleaning
>>>> up on exit, but for fatal cases, get_signal() will call do_exit() for us
>>>> and never return. So we might need a special case in there to deal with
>>>> that, or some other way of ensuring that fatal signal gets processed
>>>> correctly for IO threads.
>>>
>>> And an if (fatal_signal_pending(current)) check doesn't prevent get_signal() from being called?
>>
>> Ah, we're still in the first get_signal() from SIGSTOP, correct?
> 
> Yes exactly, we're waiting in there, stopped. So we either need to
> check something like:
> 
> relock:
> +	if (current->flags & PF_IO_WORKER && fatal_signal_pending(current))
> +		return false;
> 
> to catch it up front in the relock case, or add:
> 
> 	fatal:
> +		if (current->flags & PF_IO_WORKER)
> +			return false;
> 
> to catch it in the fatal section.
> 
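
For context, here is a diff-style sketch of where those two hunks would
sit in kernel/signal.c:get_signal(); the context lines are approximate
and this is only an illustration of the two variants above, not a
committed fix:

--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ bool get_signal(struct ksignal *ksig)
 relock:
+	/*
+	 * Variant 1: an IO worker that was stopped (SIGSTOP) and then got
+	 * SIGKILL must return to its caller so io-wq can run its cleanup,
+	 * instead of get_signal() calling do_exit() and never returning.
+	 */
+	if ((current->flags & PF_IO_WORKER) && fatal_signal_pending(current))
+		return false;
 	spin_lock_irq(&sighand->siglock);
@@
 	fatal:
+		/*
+		 * Variant 2: same idea, but placed where fatal signals are
+		 * handled; PF_IO_WORKER threads do their own exit cleanup.
+		 */
+		if (current->flags & PF_IO_WORKER)
+			return false;
 		spin_unlock_irq(&sighand->siglock);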

Or it could be handled with something like io_uring_files_cancel().

Maybe replace current->pf_io_worker with a generic current->io_thread
structure, which has exit hooks as well as
io_wq_worker_sleeping() and io_wq_worker_running().

Maybe create_io_thread() could then take such a structure
as an argument instead of a single function pointer.

struct io_thread_description {
	const char *name;
	int (*thread_fn)(struct io_thread_description *);
	void (*sleeping_fn)(struct io_thread_description *);
	void (*running_fn)(struct io_thread_description *);
	void (*exit_fn)(struct io_thread_description *);
};

And then:
struct io_wq_manager {
	struct io_thread_description description;
	... manager specific stuff...
};
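
To make that concrete, a rough sketch of the proposed calling convention
(all names below are hypothetical and only illustrate the idea; today's
create_io_thread() takes a bare function pointer plus an argument):

struct task_struct *create_io_thread(struct io_thread_description *desc,
				     int node);

/* hypothetical manager thread function, recovering the outer struct: */
static int io_wq_manager_fn(struct io_thread_description *desc)
{
	struct io_wq_manager *mgr =
		container_of(desc, struct io_wq_manager, description);

	/* ... manager loop ... */
	return 0;
}

static struct io_wq_manager mgr = {
	.description = {
		.name      = "io_wq_manager",
		.thread_fn = io_wq_manager_fn,
		/* sleeping_fn/running_fn/exit_fn filled in likewise */
	},
};

/* the sched-in/sched-out and exit paths would then dispatch through the
 * hooks, e.g. desc->exit_fn(desc), instead of relying on do_exit(): */
tsk = create_io_thread(&mgr.description, NUMA_NO_NODE);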

metze


Thread overview: 37+ messages
2021-03-26  0:39 [PATCH 0/6] Allow signals for IO threads Jens Axboe
2021-03-26  0:39 ` [PATCH 1/8] io_uring: handle signals for IO threads like a normal thread Jens Axboe
2021-03-26  0:39 ` [PATCH 2/8] kernel: unmask SIGSTOP for IO threads Jens Axboe
2021-03-26 13:48   ` Oleg Nesterov
2021-03-26 15:01     ` Jens Axboe
2021-03-26 15:23       ` Stefan Metzmacher
2021-03-26 15:29         ` Jens Axboe
2021-03-26 18:01           ` Stefan Metzmacher
2021-03-26 18:59             ` Jens Axboe
2021-04-01 14:53             ` Stefan Metzmacher
2021-03-26  0:39 ` [PATCH 3/8] Revert "signal: don't allow sending any signals to PF_IO_WORKER threads" Jens Axboe
2021-03-26  0:39 ` [PATCH 4/8] Revert "kernel: treat PF_IO_WORKER like PF_KTHREAD for ptrace/signals" Jens Axboe
2021-03-26  0:39 ` [PATCH 5/8] Revert "kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing" Jens Axboe
2021-03-26  0:39 ` [PATCH 6/8] Revert "signal: don't allow STOP on PF_IO_WORKER threads" Jens Axboe
     [not found] ` <[email protected]>
2021-03-26 12:56   ` [PATCH 0/6] Allow signals for IO threads Jens Axboe
2021-03-26 13:31     ` Stefan Metzmacher
2021-03-26 13:54       ` Jens Axboe
2021-03-26 13:59         ` Jens Axboe
2021-03-26 14:38           ` Jens Axboe
2021-03-26 14:43             ` Stefan Metzmacher
2021-03-26 14:45               ` Stefan Metzmacher
2021-03-26 14:53                 ` Jens Axboe
2021-03-26 14:55                   ` Jens Axboe
2021-03-26 15:08                     ` Stefan Metzmacher
2021-03-26 15:10                       ` Jens Axboe
2021-03-26 15:11                         ` Stefan Metzmacher
2021-03-26 15:12                           ` Jens Axboe
2021-03-26 15:04                   ` Stefan Metzmacher [this message]
2021-03-26 15:09                     ` Jens Axboe
2021-03-26 14:50               ` Jens Axboe
2021-03-27  1:46       ` Stefan Metzmacher
2021-03-27 16:41         ` Jens Axboe
2021-04-01 14:58         ` Stefan Metzmacher
2021-04-01 15:39           ` Linus Torvalds
2021-04-01 16:00             ` Stefan Metzmacher
2021-04-01 16:24               ` Linus Torvalds
2021-04-03  0:48                 ` Stefan Metzmacher
