From: Jens Axboe <[email protected]>
To: Thomas Gleixner <[email protected]>,
	Christoph Hellwig <[email protected]>, Ming Lei <[email protected]>
Cc: [email protected], [email protected],
	John Garry <[email protected]>,
	Bart Van Assche <[email protected]>,
	Hannes Reinecke <[email protected]>,
	[email protected], Peter Zijlstra <[email protected]>
Subject: Re: io_uring vs CPU hotplug, was Re: [PATCH 5/9] blk-mq: don't set data->ctx and data->hctx in blk_mq_alloc_request_hctx
Date: Wed, 20 May 2020 16:40:50 -0600	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

On 5/20/20 4:14 PM, Thomas Gleixner wrote:
> Jens Axboe <[email protected]> writes:
> 
>> On 5/20/20 1:41 PM, Thomas Gleixner wrote:
>>> Jens Axboe <[email protected]> writes:
>>>> On 5/20/20 8:45 AM, Jens Axboe wrote:
>>>>> It just uses kthread_create_on_cpu(), nothing home grown. Pretty sure
>>>>> they just break affinity if that CPU goes offline.
>>>>
>>>> Just checked, and it works fine for me. If I create an SQPOLL ring with
>>>> SQ_AFF set and bound to CPU 3, if CPU 3 goes offline, then the kthread
>>>> just appears unbound but runs just fine. When CPU 3 comes online again,
>>>> the mask appears correct.
>>>
>>> When exactly during the unplug operation is it unbound?
>>
>> When the CPU has been fully offlined. I check the affinity mask, it
>> reports 0. But it's still being scheduled, and it's processing work.
>> Here's an example, PID 420 is the thread in question:
>>
>> [root@archlinux cpu3]# taskset -p 420
>> pid 420's current affinity mask: 8
>> [root@archlinux cpu3]# echo 0 > online 
>> [root@archlinux cpu3]# taskset -p 420
>> pid 420's current affinity mask: 0
>> [root@archlinux cpu3]# echo 1 > online 
>> [root@archlinux cpu3]# taskset -p 420
>> pid 420's current affinity mask: 8
>>
>> So as far as I can tell, it's working fine for me with the goals
>> I have for that kthread.
> 
> Works for me is not really useful information and does not answer my
> question:
> 
>>> When exactly during the unplug operation is it unbound?

I agree, and that question is relevant to the block side of things. What
Christoph asked in this particular sub-thread was specifically about the
io_uring sqpoll thread, and that's what I was addressing. For that, it
doesn't matter _when_ it becomes unbound. All that matters is that it
breaks affinity and keeps working.
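
For reference, a minimal sketch of the SQ_AFF setup under discussion,
using liburing; the CPU number just mirrors the CPU 3 example above, and
the queue depth and idle timeout are arbitrary:

/* Create an SQPOLL ring whose kernel-side submission thread is bound
 * to CPU 3 via IORING_SETUP_SQ_AFF.  Error handling trimmed; SQPOLL
 * typically needs elevated privileges. */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_params p = { 0 };

	p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_SQ_AFF;
	p.sq_thread_cpu = 3;		/* pin the sqpoll kthread to CPU 3 */
	p.sq_thread_idle = 1000;	/* ms before the thread goes idle */

	int ret = io_uring_queue_init_params(8, &ring, &p);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/* submit work as usual; the sqpoll thread picks up the SQEs */
	io_uring_queue_exit(&ring);
	return 0;
}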

> The problem Ming and Christoph are trying to solve requires that the
> thread is migrated _before_ the hardware queue is shut down and
> drained. That's why I asked for the exact point where this happens.

Right, and I haven't looked into that at all, so don't know the answer
to that question.

> When the CPU is finally offlined, i.e. when the CPU has cleared the
> online bit in the online mask, it is definitely too late, simply because
> the thread still runs on that outgoing CPU _after_ the hardware queue is
> shut down and drained.
> 
> This needs more thought and changes to sched and kthread so that the
> kthread breaks affinity once the CPU goes offline. Too tired to figure
> that out right now.

Yes, to provide the needed guarantees for the block ctx and hctx
mappings we'll need to know exactly at what stage it ceases to run on
that CPU.
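
To make that ordering question concrete, here is an illustrative sketch
(not a proposed patch) of a dynamic CPU hotplug state: its teardown
callback runs while the CPU is being taken down, i.e. before it is fully
offline.  The function and state names are made up for the example.

#include <linux/cpu.h>
#include <linux/cpuhotplug.h>
#include <linux/init.h>

/* Hypothetical callback, invoked on the outgoing CPU as part of the
 * offline sequence.  Where such a point would have to sit relative to
 * hctx shutdown and drain is exactly the open question above. */
static int example_cpu_offline(unsigned int cpu)
{
	/* migrate or quiesce per-CPU work here */
	return 0;
}

static int __init example_hotplug_init(void)
{
	int ret;

	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "example:online",
				NULL, example_cpu_offline);
	return ret < 0 ? ret : 0;
}
late_initcall(example_hotplug_init);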

-- 
Jens Axboe

