From: hexue <[email protected]>
To: [email protected]
Cc: [email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected],
[email protected], [email protected]
Subject: Re: Re: [PATCH v2] io_uring: releasing CPU resources when polling
Date: Tue, 23 Apr 2024 11:26:45 +0800
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
On 4/22/24 18:11, Jens Axboe wrote:
>On 4/18/24 3:31 AM, hexue wrote:
>> This patch is intended to release the CPU resources that io_uring
>> consumes in polling mode. When an IO is issued, the program
>> immediately polls to check for completion, which wastes CPU cycles
>> while the IO command is still being executed by the disk.
>>
>> I add a hybrid polling feature to io_uring, which enables polling to
>> release a portion of CPU resources without affecting the block layer.
>>
>> - Record the running time and context-switch time of each IO, and
>> use these times to decide whether the process should schedule out
>> instead of busy polling.
>>
>> - Adapt to different devices. Because the timing is recorded at run
>> time and each device processes IO at a different speed, the CPU
>> savings will vary between devices.
>>
>> - Provide an interface (ctx->flag) that lets the application choose
>> whether or not to use this feature.
>>
>> The CPU savings of the patch under peak workload were tested as
>> follows: with the original polling, CPU utilization is 100% on every
>> CPU; after the optimization it drops considerably (per-CPU figures):
>>
>> read(128k, QD64, 1Job) 37% write(128k, QD64, 1Job) 40%
>> randread(4k, QD64, 16Job) 52% randwrite(4k, QD64, 16Job) 12%
>>
>> Compared to the original polling, the performance reduction under
>> peak workload is within 1%:
>>
>> read 0.29% write 0.51% randread 0.09% randwrite 0%
>
>As mentioned, this is like a reworked version of the old hybrid polling
>we had. The feature itself may make sense, but there's a slew of things
>in this patch that aren't really acceptable. More below.
Thank you very much for your patience in reviewing and correcting; I will
address those points as soon as possible and submit a v3 patch later.
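
For reference, below is a minimal, self-contained sketch of the
sleep-then-poll idea described in the quoted summary above. This is not
the patch code; all names here (hybrid_poll_state, update_avg,
should_sleep_before_poll) are hypothetical and only illustrate how the
recorded IO completion time and context-switch time could feed a
decision about whether to schedule out before busy polling.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-context bookkeeping, not the actual patch structures. */
struct hybrid_poll_state {
	uint64_t avg_io_ns;	/* running estimate of IO completion time */
	uint64_t avg_switch_ns;	/* measured cost of a context switch */
};

/* Exponentially weighted update of a running time estimate. */
static void update_avg(uint64_t *avg, uint64_t sample_ns)
{
	*avg = (*avg * 7 + sample_ns) / 8;
}

/*
 * Only schedule out before polling when the expected IO time comfortably
 * exceeds the cost of sleeping and waking up again (roughly two context
 * switches); otherwise busy polling remains the cheaper option.
 */
static bool should_sleep_before_poll(const struct hybrid_poll_state *st)
{
	return st->avg_io_ns > 2 * st->avg_switch_ns;
}

int main(void)
{
	struct hybrid_poll_state st = {
		.avg_io_ns = 80000,	/* 80us estimate so far */
		.avg_switch_ns = 5000,	/* 5us per context switch */
	};

	/* Feed in a new completion-time sample, e.g. 100us. */
	update_avg(&st.avg_io_ns, 100000);

	printf("sleep before polling: %s\n",
	       should_sleep_before_poll(&st) ? "yes" : "no");
	return 0;
}

In the actual patch the equivalent decision would live in the kernel's
iopoll path rather than in userspace; the sketch only shows the
timing-based heuristic.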