Qun*_*cut 5 c++ microbenchmark c++17
I am trying to benchmark a relatively small part of a larger set of algorithms implemented in C++. Simplified, one could say that each algorithm is implemented via two functions (let's call them foo() and bar()) that can be called repeatedly and in arbitrary order, and that may modify some internal data structures of the algorithm. Among other things, I want to compare the performance of the algorithms by measuring the total time spent in foo() and in bar() separately.
Now I have two algorithms: algorithm A does some work in foo() but very little in bar(), while algorithm B does absolutely nothing in foo() (foo() is actually an empty function here) but a lot of work in bar(). The unexpected thing I observe is that in many scenarios the total time algorithm B spends in foo() is larger than the total time algorithm A spends in foo(). After some debugging I found that for algorithm B, the first call to foo() after a call to bar() takes a lot of time, while subsequent calls to foo() tend to be faster.
To pin down this effect, I came up with the following simplification of algorithm B, consisting of an empty function (corresponding to foo()) and two functions in which I try to simulate the work (corresponding to bar(); the "real" bar() basically also just allocates space and iterates over a data structure):
b.h:
#ifndef B_H
#define B_H
void foo_emptyFunction(unsigned long long u); // foo()
void bar_expensiveFunction1(); // bar() - version 1
void bar_expensiveFunction2(); // bar() - version 2
#endif
b.cpp
#include "b.h"
#include <iostream>
#include <vector>
#include <math.h>
void foo_emptyFunction(unsigned long long )
{
// nothing
}
void bar_expensiveFunction1() {
std::vector<unsigned long> vec;
for (auto i = 0UL; i < 1000000UL; i++) {
vec.push_back(i);
}
std::cout << "Created and filled a vector with " << vec.size() << " elements." << std::endl;
}
void bar_expensiveFunction2() {
std::vector<unsigned long> vec;
for (auto i = 1UL; i <= 1000000UL; i++) {
vec.push_back(i);
}
auto sum = 0ULL;
auto sumSqrts = 0.0;
for (auto i : vec) {
sum += i;
sumSqrts += sqrt(i);
}
std::cout << "Sum of elements from " << vec.front()
<< " to " << vec.back() << " is " << sum
<< ", the sum of their square roots is " << sumSqrts << "." << std::endl;
}
I then tried to measure the time it takes to call the empty function several times after one of the "expensive" ones:
main.cpp:
#include "b.h"
#include <chrono>
#include <thread>
#include <iostream>
#include <math.h>
typedef std::chrono::high_resolution_clock sclock;
typedef unsigned long long time_interval;
typedef std::chrono::duration<time_interval, std::chrono::nanoseconds::period> time_as;
void timeIt() {
auto start = sclock::now();
auto end = start;
for (auto i = 0U; i < 10U; i++) {
start = sclock::now();
asm volatile("" ::: "memory");
foo_emptyFunction(1000ULL);
asm volatile("" ::: "memory");
end = sclock::now();
std::cout << "Call #" << i << " to empty function took " << std::chrono::duration_cast<time_as>(end - start).count() << "ns." << std::endl;
}
}
int main()
{
timeIt();
bar_expensiveFunction1();
timeIt();
std::this_thread::sleep_for(std::chrono::milliseconds(100));
std::cout << "Slept for 100ms." << std::endl;
timeIt();
bar_expensiveFunction2();
timeIt();
bar_expensiveFunction1();
timeIt();
return 0;
}
If I compile the code (g++ -o test main.cpp b.cpp, or also g++ -O3 -o test main.cpp b.cpp) and run it, I get output similar to this:
./test
Call #0 to empty function took 79ns.
Call #1 to empty function took 57ns.
Call #2 to empty function took 55ns.
Call #3 to empty function took 31ns.
Call #4 to empty function took 35ns.
Call #5 to empty function took 26ns.
Call #6 to empty function took 26ns.
Call #7 to empty function took 36ns.
Call #8 to empty function took 24ns.
Call #9 to empty function took 26ns.
Created and filled a vector with 1000000 elements.
Call #0 to empty function took 84ns.
Call #1 to empty function took 27ns.
Call #2 to empty function took 28ns.
Call #3 to empty function took 27ns.
Call #4 to empty function took 29ns.
Call #5 to empty function took 27ns.
Call #6 to empty function took 29ns.
Call #7 to empty function took 33ns.
Call #8 to empty function took 28ns.
Call #9 to empty function took 23ns.
Slept for 100ms.
Call #0 to empty function took 238ns.
Call #1 to empty function took 106ns.
Call #2 to empty function took 102ns.
Call #3 to empty function took 118ns.
Call #4 to empty function took 199ns.
Call #5 to empty function took 92ns.
Call #6 to empty function took 216ns.
Call #7 to empty function took 118ns.
Call #8 to empty function took 113ns.
Call #9 to empty function took 107ns.
Sum of elements from 1 to 1000000 is 500000500000, the sum of their square roots is 6.66667e+08.
Call #0 to empty function took 126ns.
Call #1 to empty function took 35ns.
Call #2 to empty function took 31ns.
Call #3 to empty function took 30ns.
Call #4 to empty function took 38ns.
Call #5 to empty function took 54ns.
Call #6 to empty function took 29ns.
Call #7 to empty function took 35ns.
Call #8 to empty function took 30ns.
Call #9 to empty function took 29ns.
Created and filled a vector with 1000000 elements.
Call #0 to empty function took 112ns.
Call #1 to empty function took 23ns.
Call #2 to empty function took 23ns.
Call #3 to empty function took 23ns.
Call #4 to empty function took 23ns.
Call #5 to empty function took 22ns.
Call #6 to empty function took 23ns.
Call #7 to empty function took 23ns.
Call #8 to empty function took 24ns.
Call #9 to empty function took 23ns.
I suspect that the differences in running time, in particular the spike on the first call, may be due to some kind of caching effect, but I would really like to know what exactly is going on here.
EDIT: The effect I observe here is very similar to the one in the real code. There is almost always a huge spike on the first call, and from the third call on it is fairly stable. The effect is even more pronounced in the real code, I suspect because B::bar() does more work in reality (it traverses a graph rather than just a list of integers). Unfortunately the real code is part of a pretty large project, so I cannot post it here. The code above is a rather heavy simplification of the original, but it seems to show the same effect. In reality, both foo() and bar() are virtual (I know this comes with a timing penalty) and live in different compilation units, so the compiler cannot optimize the function calls away. I also checked the assembly of the real program. I am aware that I inevitably measure the time it takes to call now() as well, but I use the same benchmarking code for algorithm A (which at least does something in its implementation of foo()), and the measured total time for A::foo() is smaller... The optimization level does not seem to have a (big) influence on this effect, and I get the same behavior with clang.
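To separate the timer overhead from the actual spike, I can at least estimate how expensive a now() call itself is. A minimal, self-contained sketch (not part of the project code) that reports the smallest back-to-back now() delta could look like this:
#include <chrono>
#include <iostream>
#include <limits>

// Standalone sketch: estimate the overhead of a single now()/now() pair by
// taking the smallest delta observed over many back-to-back calls.
int main()
{
    using clock = std::chrono::high_resolution_clock;
    auto minDelta = std::numeric_limits<long long>::max();
    for (int i = 0; i < 100000; i++) {
        auto a = clock::now();
        auto b = clock::now(); // nothing in between: we only time the clock itself
        auto d = std::chrono::duration_cast<std::chrono::nanoseconds>(b - a).count();
        if (d < minDelta) {
            minDelta = d;
        }
    }
    std::cout << "Minimum back-to-back now() delta: " << minDelta << "ns" << std::endl;
    return 0;
}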
EDIT 2: I also ran the algorithm benchmarks on a dedicated machine (Linux, only system processes running, CPU frequency governor set to performance).
Also, I am aware that normally, when doing this kind of micro-benchmarking, you do things like cache warming and repeat the piece of code you want to benchmark many times. Unfortunately, each call to foo() or bar() may modify the internal data structures, so I cannot simply repeat them. I would appreciate any suggestions for improvement.
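For instance, one thing I could still do without repeating any call is to accumulate the per-call durations into per-function totals, which matches what I actually want to compare (total time in foo() vs. bar()) and makes single-call spikes less dominant. A rough sketch (not the real code, which uses virtual functions across compilation units):
#include "b.h"

#include <chrono>
#include <iostream>

// Rough sketch: accumulate total time and call count per function instead of
// looking at individual samples. The loop below only stands in for whatever
// call sequence the algorithm actually performs.
struct Accum {
    unsigned long long totalNs = 0;
    unsigned long long calls = 0;
};

template <typename F>
void timeInto(Accum& acc, F&& f)
{
    auto start = std::chrono::high_resolution_clock::now();
    f();
    auto end = std::chrono::high_resolution_clock::now();
    acc.totalNs += std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    acc.calls++;
}

int main()
{
    Accum fooTime, barTime;
    for (int i = 0; i < 10; i++) {
        timeInto(barTime, [] { bar_expensiveFunction1(); });
        timeInto(fooTime, [] { foo_emptyFunction(1000ULL); });
    }
    std::cout << "foo(): " << fooTime.totalNs << "ns total over " << fooTime.calls << " calls" << std::endl;
    std::cout << "bar(): " << barTime.totalNs << "ns total over " << barTime.calls << " calls" << std::endl;
    return 0;
}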
Thanks!
I noticed the benchmark performing worse after the sleep. This is most likely because the CPU drops into a lower-frequency/power-saving state.
Pin the CPU frequency to the maximum before benchmarking, so that the CPU does not adjust it during the benchmark runs.
On Linux:
$ sudo cpupower --cpu all frequency-set --related --governor performance
On Windows, set the power plan to "High performance".
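If you cannot change the power settings on a particular machine, a crude alternative (just a sketch, assuming nothing about your setup) is to busy-spin for a short while right before each timed section, so the core has already ramped up to a higher frequency when the measurement starts:
#include <chrono>

// Crude warm-up: busy-spin for ~50ms so the core leaves its low-power state
// before the measured region. A workaround, not a substitute for fixing the
// frequency governor.
inline void warmUpCpu(std::chrono::milliseconds duration = std::chrono::milliseconds(50))
{
    volatile unsigned long long sink = 0; // volatile so the loop is not optimized away
    auto start = std::chrono::steady_clock::now();
    while (std::chrono::steady_clock::now() - start < duration) {
        sink = sink + 1;
    }
}
Calling warmUpCpu() right before timeIt() (in particular before the run that follows the 100ms sleep) should flatten the difference if frequency scaling is indeed the cause.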