What is the precision of the bash builtin time command?

Ale*_*ini 4 linux bash performance time

I have a script that measures a program's execution time using the bash builtin command time.

I am trying to understand the precision of this command: as far as I know it returns results with millisecond precision, yet the getrusage() function it relies on returns values in microseconds. But according to this paper, the real precision is only 10 ms, because getrusage samples the time based on ticks (= 100 Hz). The paper is really old, though (it mentions Linux 2.2.14 running on a Pentium at 166 MHz with 96 MB of RAM).

Does time still use getrusage() with 100 Hz ticks, or is it more precise on modern systems?

The test machine is running Linux 2.6.32.

EDIT: here is a slightly modified version of muru's code (which should also compile on older versions of GCC): changing the value of the variable 'v' also changes the delay between the measurements, in order to spot the minimum granularity. A value around 500,000 should trigger a 1 ms change on a relatively recent CPU (first-generation i5/i7 @ ~2.5 GHz).

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

void dosomething(){
    long v = 1000000;  /* adjust 'v' to change the delay between measurements */
    while (v > 0)
        v--;
}

int main()
{
    struct rusage r1, r2;
    long t1, t2, min, max;
    int i;

    printf("t1\tt2\tdiff\n");

    for (i = 0; i<5; i++){
        getrusage(RUSAGE_SELF, &r1);
        dosomething();
        getrusage(RUSAGE_SELF, &r2);

        /* total (user + system) CPU time, in microseconds */
        t1 = r1.ru_stime.tv_usec + r1.ru_stime.tv_sec*1000000 + r1.ru_utime.tv_usec + r1.ru_utime.tv_sec*1000000;
        t2 = r2.ru_stime.tv_usec + r2.ru_stime.tv_sec*1000000 + r2.ru_utime.tv_usec + r2.ru_utime.tv_sec*1000000;

        printf("%ld\t%ld\t%ld\n",t1,t2,t2-t1);

        if (i == 0 || t2-t1 < min)
            min = t2-t1;

        if (i == 0 || t2-t1 > max)
            max = t2-t1;

        dosomething();  /* burn extra CPU time before the next measurement */

    }

    printf("Min = %ldus Max = %ldus\n",min,max);

    return 0;
}

However, the precision seems to be bound to the Linux version: with Linux 3 and later the precision is on the order of microseconds, while on Linux 2.6.32 it may be around 1 ms, probably also depending on the specific distribution. I suppose this difference is related to the use of high-resolution timers (HRT) instead of the tick on recent Linux versions.
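For reference, a quick way to see what a given kernel advertises is to query the nominal resolution of the POSIX per-process CPU-time clock and the USER_HZ tick unit. This is only a rough indicator (the nominal clock resolution is not necessarily the granularity getrusage actually delivers), but a minimal sketch would be:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;

    /* Nominal resolution of the per-process CPU-time clock;
     * on kernels with high-resolution timers this is typically 1 ns. */
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) == 0)
        printf("CLOCK_PROCESS_CPUTIME_ID resolution: %ld ns\n",
               res.tv_sec * 1000000000L + res.tv_nsec);

    /* _SC_CLK_TCK is USER_HZ, the unit in which CPU time is reported
     * to userland (usually 100); the kernel's internal scheduler tick
     * (CONFIG_HZ) may differ from it. */
    printf("_SC_CLK_TCK: %ld ticks/s\n", sysconf(_SC_CLK_TCK));

    return 0;
}

On older glibc versions this may need to be linked with -lrt.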

In any case, the maximum precision of time's output is 1 ms on all machines, recent or not (bash's TIMEFORMAT caps the number of fractional digits at 3, so the reported times are never finer than a millisecond).

mur*_*uru 5

The bash builtin time still uses getrusage(2). On an Ubuntu 14.04 system:

$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ strace -o log bash -c 'time sleep 1'

real    0m1.018s
user    0m0.000s
sys 0m0.001s
$ tail log
getrusage(RUSAGE_SELF, {ru_utime={0, 0}, ru_stime={0, 3242}, ...}) = 0
getrusage(RUSAGE_CHILDREN, {ru_utime={0, 0}, ru_stime={0, 530}, ...}) = 0
write(2, "\n", 1)                       = 1
write(2, "real\t0m1.018s\n", 14)        = 14
write(2, "user\t0m0.000s\n", 14)        = 14
write(2, "sys\t0m0.001s\n", 13)         = 13
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
exit_group(0)                           = ?
+++ exited with 0 +++

As the strace output shows, it calls getrusage.
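The values it reports for an external command come from RUSAGE_CHILDREN, which accounts for children that have been waited for. A simplified sketch of that pattern (not bash's actual implementation, which snapshots both RUSAGE_SELF and RUSAGE_CHILDREN before and after the command):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: run the command being timed. */
        execlp("sleep", "sleep", "1", (char *) NULL);
        _exit(127);
    }

    /* Parent: reap the child, then ask for its resource usage. */
    waitpid(pid, NULL, 0);

    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);
    printf("user %ld.%06lds  sys %ld.%06lds\n",
           (long) ru.ru_utime.tv_sec, (long) ru.ru_utime.tv_usec,
           (long) ru.ru_stime.tv_sec, (long) ru.ru_stime.tv_usec);
    return 0;
}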

As for precision: the rusage struct that getrusage fills in contains timeval objects, and timeval has microsecond precision. From the getrusage man page:

ru_utime
      This is the total amount of time spent executing in user mode,
      expressed in a timeval structure (seconds plus microseconds).

ru_stime
      This is the total amount of time spent executing in kernel
      mode, expressed in a timeval structure (seconds plus
      microseconds).

So I'd say it does better than 10 ms. Take the following example file:

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage now, then;
    getrusage(RUSAGE_SELF, &then);
    getrusage(RUSAGE_SELF, &now);
    printf("%ld %ld\n",
        then.ru_stime.tv_usec + then.ru_stime.tv_sec*1000000 + then.ru_utime.tv_usec + then.ru_utime.tv_sec*1000000,
        now.ru_stime.tv_usec + now.ru_stime.tv_sec*1000000 + now.ru_utime.tv_usec + now.ru_utime.tv_sec*1000000);
}

Now:

$ make test
cc     test.c   -o test
$ for ((i=0; i < 5; i++)); do ./test; done
447 448
356 356
348 348
347 347
349 350

The differences reported between consecutive calls to getrusage are 1 μs and 0 (the minimum). Since it does show a 1 μs gap, the tick must be at most 1 μs.

If it had a 10 ms tick, the differences would either be zero or at least 10000 (10 ms expressed in microseconds).
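To push this a bit further, you can busy-loop on getrusage and record how much the reported CPU time jumps each time it changes; the step size is the effective accounting granularity. With 100 Hz tick-based accounting the steps would be multiples of 10,000 μs, whereas with high-resolution accounting they are only a few microseconds. A minimal sketch (the helper name cpu_usec is mine):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

/* Sum user + system CPU time of a struct rusage, in microseconds. */
static long cpu_usec(const struct rusage *r)
{
    return r->ru_utime.tv_sec * 1000000L + r->ru_utime.tv_usec +
           r->ru_stime.tv_sec * 1000000L + r->ru_stime.tv_usec;
}

int main(void)
{
    struct rusage r;
    long prev, cur, steps = 0;

    getrusage(RUSAGE_SELF, &r);
    prev = cpu_usec(&r);

    /* Busy-loop and record the first few increments of the reported
     * CPU time; each printed step is the observed granularity. */
    while (steps < 10) {
        getrusage(RUSAGE_SELF, &r);
        cur = cpu_usec(&r);
        if (cur != prev) {
            printf("step: %ld us\n", cur - prev);
            prev = cur;
            steps++;
        }
    }
    return 0;
}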