
linux – What is the precision of the bash builtin time command?

I have a script that measures the execution time of a program using the bash builtin command time.

I am trying to understand the precision of this command: as far as I know it reports with millisecond precision, yet it uses the getrusage() function, which returns a value in microseconds. But reading this paper, the real precision is only 10 ms, because getrusage samples based on ticks (= 100 Hz). The paper is really old (it mentions Linux 2.2.14 running on a Pentium 166 MHz with 96 MB of RAM).

Does time still use getrusage() and 100 Hz ticks, or is it more precise on modern systems?

The test machine is running Linux 2.6.32.

Edit: this is a slightly modified version of muru's code (which should also compile on old versions of GCC). Changing the value of the variable 'v' also changes the delay between measurements, in order to find the minimum granularity. A value around 500,000 should trigger a 1 ms change on a relatively recent CPU (first generation i5/i7 @ ~2.5 GHz).

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

/* Busy loop: adjust v to change the delay between measurements. */
void dosomething(){
    long v = 1000000;
    while (v > 0)
        v--;
}

int main()
{
    struct rusage r1, r2;
    long t1, t2, min, max;
    int i;

    printf("t1\tt2\tdiff\n");

    for (i = 0; i < 5; i++){
        getrusage(RUSAGE_SELF, &r1);
        dosomething();
        getrusage(RUSAGE_SELF, &r2);

        /* Total CPU time (user + system) in microseconds. */
        t1 = r1.ru_stime.tv_usec + r1.ru_stime.tv_sec*1000000 + r1.ru_utime.tv_usec + r1.ru_utime.tv_sec*1000000;
        t2 = r2.ru_stime.tv_usec + r2.ru_stime.tv_sec*1000000 + r2.ru_utime.tv_usec + r2.ru_utime.tv_sec*1000000;

        printf("%ld\t%ld\t%ld\n", t1, t2, t2 - t1);

        /* Initialize min/max on the first iteration, then track extremes. */
        if (i == 0 || t2 - t1 < min)
            min = t2 - t1;

        if (i == 0 || t2 - t1 > max)
            max = t2 - t1;

        dosomething();
    }

    printf("Min = %ldus Max = %ldus\n", min, max);

    return 0;
}

However, the precision is tied to the Linux version: on Linux 3 and above the precision is on the order of microseconds, while on Linux 2.6.32 it can be around 1 ms, possibly also depending on the specific distribution. I suppose this difference is related to the use of high-resolution timers (HRT) instead of ticks on recent Linux versions.

In any case, the maximum precision of the time output is 1 ms, on recent and not-so-recent machines alike.
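That 1 ms ceiling in the output matches bash's own formatting: the precision field of the TIMEFORMAT variable accepts at most 3 decimal places (larger values are silently clamped to 3), so the builtin can never print finer than milliseconds regardless of what getrusage returns. A quick check (not from the original post):

```shell
# %R is elapsed (real) time; the digit is the number of
# decimal places, which bash caps at 3.
TIMEFORMAT='%3R'; time sleep 0.1
TIMEFORMAT='%9R'; time sleep 0.1   # still prints only 3 decimal places
```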

Solution:

The bash builtin time still uses getrusage(2). On an Ubuntu 14.04 system:

$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ strace -o log bash -c 'time sleep 1'

real    0m1.018s
user    0m0.000s
sys 0m0.001s
$ tail log
getrusage(RUSAGE_SELF, {ru_utime={0, 0}, ru_stime={0, 3242}, ...}) = 0
getrusage(RUSAGE_CHILDREN, {ru_utime={0, 0}, ru_stime={0, 530}, ...}) = 0
write(2, "\n", 1)                       = 1
write(2, "real\t0m1.018s\n", 14)        = 14
write(2, "user\t0m0.000s\n", 14)        = 14
write(2, "sys\t0m0.001s\n", 13)         = 13
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
exit_group(0)                           = ?
+++ exited with 0 +++

As the strace output shows, it calls getrusage.

As for precision, the rusage struct that getrusage uses includes timeval objects, and timeval has microsecond precision. From the manpage of getrusage:

ru_utime
      This is the total amount of time spent executing in user mode,
      expressed in a timeval structure (seconds plus microseconds).

ru_stime
      This is the total amount of time spent executing in kernel
      mode, expressed in a timeval structure (seconds plus
      microseconds).

So I think it is better than 10 ms. Take the following example file:

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage then, now;

    /* Two back-to-back calls: the difference between them shows
     * the smallest step getrusage can report. */
    getrusage(RUSAGE_SELF, &then);
    getrusage(RUSAGE_SELF, &now);

    /* Total CPU time (user + system) in microseconds for each call. */
    printf("%ld %ld\n",
        then.ru_stime.tv_usec + then.ru_stime.tv_sec*1000000 + then.ru_utime.tv_usec + then.ru_utime.tv_sec*1000000,
        now.ru_stime.tv_usec + now.ru_stime.tv_sec*1000000 + now.ru_utime.tv_usec + now.ru_utime.tv_sec*1000000);

    return 0;
}

Now:

$ make test
cc     test.c   -o test
$ for ((i=0; i < 5; i++)); do ./test; done
447 448
356 356
348 348
347 347
349 350

The differences reported between consecutive calls to getrusage are 1 µs and 0 (the minimum). Since it does show 1 µs gaps, the tick must be at most 1 µs.

If it had a 10 ms tick, the differences would be zero, or at least 10000.
