Why does a timerfd periodic Linux timer expire a little earlier than expected?
I am using a Linux periodic timer, specifically timerfd, which I set to expire periodically, e.g., every 200 ms.
However, I noticed that the timer sometimes seems to expire a little earlier than the timeout I set.
In particular, I am performing a simple test with the following C code:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <poll.h>
#include <unistd.h>
#include <inttypes.h>
#include <sys/timerfd.h>
#include <sys/time.h>

#define NO_FLAGS_TIMER 0
#define NUM_TESTS 10

// Function to compute the difference between two struct timeval.
// The operation which is performed is out = out - in
// Note: *in is modified too, as a side effect of the normalization.
static inline int timevalSub(struct timeval *in, struct timeval *out) {
    time_t original_out_tv_sec = out->tv_sec;

    if (out->tv_usec < in->tv_usec) {
        int nsec = (in->tv_usec - out->tv_usec) / 1000000 + 1;
        in->tv_usec -= 1000000 * nsec;
        in->tv_sec += nsec;
    }
    if (out->tv_usec - in->tv_usec > 1000000) {
        int nsec = (out->tv_usec - in->tv_usec) / 1000000;
        in->tv_usec += 1000000 * nsec;
        in->tv_sec -= nsec;
    }
    out->tv_sec -= in->tv_sec;
    out->tv_usec -= in->tv_usec;

    // '1' is returned when the result is negative
    return original_out_tv_sec < in->tv_sec;
}

// Function to create a timerfd and arm it with a periodic timeout of 'time_ms', in milliseconds
int timerCreateAndSet(struct pollfd *timerMon, int *clockFd, uint64_t time_ms) {
    struct itimerspec new_value;
    time_t sec;
    long nanosec;

    // Create a timer based on the monotonic (non-settable) clock
    *clockFd = timerfd_create(CLOCK_MONOTONIC, NO_FLAGS_TIMER);
    if (*clockFd == -1) {
        return -1;
    }

    // Convert time, in ms, to seconds and nanoseconds
    sec = (time_t) (time_ms / 1000);
    nanosec = 1000000 * time_ms - sec * 1000000000;
    new_value.it_value.tv_nsec = nanosec;
    new_value.it_value.tv_sec = sec;
    new_value.it_interval.tv_nsec = nanosec;
    new_value.it_interval.tv_sec = sec;

    // Fill pollfd structure
    timerMon->fd = *clockFd;
    timerMon->revents = 0;
    timerMon->events = POLLIN;

    // Start timer
    if (timerfd_settime(*clockFd, NO_FLAGS_TIMER, &new_value, NULL) == -1) {
        close(*clockFd);
        return -2;
    }

    return 0;
}

int main(void) {
    struct timeval tv, tv_prev, tv_curr;
    int clockFd;
    struct pollfd timerMon;
    unsigned long long junk;

    gettimeofday(&tv, NULL);
    timerCreateAndSet(&timerMon, &clockFd, 200); // 200 ms periodic expiration time
    tv_prev = tv;

    for (int a = 0; a < NUM_TESTS; a++) {
        // No error check on poll() just for the sake of brevity...
        // The final code should contain a check on the return value of poll()
        poll(&timerMon, 1, -1);
        (void) read(clockFd, &junk, sizeof(junk));
        gettimeofday(&tv, NULL);
        tv_curr = tv;
        if (timevalSub(&tv_prev, &tv_curr)) {
            fprintf(stdout, "Error! Negative timestamps. The test will be interrupted now.\n");
            break;
        }
        // Note: tv_usec is printed without zero-padding, so e.g. .033690
        // shows up as ".33690" in the output below
        printf("Iteration: %d - curr. timestamp: %lu.%lu - elapsed after %f ms - real est. delta_t %f ms\n", a, tv.tv_sec, tv.tv_usec, 200.0,
            (tv_curr.tv_sec * 1000000 + tv_curr.tv_usec) / 1000.0);
        tv_prev = tv;
    }

    return 0;
}
After compiling it with gcc:
gcc -o timertest_stackoverflow timertest_stackoverflow.c
I get the following output:
Iteration: 0 - curr. timestamp: 1583491102.833748 - elapsed after 200.000000 ms - real est. delta_t 200.112000 ms
Iteration: 1 - curr. timestamp: 1583491103.33690 - elapsed after 200.000000 ms - real est. delta_t 199.942000 ms
Iteration: 2 - curr. timestamp: 1583491103.233687 - elapsed after 200.000000 ms - real est. delta_t 199.997000 ms
Iteration: 3 - curr. timestamp: 1583491103.433737 - elapsed after 200.000000 ms - real est. delta_t 200.050000 ms
Iteration: 4 - curr. timestamp: 1583491103.633737 - elapsed after 200.000000 ms - real est. delta_t 200.000000 ms
Iteration: 5 - curr. timestamp: 1583491103.833701 - elapsed after 200.000000 ms - real est. delta_t 199.964000 ms
Iteration: 6 - curr. timestamp: 1583491104.33686 - elapsed after 200.000000 ms - real est. delta_t 199.985000 ms
Iteration: 7 - curr. timestamp: 1583491104.233745 - elapsed after 200.000000 ms - real est. delta_t 200.059000 ms
Iteration: 8 - curr. timestamp: 1583491104.433737 - elapsed after 200.000000 ms - real est. delta_t 199.992000 ms
Iteration: 9 - curr. timestamp: 1583491104.633736 - elapsed after 200.000000 ms - real est. delta_t 199.999000 ms
I would expect the real time difference estimated through gettimeofday() to never be less than 200 ms (also because of the time needed to clear the event with read()), yet there are also some values slightly less than 200 ms, such as 199.942000 ms.
Do you have any idea why I am observing this behavior?
Could it be because I am using gettimeofday(), and sometimes tv_prev is taken a little late (due to the calls to read() or to gettimeofday() itself) while tv_curr, in the next iteration, is not, making the estimated time less than 200 ms even though the timer is actually expiring exactly every 200 ms?
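As a possible way to double-check this, here is a minimal, self-contained sketch (an illustration, not part of the test above): it timestamps with clock_gettime(CLOCK_MONOTONIC) right after poll() returns, so the deltas cannot be affected by wall-clock adjustments, and it inspects the expiration count that read() returns for a timerfd, which stays at 1 as long as no period is missed:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <time.h>
#include <poll.h>
#include <unistd.h>
#include <sys/timerfd.h>

int main(void) {
    // Same 200 ms periodic timer as in the test above
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 200000000L },
        .it_interval = { .tv_sec = 0, .tv_nsec = 200000000L }
    };
    struct timespec prev, curr;
    uint64_t expirations;
    int clockFd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (clockFd == -1 || timerfd_settime(clockFd, 0, &its, NULL) == -1) {
        perror("timerfd");
        return 1;
    }
    struct pollfd timerMon = { .fd = clockFd, .events = POLLIN, .revents = 0 };
    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (int a = 0; a < 10; a++) {
        poll(&timerMon, 1, -1);
        // read() on a timerfd returns the number of expirations since the
        // last read(); a value greater than 1 means a full period was missed
        if (read(clockFd, &expirations, sizeof(expirations)) != sizeof(expirations)) {
            perror("read");
            break;
        }
        clock_gettime(CLOCK_MONOTONIC, &curr);
        double delta_ms = (curr.tv_sec - prev.tv_sec) * 1e3 +
                          (curr.tv_nsec - prev.tv_nsec) / 1e6;
        printf("expirations: %" PRIu64 " - delta_t: %f ms\n", expirations, delta_ms);
        prev = curr;
    }
    close(clockFd);
    return 0;
}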
Thank you very much.
This is related to process scheduling. The timer is indeed very precise and signals a timeout every 200 ms, but your program does not register that signal until it actually gets control back. That means the timestamp you obtain from the gettimeofday() call can refer to some later point in time. When you subtract such delayed values, the result can come out either greater or smaller than 200 ms.
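As an illustrative calculation (the numbers are chosen to match the magnitude of the measurements above, they are not taken from them): if expiration k is registered 0.150 ms after the timer actually fired, and expiration k+1 is registered only 0.092 ms after it, the measured difference is 200 + 0.092 - 0.150 = 199.942 ms, even though the two expirations were exactly 200 ms apart.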
How can you estimate the time between the timer's actual signal and your call to gettimeofday()? It is related to the process scheduling timeslice. This quantum has a default value set by RR_TIMESLICE in include/linux/sched/rt.h. You can check it on your system like this:
#include <sched.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    struct timespec tp;

    if (sched_rr_get_interval(getpid(), &tp)) {
        perror("Cannot get scheduler quantum");
    } else {
        printf("Scheduler quantum is %f ms\n", (tp.tv_sec * 1e9 + tp.tv_nsec) / 1e6);
    }

    return 0;
}
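It can be compiled and run just like the test program above (the file name is hypothetical):

gcc -o quantum quantum.c
./quantum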
The output on my system:
Scheduler quantum is 4.000000 ms
Therefore, you may have to wait for another process's scheduler timeslice to finish before you get control back and are able to read the current time. On my system this can make the resulting delays deviate by roughly ±4 ms from the expected 200 ms.
After running almost 7000 iterations, I got the following distribution of the registered waiting times:
[histogram of the measured intervals, centered around 200 ms]
As you can see, most of the times lie within a ±2 ms interval around the expected 200 ms. The minimum and maximum times across all iterations were 189.992 ms and 210.227 ms, respectively:
~$ sort times.txt | head
189.992000
190.092000
190.720000
194.402000
195.250000
195.746000
195.847000
195.964000
196.256000
196.420000
~$ sort times.txt | tail
203.746000
203.824000
203.900000
204.026000
204.273000
205.625000
205.634000
208.974000
210.202000
210.227000
~$
Deviations larger than 4 ms are caused by the rare occasions on which the program had to wait for several quanta, not just one.
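For reference, here is a minimal sketch of how such a distribution could be collected (an assumption about the measurement setup, reusing the 200 ms timerfd from the question): it writes one measured delta per line to times.txt, ready for inspection with sort as shown above:

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <poll.h>
#include <unistd.h>
#include <sys/timerfd.h>

int main(void) {
    // 200 ms periodic timer, as in the question
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 200000000L },
        .it_interval = { .tv_sec = 0, .tv_nsec = 200000000L }
    };
    struct timespec prev, curr;
    uint64_t expirations;
    int clockFd = timerfd_create(CLOCK_MONOTONIC, 0);
    FILE *f = fopen("times.txt", "w");
    if (clockFd == -1 || f == NULL || timerfd_settime(clockFd, 0, &its, NULL) == -1) {
        perror("setup");
        return 1;
    }
    struct pollfd timerMon = { .fd = clockFd, .events = POLLIN, .revents = 0 };
    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (int a = 0; a < 7000; a++) {
        poll(&timerMon, 1, -1);
        (void) read(clockFd, &expirations, sizeof(expirations));
        clock_gettime(CLOCK_MONOTONIC, &curr);
        // One inter-expiration delta, in milliseconds, per line
        fprintf(f, "%f\n", (curr.tv_sec - prev.tv_sec) * 1e3 +
                           (curr.tv_nsec - prev.tv_nsec) / 1e6);
        prev = curr;
    }
    fclose(f);
    close(clockFd);
    return 0;
}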