Problem measuring the execution time of a code block N times
Posted by Nazgulled on Stack Overflow, 2010-05-12
EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.
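For reference, here is roughly the conversion I was missing, as a sketch (toSeconds is just a helper name I made up, nothing standard): struct timespec stores whole seconds in tv_sec and only the sub-second remainder, in nanoseconds, in tv_nsec, so both fields have to be combined to get the full duration:

#include <time.h>

/* Sketch: turn a struct timespec difference into seconds.
   tv_nsec holds only the sub-second remainder, so it is added
   to tv_sec rather than read on its own. */
double toSeconds(struct timespec t) {
    return (double)t.tv_sec + (double)t.tv_nsec / 1E9;
}

Printed with printf("%.5f\n", toSeconds(dTime)); that gives the execution time in seconds with 5 decimal places.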
Hi,
Let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime(), as it was recommended here in one of the comments.
My teacher recommends that we measure it N times, so we can report an average, standard deviation and median in the final report. He also recommends that we execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the precision required to measure a single run. So whether to execute myOperation only once or M times really depends on whether the operation itself takes long enough for the clock precision we are using.
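For example (made-up numbers, just to illustrate the idea): suppose the clock only resolves down to about 1 microsecond but a single execution takes around 100 ns. One timed run is then mostly clock error, but timing M = 10000 executions back to back gives a total near 1 ms, and total / M recovers the ~100 ns average with a much smaller relative error.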
I'm having trouble dealing with that M-times execution. Increasing M decreases the final average value a lot, which doesn't make sense to me. Think of it like this: on average you take 3 to 5 seconds to travel from point A to point B. Then you go from A to B and back to A 5 times (which makes 10 trips, since A to B is the same as B to A) and you measure that. Dividing by 10, the average you get is supposed to be the same average as a single trip from point A to B, which is 3 to 5 seconds.
This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average gets lower and lower each time; it makes no sense to me.
Enough theory, here's my code:
#include <stdio.h>
#include <time.h>

#define MEASUREMENTS 1
#define OPERATIONS 1

/* the real operation lives elsewhere; see the notes below */
void myOperation(void);

typedef struct timespec TimeClock;

/* end - start, borrowing one second when the nanosecond field
   would go negative */
TimeClock diffTimeClock(TimeClock start, TimeClock end) {
    TimeClock aux;

    if((end.tv_nsec - start.tv_nsec) < 0) {
        aux.tv_sec = end.tv_sec - start.tv_sec - 1;
        aux.tv_nsec = 1000000000L + end.tv_nsec - start.tv_nsec;
    } else {
        aux.tv_sec = end.tv_sec - start.tv_sec;
        aux.tv_nsec = end.tv_nsec - start.tv_nsec;
    }

    return aux;
}

int main(void) {
    TimeClock sTime, eTime, dTime;
    int i, j;

    for(i = 0; i < MEASUREMENTS; i++) {
        printf(" » MEASURE %02d\n", i+1);

        /* time the whole block of OPERATIONS executions at once */
        clock_gettime(CLOCK_REALTIME, &sTime);
        for(j = 0; j < OPERATIONS; j++) {
            myOperation();
        }
        clock_gettime(CLOCK_REALTIME, &eTime);

        dTime = diffTimeClock(sTime, eTime);

        /* only the nanosecond field of the difference is printed */
        printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
        printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
    }

    return 0;
}
Notes: The diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make sense to post my real functions, as I would have to paste long blocks of code; you can easily code a myOperation() with whatever you like if you wish to compile the code.
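For instance, a trivial stand-in, just enough to make the program compile and burn some measurable time (nothing like my real function):

/* Placeholder workload; the volatile keeps the compiler from
   optimizing the loop away. */
void myOperation(void) {
    volatile long k;
    for(k = 0; k < 1000000; k++);
}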
As you can see, OPERATIONS = 1, and the results are:
» MEASURE 01
- NSEC (TOTAL): 27456580
- NSEC (OP): 27456580
For OPERATIONS = 100, the results are:
» MEASURE 01
- NSEC (TOTAL): 218929736
- NSEC (OP): 2189297
For OPERATIONS = 1000, the results are:
» MEASURE 01
- NSEC (TOTAL): 862834890
- NSEC (OP): 862834
For OPERATIONS = 10000, the results are:
» MEASURE 01
- NSEC (TOTAL): 574133641
- NSEC (OP): 57413
Now, I'm not a math whiz, far from it actually, but this doesn't make any sense to me whatsoever. I've already talked about this with a friend who's on this project with me, and he can't understand the differences either. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time), no matter how many times I execute it.
You could tell me that it actually depends on the operation itself, on the data being read, that some data could already be in the cache, and so on, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. On each iteration, I destroy the data structure and initialize it again.
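Roughly, it has this shape (a simplified sketch, not my actual code; the file name and the insertValue() call are made up for illustration):

#include <stdio.h>
#include <string.h>

void myOperation(void) {
    char line[1024], *field;
    FILE *fp = fopen("data.csv", "r");   /* hypothetical file name */

    if(fp == NULL)
        return;

    /* read each line and split it on ';' */
    while(fgets(line, sizeof(line), fp) != NULL) {
        for(field = strtok(line, ";"); field != NULL;
                field = strtok(NULL, ";")) {
            /* insertValue(field); -- stand-in for inserting into
               the data structure */
        }
    }

    fclose(fp);
}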
Now that I think of it, I also think there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, since it actually took a couple of minutes, long enough that I couldn't stand looking at the screen waiting and went to eat something.