Is running "milli"-benchmarks a good idea?
- by Konstantin Weitz
I just came across the Caliper project and it looks very nice.
Reading the introduction to microbenchmarks, one gets the impression that the developers would not recommend using the framework for benchmarks that take longer than a second or so. Looking at the code, it seems a RuntimeOutOfRangeException is actually thrown if a scenario takes longer than 10s to execute.
Could you explain to me what the problems are with running larger benchmarks?
My motivation for using Caliper is to compare two join-algorithm implementations. They will definitely run for quite some time and do some disk I/O, yet benchmarking them inside the full database would make the comparison hard, because configuring the algorithms and visualizing the results would be a pain.
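For concreteness, here is a minimal sketch of the kind of benchmark I have in mind, assuming Caliper's old SimpleBenchmark-style API (SimpleBenchmark, @Param, Runner). The two join bodies and the class and method names are trivial placeholders standing in for my real implementations, not the actual algorithms.

```java
import com.google.caliper.Param;
import com.google.caliper.Runner;
import com.google.caliper.SimpleBenchmark;

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class JoinBenchmark extends SimpleBenchmark {

    // Caliper runs one scenario per parameter value.
    @Param({"1000", "10000"}) int rows;

    private int[] left;
    private int[] right;

    @Override protected void setUp() throws Exception {
        // Generate the two input relations; the real benchmark would read them from disk.
        Random rnd = new Random(42);
        left = new int[rows];
        right = new int[rows];
        for (int i = 0; i < rows; i++) {
            left[i] = rnd.nextInt(rows);
            right[i] = rnd.nextInt(rows);
        }
    }

    // Placeholder for the first join implementation: build a hash set on one side, probe with the other.
    public int timeHashJoin(int reps) {
        int matches = 0;
        for (int r = 0; r < reps; r++) {
            Set<Integer> keys = new HashSet<Integer>();
            for (int k : left) keys.add(k);
            for (int k : right) if (keys.contains(k)) matches++;
        }
        return matches; // returning the result keeps the work from being optimized away
    }

    // Placeholder for the second join implementation: naive nested-loop comparison.
    public int timeNestedLoopJoin(int reps) {
        int matches = 0;
        for (int r = 0; r < reps; r++) {
            for (int a : left) {
                for (int b : right) {
                    if (a == b) matches++;
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        Runner.main(JoinBenchmark.class, args);
    }
}
```

With the real join implementations plugged in, each of these scenarios would run well past the apparent 10s limit, which is what prompted the question above.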