// Copyright (c) 2015-2017 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <bench/bench.h>
#include <bench/perf.h>

#include <assert.h>
#include <iostream>
#include <iomanip>
#include <algorithm>
#include <regex>
#include <numeric>

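// Simple benchmarking framework, loosely based on Google's micro-benchmarking
// library (https://github.com/google/benchmark). Taking on Google Benchmark as
// a full dependency is deliberately avoided; it would only be worth revisiting
// if nanosecond-accurate timings of threaded code become necessary. Each
// registered benchmark is run for a configurable number of evaluations, each
// consisting of a fixed number of iterations, and the results are written to
// stdout by one of the Printer implementations below.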
void benchmark::ConsolePrinter::header()
{
    std::cout << "# Benchmark, evals, iterations, total, min, max, median" << std::endl;
}

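// Emits one comma-separated line per benchmark: the total measured time over
// all evaluations, followed by the minimum, maximum and median per-iteration
// time (in seconds) across those evaluations.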
void benchmark::ConsolePrinter::result(const State& state)
{
    auto results = state.m_elapsed_results;
    std::sort(results.begin(), results.end());

    double total = state.m_num_iters * std::accumulate(results.begin(), results.end(), 0.0);

    double front = 0;
    double back = 0;
    double median = 0;

    if (!results.empty()) {
        front = results.front();
        back = results.back();

        size_t mid = results.size() / 2;
        median = results[mid];
        if (0 == results.size() % 2) {
            // Even number of evaluations: the median is the mean of the two middle values.
            median = (results[mid - 1] + results[mid]) / 2;
        }
    }

    std::cout << std::setprecision(6);
    std::cout << state.m_name << ", " << state.m_num_evals << ", " << state.m_num_iters << ", " << total << ", " << front << ", " << back << ", " << median << std::endl;
}

void benchmark::ConsolePrinter::footer() {}

benchmark::PlotlyPrinter::PlotlyPrinter(std::string plotly_url, int64_t width, int64_t height)
    : m_plotly_url(plotly_url), m_width(width), m_height(height)
{
}

void benchmark::PlotlyPrinter::header()
{
    std::cout << "<html><head>"
              << "<script src=\"" << m_plotly_url << "\"></script>"
              << "</head><body><div id=\"myDiv\" style=\"width:" << m_width << "px; height:" << m_height << "px\"></div>"
              << "<script> var data = ["
              << std::endl;
}

void benchmark::PlotlyPrinter::result(const State& state)
{
    std::cout << "{ " << std::endl
              << " name: '" << state.m_name << "', " << std::endl
              << " y: [";

    const char* prefix = "";
    for (const auto& e : state.m_elapsed_results) {
        std::cout << prefix << std::setprecision(6) << e;
        prefix = ", ";
    }
    std::cout << "]," << std::endl
              << " boxpoints: 'all', jitter: 0.3, pointpos: 0, type: 'box',"
              << std::endl
              << "}," << std::endl;
}

void benchmark::PlotlyPrinter::footer()
{
    std::cout << "]; var layout = { showlegend: false, yaxis: { rangemode: 'tozero', autorange: true } };"
              << "Plotly.newPlot('myDiv', data, layout);"
              << "</script></body></html>";
}

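// The Plotly printer above writes a self-contained HTML document to stdout:
// one box plot per benchmark, built from the per-iteration times in
// m_elapsed_results. Redirecting the output to a file and opening it in a
// browser is enough to view the plots (plotly.js is loaded from m_plotly_url).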
|
2017-10-25 22:38:24 +02:00
|
|
|
|
2017-10-17 16:48:02 +02:00
|
|
|
|
|
|
|
benchmark::BenchRunner::BenchmarkMap& benchmark::BenchRunner::benchmarks()
|
|
|
|
{
|
|
|
|
static std::map<std::string, Bench> benchmarks_map;
|
|
|
|
return benchmarks_map;
|
|
|
|
}
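// A function-local static is used for the registry above (rather than a
// namespace-scope map) so that BenchRunner constructors running during static
// initialization in other translation units always see a fully constructed map.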
benchmark::BenchRunner::BenchRunner(std::string name, benchmark::BenchFunction func, uint64_t num_iters_for_one_second)
{
    benchmarks().insert(std::make_pair(name, Bench{func, num_iters_for_one_second}));
}

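// Registration sketch (illustrative; it assumes the BENCHMARK macro and
// State::KeepRunning() declared in bench.h, and MilliSleep() from utiltime.h):
//
//   static void Sleep100ms(benchmark::State& state)
//   {
//       while (state.KeepRunning()) {
//           MilliSleep(100);
//       }
//   }
//   BENCHMARK(Sleep100ms, 10); // expect roughly 10 iterations per second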
void benchmark::BenchRunner::RunAll(Printer& printer, uint64_t num_evals, double scaling, const std::string& filter, bool is_list_only)
{
    perf_init();
    if (!std::ratio_less_equal<benchmark::clock::period, std::micro>::value) {
        std::cerr << "WARNING: Clock precision is worse than microsecond - benchmarks may be less accurate!\n";
    }

    std::regex reFilter(filter);
    std::smatch baseMatch;

    printer.header();

    for (const auto& p : benchmarks()) {
        if (!std::regex_match(p.first, baseMatch, reFilter)) {
            continue;
        }

        uint64_t num_iters = static_cast<uint64_t>(p.second.num_iters_for_one_second * scaling);
        if (0 == num_iters) {
            num_iters = 1;
        }
        State state(p.first, num_evals, num_iters, printer);
        if (!is_list_only) {
            p.second.func(state);
        }
        printer.result(state);
    }

    printer.footer();

    perf_fini();
}

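// Driver sketch (illustrative values; bench_bitcoin wires these up from its
// command-line options):
//
//   benchmark::ConsolePrinter printer;
//   benchmark::BenchRunner::RunAll(printer, /*num_evals=*/5, /*scaling=*/1.0,
//                                  /*filter=*/".*", /*is_list_only=*/false);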
bool benchmark::State::UpdateTimer(const benchmark::time_point current_time)
{
    if (m_start_time != time_point()) {
        std::chrono::duration<double> diff = current_time - m_start_time;
        m_elapsed_results.push_back(diff.count() / m_num_iters);

        if (m_elapsed_results.size() == m_num_evals) {
            return false;
        }
    }

    m_num_iters_left = m_num_iters - 1;
    return true;
}
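// UpdateTimer() is driven from State::KeepRunning() in bench.h: KeepRunning()
// counts m_num_iters_left down once per loop iteration and, when it reaches
// zero, calls UpdateTimer() with the current time. UpdateTimer() records the
// average per-iteration time for the completed evaluation, returns false once
// m_num_evals evaluations have been collected, and otherwise re-arms the
// counter for the next evaluation. A rough sketch of the calling side
// (simplified from bench.h):
//
//   bool KeepRunning()
//   {
//       if (m_num_iters_left--) {
//           return true;
//       }
//       bool result = UpdateTimer(clock::now());
//       m_start_time = clock::now(); // exclude UpdateTimer's own cost
//       return result;
//   }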