// Copyright (c) 2015-2017 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_BENCH_BENCH_H
#define BITCOIN_BENCH_BENCH_H
#include <cstdint>
#include <functional>
#include <limits>
#include <map>
#include <string>
#include <vector>
#include <chrono>
#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/stringize.hpp>
// Simple micro-benchmarking framework; API mostly matches a subset of the Google Benchmark
// framework (see https://github.com/google/benchmark)
// Why not use the Google Benchmark framework? Because adding Yet Another Dependency
// (that uses cmake as its build system and has lots of features we don't need) isn't
// worth it.
/*
 * Usage:

static void CODE_TO_TIME(benchmark::State& state)
{
    ... do any setup needed...
    while (state.KeepRunning()) {
       ... do stuff you want to time...
    }
    ... do any cleanup needed...
}

// default to running benchmark for 5000 iterations
BENCHMARK(CODE_TO_TIME, 5000);
 */
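
/* A concrete (illustrative) example, not part of the framework itself: a trivial
 * sanity-check benchmark that just times a 100-millisecond sleep, using only the
 * standard <thread> and <chrono> headers. With num_iters_for_one_second set to 10,
 * one evaluation takes roughly one second.

static void Sleep100ms(benchmark::State& state)
{
    while (state.KeepRunning()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

BENCHMARK(Sleep100ms, 10);
 */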
namespace benchmark {
// If high_resolution_clock is steady, prefer it; otherwise fall back to steady_clock.
struct best_clock {
    using hi_res_clock = std::chrono::high_resolution_clock;
    using steady_clock = std::chrono::steady_clock;
    using type = std::conditional<hi_res_clock::is_steady, hi_res_clock, steady_clock>::type;
};
using clock = best_clock::type;
using time_point = clock::time_point;
using duration = clock::duration;
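
// Illustrative sanity check (an addition, not in the original header): whichever
// branch std::conditional selects above, the chosen clock is steady, because
// steady_clock::is_steady is always true.
static_assert(clock::is_steady, "the benchmark clock should be steady");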
class Printer;
class State
{
public:
    std::string m_name;
    uint64_t m_num_iters_left;
    const uint64_t m_num_iters;
    const uint64_t m_num_evals;
    std::vector<double> m_elapsed_results;
    time_point m_start_time;

    bool UpdateTimer(time_point finish_time);

    State(std::string name, uint64_t num_evals, double num_iters, Printer& printer) : m_name(name), m_num_iters_left(0), m_num_iters(num_iters), m_num_evals(num_evals)
    {
    }
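
    // KeepRunning() is on the measurement hot path: while iterations remain in the
    // current evaluation it only decrements a counter. Once the batch is exhausted,
    // UpdateTimer() records the elapsed time and reports whether another evaluation
    // should run; the timer is then restarted so that UpdateTimer's own runtime is
    // not measured.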
    inline bool KeepRunning()
    {
        if (m_num_iters_left--) {
            return true;
        }

        bool result = UpdateTimer(clock::now());
        // measure again so runtime of UpdateTimer is not included
        m_start_time = clock::now();
        return result;
    }
};
typedef std::function<void(State&)> BenchFunction;
class BenchRunner
{
    struct Bench {
        BenchFunction func;
        uint64_t num_iters_for_one_second;
    };
    typedef std::map<std::string, Bench> BenchmarkMap;
    static BenchmarkMap& benchmarks();

public:
    BenchRunner(std::string name, BenchFunction func, uint64_t num_iters_for_one_second);

    static void RunAll(Printer& printer, uint64_t num_evals, double scaling, const std::string& filter, bool is_list_only);
};
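
// Because BenchFunction is a std::function, a benchmark can also be registered
// without the BENCHMARK macro, e.g. with a lambda (hypothetical example):
//
//   static benchmark::BenchRunner bench_lambda_example("LambdaExample",
//       [](benchmark::State& state) { while (state.KeepRunning()) { /* work */ } }, 1000);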
// interface to output benchmark results.
class Printer
{
public:
    virtual ~Printer() {}
    virtual void header() = 0;
    virtual void result(const State& state) = 0;
    virtual void footer() = 0;
};
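
// Illustrative sketch (not part of this header): a custom printer could, for
// example, emit one CSV line per benchmark using the public State members above
// (a real implementation would also need <iostream>):
//
//   class CsvPrinter : public Printer
//   {
//   public:
//       void header() override { std::cout << "Benchmark,evals,iterations\n"; }
//       void result(const State& state) override
//       {
//           std::cout << state.m_name << "," << state.m_num_evals << "," << state.m_num_iters << "\n";
//       }
//       void footer() override {}
//   };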
// default printer to console, shows min, max, median.
class ConsolePrinter : public Printer
{
public:
    void header() override;
    void result(const State& state) override;
    void footer() override;
};
// creates box plot with plotly.js
class PlotlyPrinter : public Printer
{
public:
    PlotlyPrinter(std::string plotly_url, int64_t width, int64_t height);
    void header() override;
    void result(const State& state) override;
    void footer() override;

private:
    std::string m_plotly_url;
    int64_t m_width;
    int64_t m_height;
};
}
// BENCHMARK(foo, num_iters_for_one_second) expands to (for a BENCHMARK on line 11):
//   benchmark::BenchRunner bench_11foo("foo", foo, (num_iters_for_one_second));
// Choose a num_iters_for_one_second that takes roughly 1 second. The goal is that all benchmarks
// should take approximately the same time, and the scaling factor can then be used to make the
// total runtime appropriate for your system.
#define BENCHMARK(n, num_iters_for_one_second) \
    benchmark::BenchRunner BOOST_PP_CAT(bench_, BOOST_PP_CAT(__LINE__, n))(BOOST_PP_STRINGIZE(n), n, (num_iters_for_one_second));
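
// For example (hypothetical numbers): if one iteration of MyFastOperation takes
// about 100 microseconds, registering it with 10 * 1000 iterations makes one
// evaluation take roughly one second:
//
//   BENCHMARK(MyFastOperation, 10 * 1000);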
#endif // BITCOIN_BENCH_BENCH_H