// Copyright (c) 2015-2017 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

/*
 * Simple benchmarking framework, loosely based on Google's micro-benchmarking
 * library: https://github.com/google/benchmark
 *
 * Why not use the Google Benchmark framework? Because adding Even More
 * Dependencies isn't worth it. If we get a dozen or three benchmarks and need
 * nanosecond-accurate timings of threaded code, then switching to the
 * full-blown Google Benchmark library should be considered.
 *
 * Each benchmark is repeated for a configurable number of evaluations
 * (-evals), with a runtime scaling factor (-scaling) and a name filter
 * (-filter); timing information is written to stdout either as console text
 * or as an HTML plot (-printer). See the option parsing in main() below.
 *
 * See src/bench/MilliSleep.cpp for a sanity-test benchmark that just
 * benchmarks 'sleep 100 milliseconds.'
 *
 * To compile and run benchmarks:
 *
 *   cd src; make bench
 *
 * Sample output:
 *
 *   Benchmark,count,min,max,average
 *   Sleep100ms,10,0.101854,0.105059,0.103881
 */
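
/*
 * For reference, a minimal sketch of what a benchmark definition looks like
 * in this framework. It assumes the BENCHMARK macro and benchmark::State
 * interface from bench/bench.h, and MilliSleep() from utiltime.h; the second
 * BENCHMARK argument is a rough iterations-per-second hint:
 *
 *   #include <bench/bench.h>
 *   #include <utiltime.h>
 *
 *   static void Sleep100ms(benchmark::State& state)
 *   {
 *       // The loop body is what gets timed, once per iteration.
 *       while (state.KeepRunning()) {
 *           MilliSleep(100);
 *       }
 *   }
 *
 *   BENCHMARK(Sleep100ms, 10);
 */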

#include <bench/bench.h>

#include <crypto/sha256.h>
#include <key.h>
#include <validation.h>
#include <util.h>
#include <random.h>

#include <boost/lexical_cast.hpp>

#include <memory>

static const int64_t DEFAULT_BENCH_EVALUATIONS = 5;
static const char* DEFAULT_BENCH_FILTER = ".*";
static const char* DEFAULT_BENCH_SCALING = "1.0";
static const char* DEFAULT_BENCH_PRINTER = "console";
static const char* DEFAULT_PLOT_PLOTLYURL = "https://cdn.plot.ly/plotly-latest.min.js";
static const int64_t DEFAULT_PLOT_WIDTH = 1024;
static const int64_t DEFAULT_PLOT_HEIGHT = 768;
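
/*
 * Example invocations, using the options defined above (assuming the built
 * binary is named bench_bitcoin; Sleep100ms is the sanity-test benchmark
 * mentioned in the header comment):
 *
 *   ./bench_bitcoin -list                        # enumerate benchmarks, don't run
 *   ./bench_bitcoin -filter=Sleep100ms           # run one benchmark by name
 *   ./bench_bitcoin -evals=10 -scaling=2.0       # more evaluations, longer runs
 *   ./bench_bitcoin -printer=plot > bench.html   # HTML graph via plotly.js
 */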

int
main(int argc, char** argv)
{
    gArgs.ParseParameters(argc, argv);

    if (HelpRequested(gArgs)) {
        std::cout << HelpMessageGroup(_("Options:"))
                  << HelpMessageOpt("-?", _("Print this help message and exit"))
                  << HelpMessageOpt("-list", _("List benchmarks without executing them. Can be combined with -scaling and -filter"))
                  << HelpMessageOpt("-evals=<n>", strprintf(_("Number of measurement evaluations to perform. (default: %u)"), DEFAULT_BENCH_EVALUATIONS))
                  << HelpMessageOpt("-filter=<regex>", strprintf(_("Regular expression filter to select benchmark by name (default: %s)"), DEFAULT_BENCH_FILTER))
                  << HelpMessageOpt("-scaling=<n>", strprintf(_("Scaling factor for benchmark's runtime (default: %s)"), DEFAULT_BENCH_SCALING))
                  << HelpMessageOpt("-printer=(console|plot)", strprintf(_("Choose printer format. console: print data to console. plot: Print results as HTML graph (default: %s)"), DEFAULT_BENCH_PRINTER))
                  << HelpMessageOpt("-plot-plotlyurl=<uri>", strprintf(_("URL to use for plotly.js (default: %s)"), DEFAULT_PLOT_PLOTLYURL))
                  << HelpMessageOpt("-plot-width=<x>", strprintf(_("Plot width in pixels (default: %u)"), DEFAULT_PLOT_WIDTH))
                  << HelpMessageOpt("-plot-height=<x>", strprintf(_("Plot height in pixels (default: %u)"), DEFAULT_PLOT_HEIGHT));

        return 0;
    }

    // Pick the fastest available SHA256 implementation and seed the RNG;
    // benchmarks that hash or sign depend on both.
    SHA256AutoDetect();
    RandomInit();

    // Benchmarks that sign or verify need an ECC context.
    ECC_Start();
    SetupEnvironment();

    int64_t evaluations = gArgs.GetArg("-evals", DEFAULT_BENCH_EVALUATIONS);
    std::string regex_filter = gArgs.GetArg("-filter", DEFAULT_BENCH_FILTER);
    std::string scaling_str = gArgs.GetArg("-scaling", DEFAULT_BENCH_SCALING);
    bool is_list_only = gArgs.GetBoolArg("-list", false);

    // Throws boost::bad_lexical_cast if -scaling is not a valid number.
    double scaling_factor = boost::lexical_cast<double>(scaling_str);

    std::unique_ptr<benchmark::Printer> printer(new benchmark::ConsolePrinter());
    std::string printer_arg = gArgs.GetArg("-printer", DEFAULT_BENCH_PRINTER);
    if ("plot" == printer_arg) {
        printer.reset(new benchmark::PlotlyPrinter(
            gArgs.GetArg("-plot-plotlyurl", DEFAULT_PLOT_PLOTLYURL),
            gArgs.GetArg("-plot-width", DEFAULT_PLOT_WIDTH),
            gArgs.GetArg("-plot-height", DEFAULT_PLOT_HEIGHT)));
    }

    // Run (or, with -list, just enumerate) every registered benchmark whose
    // name matches the filter, sending results to the chosen printer.
    benchmark::BenchRunner::RunAll(*printer, evaluations, scaling_factor, regex_filter, is_list_only);

    ECC_Stop();
}