Previously the benchmark code used a modulo (%) with a non-constant
divisor in the inner loop, which requires an integer division. This
is quite slow on many processors, especially ones like ARM that lack
a hardware divide instruction. Even on fairly recent x86_64 parts
like Haswell an integer division can take something like 100 cycles,
making it comparable to the runtime of SipHash itself.
This change avoids the division by using bitmasking instead. This was
especially easy since the count is only ever increased by doubling,
so it is always a power of two.
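As a sketch (the helper name here is illustrative, not the actual
bench code), the substitution looks like:

    #include <cassert>
    #include <cstdint>

    // When count is a power of two (it only ever grows by doubling),
    // "i % count" can be replaced by a bitwise AND with count - 1,
    // which avoids the hardware divide entirely.
    static inline uint64_t wrap_index(uint64_t i, uint64_t count)
    {
        assert((count & (count - 1)) == 0); // count must be a power of two
        return i & (count - 1);             // equivalent to i % count here
    }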
This change also restarts the timing when the measured execution time
is very low. This avoids minimum times of zero in cases where a
single run ends up below the timer resolution, and it reduces the
impact of the measurement overhead on the final result.
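Roughly, and with invented names and thresholds rather than the real
benchmark framework, the restart logic amounts to:

    #include <chrono>
    #include <cstdint>

    // Illustrative only: double the iteration count and restart the
    // measurement whenever a run finishes faster than some minimum
    // duration, so a single run can never fall below the timer
    // resolution and the fixed overhead is amortized over more
    // iterations.
    template <typename F>
    double measure_seconds_per_iteration(F&& body)
    {
        using clock = std::chrono::steady_clock;
        const auto min_duration = std::chrono::milliseconds(10); // assumed threshold

        uint64_t count = 1;
        for (;;) {
            const auto start = clock::now();
            for (uint64_t i = 0; i < count; ++i) body();
            const auto elapsed = clock::now() - start;
            if (elapsed >= min_duration) {
                return std::chrono::duration<double>(elapsed).count() / count;
            }
            count *= 2; // too fast to time reliably; double and restart
        }
    }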
The formatting of the printed results is changed to avoid scientific
notation and make the output more machine readable (in particular,
gnuplot croaks on the non-fixed-point values, and they don't sort
correctly).
This also hoists all of the floating-point divisions out of the
semi-hot path, because it was easy to do so.
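Schematically (hypothetical names, simplified output), the printing
change amounts to:

    #include <cstdint>
    #include <cstdio>

    // Illustrative: print with %f (fixed point) rather than %g/%e so
    // the output stays machine readable for tools like gnuplot, and do
    // the per-iteration division once, outside the measurement loop.
    void print_result(const char* name, double total_seconds, uint64_t count)
    {
        const double seconds_per_iteration = total_seconds / count;
        std::printf("%s,%.8f\n", name, seconds_per_iteration);
    }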
It might be prudent to break out the critical test into a macro
just to guarantee that it gets inlined. It might also make sense
to just save out the intermediate counts and times and get the
floating point completely out of the timing loop (because e.g.
on hardware without a fast FPU, like some ARM parts, it will
still be slow enough to distort the results). I haven't done
either of these in this commit.
Fixing formatting
Adding test case into automatically generated test case set
Clean up commits
Removing extra whitespace from end of line
Removing extra whitespace on macro line
Previously these functions would infinitely loop if sync failed;
now they have a default timeout of 60 seconds, after which an
AssertionError is raised.
sync_blocks() has also been improved and now compares the tip
hash of each node, rather than just using block count.
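The helpers themselves live in the Python test framework; purely as
an illustration of the pattern (all names invented, written here in
C++ only for the sketch), the behavior is roughly:

    #include <algorithm>
    #include <chrono>
    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <thread>
    #include <vector>

    // Poll until every node reports the same tip hash, or give up
    // after a timeout (60 seconds by default) instead of spinning
    // forever.
    void sync_tips(const std::function<std::vector<std::string>()>& get_tips,
                   std::chrono::seconds timeout = std::chrono::seconds(60))
    {
        const auto deadline = std::chrono::steady_clock::now() + timeout;
        while (std::chrono::steady_clock::now() < deadline) {
            const std::vector<std::string> tips = get_tips();
            if (!tips.empty() &&
                std::all_of(tips.begin(), tips.end(),
                            [&](const std::string& t) { return t == tips.front(); })) {
                return; // every node agrees on the tip
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(250));
        }
        throw std::runtime_error("sync timed out"); // AssertionError in the Python helpers
    }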
If a node is offline, failed outbound connection attempts will crank
up the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent connection
attempts; there should be no real effect from this unless their
addnode configuration is later removed.
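In rough pseudocode (illustrative names, not the exact identifiers),
the accounting condition becomes:

    // Illustrative sketch: only record a failed attempt against
    // addrman when we currently hold outbound connections to at least
    // two distinct netgroups (so the network is probably reachable),
    // and never for -connect or addnode peers.
    bool ShouldCountFailedAttempt(int outbound_netgroups, bool is_manual_or_addnode)
    {
        if (is_manual_or_addnode) return false; // don't penalize manual peers
        return outbound_netgroups >= 2;         // we are plausibly online
    }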
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
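Conceptually (invented names and a simplification, not the actual
net_processing change), the lookup becomes:

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    struct Tx {}; // stand-in for the real transaction type
    using TxRef = std::shared_ptr<const Tx>;

    // Illustrative only: answer GETDATA from the set of transactions
    // we have actually announced, and consult the mempool only for
    // peers that issued a "mempool" request, the one case that needs
    // it.
    TxRef FindTxForGetData(const std::map<std::string, TxRef>& relayed,
                           bool peer_requested_mempool,
                           const std::function<TxRef(const std::string&)>& mempool_lookup,
                           const std::string& txid)
    {
        auto it = relayed.find(txid);
        if (it != relayed.end()) return it->second;
        if (peer_requested_mempool) return mempool_lookup(txid);
        return nullptr; // not announced by us and no mempool request: don't serve it
    }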
Any attacker who managed to make an evil commit that changed something in the
contrib/verify-commits/ directory could just as easily remove the warning
and/or modify it to not display the evil commits; telling the user to check
those commits specifically misleads them into checking just those commits
rather than the script itself.
Now that caches are distinct (https://github.com/travis-ci/travis-ci/issues/4393),
we can use the Travis minimal image.
The minimal image should take less time to set up and lead to quicker builds.
Also addressed while I'm in here:
- No need to delete the broken google-chrome repo in the minimal image
- Set the hostname to work around an openjdk bug
- Remove the non-functional apt-cache option
- Remove useless message at completion
- Install jre where the java tests are run