This modifies the .travis.yml file to install a release version of glide
instead of using its latest master branch. This will prevent upstream
changes from inadvertently breaking the CI builds.
This contains a bit of cleanup and additional logic to improve the
recently-added ability to specify additional checkpoints via the
--addcheckpoint option.
In particular:
- Improve error messages in the checkpoint parsing
- Correct the mergeCheckpoints function to weed out duplicate height
checkpoints while using the most-recently provided one, as described by
its comment (see the sketch after this list)
- Add an assertion to blockchain.New that the provided checkpoints are
sorted as required
- Keep comments to 80 columns and use two spaces after periods in them to
be consistent with the rest of the code base
- Make the entry in doc.go match the actual btcd -h output
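As an illustration, the deduplication could be sketched roughly as
follows (a simplified sketch, not the exact btcd code):

    package checkpoints

    import (
        "sort"

        "github.com/btcsuite/btcd/chaincfg"
    )

    // mergeCheckpoints merges the default and additionally provided
    // checkpoints such that later entries override earlier ones at the
    // same height, returning the result sorted by height as required.
    func mergeCheckpoints(defaults, additional []chaincfg.Checkpoint) []chaincfg.Checkpoint {
        byHeight := make(map[int32]chaincfg.Checkpoint)
        for _, cp := range defaults {
            byHeight[cp.Height] = cp
        }
        for _, cp := range additional {
            byHeight[cp.Height] = cp
        }

        merged := make([]chaincfg.Checkpoint, 0, len(byHeight))
        for _, cp := range byHeight {
            merged = append(merged, cp)
        }
        sort.Slice(merged, func(i, j int) bool {
            return merged[i].Height < merged[j].Height
        })
        return merged
    }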
The most recent version of glide creates a working directory named
.glide that includes .go files and thus must be excluded from the find
operation that generates merged test coverage of all packages.
A DNS lookup was being attempted on onion addresses, causing
connections to fail. This has been fixed by introducing the type
onionAddr (which implements the net.Addr interface) and passing it to
btcdDial.
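A minimal sketch of such a type, assuming only what the standard
library's net.Addr interface requires:

    package onion

    import "net"

    // onionAddr carries a .onion address so it can be passed around as
    // a net.Addr without triggering a DNS lookup.
    type onionAddr struct {
        addr string
    }

    // String returns the onion address.  Part of the net.Addr interface.
    func (oa *onionAddr) String() string {
        return oa.addr
    }

    // Network returns the network type.  Part of the net.Addr interface.
    func (oa *onionAddr) Network() string {
        return "onion"
    }

    // Compile-time assertion that onionAddr satisfies net.Addr.
    var _ net.Addr = (*onionAddr)(nil)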
Also, the following onion-related fixes have been made:
* getaddednodeinfo - updated to handle onion addresses
* TorLookupIP - fixed err being shadowed
* newServer - renamed tcpAddr to netAddr
* addrStringToNetAddr - skip if host is already an IP address
* addrStringToNetAddr - error if tor is disabled
* getaddednodeinfo - check if host is already an IP address
ScriptVerifyNullFail defines that signatures must be empty if a
CHECKSIG or CHECKMULTISIG operation fails.
This commit also enables ScriptVerifyNullFail at the mempool policy
level.
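Conceptually, the new rule amounts to the following (a simplified
sketch, not the engine's actual implementation):

    package nullfail

    import "errors"

    // verifyNullFail sketches the NULLFAIL rule: when a signature check
    // fails, the signature element itself must be empty.
    func verifyNullFail(sig []byte, sigValid bool) error {
        if !sigValid && len(sig) > 0 {
            return errors.New("signature must be empty if verification fails")
        }
        return nil
    }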
This updates the data-driven transaction script tests to use the most
recent format and test data as implemented by Core so the test data can
more easily be updated and help prove cross-compatibility correctness.
In particular, the new format combines the previously separate valid and
invalid test data files into a single file and adds a field for the
expected result. This is a nice improvement since it means tests can
now ensure script failures are due to a specific expected reason as
opposed to only generically detecting failure as the previous format
required.
The btcd script engine typically returns more fine grained errors than
the test data expects, so the test adapter handles this by allowing
expected errors in the test data to be mapped to multiple txscript
errors.
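For example, the mapping might be sketched as follows (a hypothetical
illustration; the test-data names and the chosen codes are examples
only):

    package adapter

    import "github.com/btcsuite/btcd/txscript"

    // expectedErrCodes maps a coarse expected-error name from the test
    // data to the set of finer-grained script error codes that satisfy
    // it.  The entries shown are illustrative.
    var expectedErrCodes = map[string][]txscript.ErrorCode{
        "EVAL_FALSE": {txscript.ErrEvalFalse, txscript.ErrEmptyStack},
        "NULLFAIL":   {txscript.ErrNullFail},
    }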
It should also be noted that the tests related to segwit have been
stripped from the data since the segwit PR has not landed in master yet;
however, the test adapter does recognize the new ability for optional
segwit data to be supplied, though it will need to properly construct
the transaction using that data when the time comes.
This converts the majority of script errors from generic errors created
via errors.New and fmt.Errorf to use a concrete type that implements the
error interface with an error code and description.
This allows callers to programmatically detect the type of error via
type assertions and an error code while still allowing the errors to
provide more context.
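The shape of such a type can be sketched as follows (a simplified
sketch of the approach):

    package txscript

    // ErrorCode identifies a kind of script error.
    type ErrorCode int

    // Error provides an error code that callers can compare
    // programmatically along with a human-readable description that
    // provides context.
    type Error struct {
        ErrorCode   ErrorCode
        Description string
    }

    // Error satisfies the error interface and returns the description.
    func (e Error) Error() string {
        return e.Description
    }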
For example, instead of just having an error that reads "disabled
opcode" as would happen prior to these changes when a disabled opcode is
encountered, the error will now read "attempt to execute disabled opcode
OP_FOO".
While it was previously possible to programmatically detect many errors
due to them being exported, they provided no additional context and
there were also various instances that were just returning errors
created on the spot which callers could not reliably detect without
resorting to looking at the actual error message, which is nearly always
bad practice.
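With the concrete type, detection becomes a type assertion plus an
error code comparison. For example (a hedged sketch using one of the
new codes):

    package caller

    import "github.com/btcsuite/btcd/txscript"

    // isDisabledOpcodeErr reports whether err is the script engine's
    // disabled opcode error, without inspecting the message text.
    func isDisabledOpcodeErr(err error) bool {
        serr, ok := err.(txscript.Error)
        return ok && serr.ErrorCode == txscript.ErrDisabledOpcode
    }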
Also, while here, export the MaxStackSize and MaxScriptSize constants
since they can be useful for consumers of the package and perform some
minor cleanup of some of the tests.
If a host is down and doesn't send a TCP RST, the net.Dial function
blocks until the OS times out the connection. Convert to using
DialTimeout with a 30-second default timeout.
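For example (the address here is illustrative):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // Give up after 30 seconds rather than waiting for the OS-level
        // connection timeout, which can take several minutes.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8333", 30*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }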
Fail fast if glide or gometalinter isn't available, and also fail if
running glide or gometalinter fails. Previously, goclean.sh could pass
even if glide or gometalinter wasn't available at all: even though we
use `set -ex`, their failures were hidden either inside a $() command
substitution or behind a pipe.
This adds tests which exercise the BIP0009 state transitions using the
newly available soft fork status information in the getblockchaininfo
RPC.
The following is an overview of the tests added (a sketch of the
assertion pattern follows the list):
- Assert the chain height is 0 and the state is ThresholdDefined
- Generate 1 fewer blocks than needed to reach the first state
transition
- Assert chain height is expected and state is still ThresholdDefined
- Generate 1 more block to reach the first state transition
- Assert chain height is expected and state moved to ThresholdStarted
- Generate enough blocks to reach the next state transition window, but
only signal support in 1 fewer than the required number to achieve
ThresholdLockedIn
- Assert chain height is expected and state is still ThresholdStarted
- Generate enough blocks to reach the next state transition window,
with exactly the number of blocks required to achieve locked-in status
signalling support
- Assert chain height is expected and state moved to ThresholdLockedIn
- Generate 1 fewer blocks than needed to reach the next state transition
- Assert chain height is expected and state is still ThresholdLockedIn
- Generate 1 more block to reach the next state transition
- Assert chain height is expected and state moved to ThresholdActive
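The assertions take roughly the following shape (a hypothetical sketch;
the helper name and exact result fields are illustrative rather than
the actual test code):

    package tests

    import (
        "testing"

        "github.com/btcsuite/btcd/btcjson"
    )

    // assertChainState checks the chain height and the reported soft
    // fork status from a getblockchaininfo result.
    func assertChainState(t *testing.T, info *btcjson.GetBlockChainInfoResult,
        wantHeight int32, forkKey, wantStatus string) {

        if info.Blocks != wantHeight {
            t.Fatalf("chain height: got %d, want %d", info.Blocks, wantHeight)
        }
        fork, ok := info.Bip9SoftForks[forkKey]
        if !ok {
            t.Fatalf("soft fork %q not reported", forkKey)
        }
        if fork.Status != wantStatus {
            t.Fatalf("soft fork status: got %q, want %q", fork.Status, wantStatus)
        }
    }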
In addition, it updates the existing BIP0009 mining tests to include
extra assertions that the chain is at the expected height in addition
to checking that the bits are correctly set according to the expected
state.
This commit updates the fields of GetBlockChainInfoResult to reflect
the current state of the RPC as widely implemented by other full-node
implementations.
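An illustrative excerpt of the updated shape (several fields elided;
the names mirror the RPC's JSON keys):

    package btcjson

    // GetBlockChainInfoResult (excerpt) models the data returned by the
    // getblockchaininfo RPC.
    type GetBlockChainInfoResult struct {
        Chain                string  `json:"chain"`
        Blocks               int32   `json:"blocks"`
        Headers              int32   `json:"headers"`
        BestBlockHash        string  `json:"bestblockhash"`
        Difficulty           float64 `json:"difficulty"`
        MedianTime           int64   `json:"mediantime"`
        VerificationProgress float64 `json:"verificationprogress"`
        Pruned               bool    `json:"pruned"`
    }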
Replace assignments to individual fields of wire.NetAddress with
creating the entire object at once, as one would if the type were
immutable.
In some places this replaces the creation of a NetAddress with a
high-precision timestamp with a call to a 'constructor' that converts
the timestamp to single second precision. For consistency, the tests
have also been changed to use single-precision timestamps.
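For example, constructing the whole value at once via the wire
package's timestamp constructor, which truncates to one-second
precision:

    package main

    import (
        "net"
        "time"

        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        // NewNetAddressTimestamp rounds the timestamp down to the
        // second before storing it in the NetAddress.
        na := wire.NewNetAddressTimestamp(time.Now(), wire.SFNodeNetwork,
            net.ParseIP("127.0.0.1"), 8333)
        _ = na
    }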
Lastly, the number of allocations in readNetAddress has been reduced by
reading the services directly into the NetAddress instead of first into
a temporary variable.
The thresholdState and deploymentState functions expect the block node
for the block prior to the one whose threshold state is being
calculated; however, the startup code which checked the threshold
states was using the current best node instead of its parent.
While here, also update the comments and rename a couple of variables to
help make this fact more clear.
The CSV consensus rules dictate that the opcode fails when the
transaction version is not at least version 2; however, that only
applies if the disable flag is not set in the sequence.
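A simplified sketch of the corrected ordering (illustrative, not the
opcode's actual code):

    package csv

    import (
        "errors"

        "github.com/btcsuite/btcd/wire"
    )

    // checkCSVTxVersion only enforces the transaction version
    // requirement when the disable flag is NOT set in the sequence.
    func checkCSVTxVersion(txVersion int32, sequence uint32) error {
        // With the disable flag set, CSV semantics do not apply, so the
        // opcode must succeed regardless of the transaction version.
        if sequence&wire.SequenceLockTimeDisabled != 0 {
            return nil
        }
        if txVersion < 2 {
            return errors.New("transaction version must be at least 2")
        }
        return nil
    }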
This is not an issue at the current time because we do not yet enforce
CSV at a consensus level; however, I noticed this discrepancy when
doing a thorough audit of the CSV paths due to the ongoing work to add
full consensus-enforced CSV support.
As a result, this must be merged prior to enabling consensus enforcement
for CSV or it would open up the potential for a hard fork.
This modifies the code to only enforce the fairly expensive BIP0030
(duplicate transactions) checks when the chain has not yet reached the
BIP0034 activation height (and is not one of the 2 special historical
blocks that break the rule and prompted BIP0034 to begin with), since
that BIP made it impossible to create duplicate coinbases and thus
removed the possibility of creating transactions that overwrite older
ones.
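The gating condition reduces to something like the following (a rough
sketch with illustrative identifiers):

    package blockchain

    // bip0030CheckNeeded sketches the new gate: the expensive duplicate
    // transaction check is only required while the chain is below the
    // BIP0034 activation height, and is skipped for the two historical
    // blocks that violate the rule.
    func bip0030CheckNeeded(height, bip34Height int32, isHistoricException bool) bool {
        return height < bip34Height && !isHistoricException
    }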
This is a rather large optimization because the check is expensive due
to the large number of cache misses it causes in the utxo set. For
example, the following are the times it took to perform the BIP0030
check on blocks 425490 - 425502 on a system with a relatively old
Hitachi spinning HDD:
block 425490: 674.5857ms
block 425491: 726.5923ms
block 425492: 827.6051ms
block 425493: 680.0863ms
block 425494: 722.0917ms
block 425495: 700.0889ms
block 425496: 647.5823ms
block 425497: 445.0565ms
block 425498: 602.5765ms
block 425499: 375.0476ms
block 425500: 771.0979ms
block 425501: 461.5586ms
block 425502: 603.0766ms
As can be seen from these numbers, this reduces the block validation
time by an average of just over half a second for the given
representative data set and hardware.
Signed-off-by: Dave Collins <davec@conformal.com>
Now that all softforking is done via BIP0009 versionbits, replace the
old isMajorityVersion deployment mechanism with hard coded historical
block heights at which they became active.
Since the activation heights vary per network, this adds new parameters
to the chaincfg.Params struct for them and sets the correct heights at
which each softfork became active on each chain.
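For reference, an illustrative excerpt of the new fields with the
well-known mainnet activation heights (each network's Params sets its
own values):

    package chaincfg

    // mainNetSoftForkHeights illustrates the new per-chain activation
    // height fields using the mainnet values.
    var mainNetSoftForkHeights = struct {
        BIP0034Height int32 // coinbase must commit to the block height
        BIP0066Height int32 // strict DER signatures
        BIP0065Height int32 // OP_CHECKLOCKTIMEVERIFY
    }{
        BIP0034Height: 227931,
        BIP0066Height: 363725,
        BIP0065Height: 388381,
    }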
It should be noted that this is technically a hard fork since the
behavior of alternate chain history is different with these hard-coded
activation heights as opposed to the old isMajorityVersion code. In
particular, an alternate chain history could activate one of the soft
forks earlier than these hard-coded heights, which means the old code
would reject blocks which violate the new soft fork rules whereas this
new code would not.
However, all of the soft forks this refers to were activated so far
back in the chain history that there is no way a reorg that long could
happen, and checkpoints reject alternate chains before the most recent
checkpoint anyway. Furthermore, the same change was made in Bitcoin
Core, so this needs to be changed to remain consistent.
This commit adds all of the infrastructure needed to support BIP0009
soft forks.
The following is an overview of the changes:
- Add new configuration options to the chaincfg package which allow the
rule deployments to be defined per chain (see the sketch after this
list)
- Implement code to calculate the threshold state as required by BIP0009
- Use threshold state caches that are stored to the database in order
to accelerate startup time
- Remove caches that are invalid due to definition changes in the
params including additions, deletions, and changes to existing
entries
- Detect and warn when a new unknown rule is about to activate or has
been activated in the block connection code
- Detect and warn when 50% of the last 100 blocks have unexpected
versions.
- Remove the latest block version from wire since it no longer applies
- Add a version parameter to the wire.NewBlockHeader function since the
default is no longer available
- Update the miner block template generation code to use the calculated
block version based on the currently defined rule deployments and
their threshold states as of the previous block
- Add tests for new error type
- Add tests for threshold state cache
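The per-chain deployment definitions take roughly the following shape
(a sketch of the new chaincfg types; details may differ from the final
code):

    package chaincfg

    // ConsensusDeployment defines the per-chain details of a BIP0009
    // rule deployment: which version bit signals it and the time window
    // during which it may be locked in.
    type ConsensusDeployment struct {
        BitNumber  uint8  // version bit used to signal support
        StartTime  uint64 // median time past at which voting begins
        ExpireTime uint64 // median time past at which the deployment expires
    }

    // Indices into a per-chain Deployments array.
    const (
        DeploymentTestDummy = iota
        DeploymentCSV

        // DefinedDeployments is the number of currently defined deployments.
        DefinedDeployments
    )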
addrmgr.GetAddress() had a parameter `class string` originally intended
to support looking up addresses according to some type of filter such
as IPv4, IPv6, and only those which support specific wire.ServiceFlags
(full nodes, nodes that support bloom filters, nodes that support
segwit, etc.). However, the parameter is currently unused and its
`string` type is inappropriate for the purpose.
If it is ever needed, it's easy to add back and should then get an
appropriate type, such as one that allows bitflags to be set so that
the caller could request combinations such as peers that support IPv6,
are full nodes, and support bloom filters.
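For illustration, such a bitflag-based filter could look like the
following hypothetical sketch (hasServices is not part of addrmgr):

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/wire"
    )

    // hasServices reports whether all of the wanted service flags are
    // advertised.
    func hasServices(advertised, wanted wire.ServiceFlag) bool {
        return advertised&wanted == wanted
    }

    func main() {
        wanted := wire.SFNodeNetwork | wire.SFNodeBloom
        fmt.Println(hasServices(wire.SFNodeNetwork, wanted)) // false: no bloom support
    }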
This corrects an issue introduced by commit
e8f63bc295 where a failure to look up a
hostname could lead to a panic in certain circumstances. An error is
now returned in that case as expected.
This modifies the error handling path in the peer read loop such that it
will no longer log an error when the error is io.ErrUnexpectedEOF. This
is being done because that error is almost always the result of a peer
being remotely disconnected and thus it isn't useful information to log.
However, since it might actually be due to a malformed message, a reject
message is still queued up to be sent back to the peer (which will
simply be discarded if the peer disconnected) before it is disconnected.
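The resulting policy amounts to the following (a simplified sketch, not
the actual peer code):

    package peer

    import "io"

    // shouldLogReadError suppresses logging for io.ErrUnexpectedEOF
    // since it almost always just means the remote peer disconnected,
    // while still logging other read failures.
    func shouldLogReadError(err error) bool {
        return err != nil && err != io.ErrUnexpectedEOF
    }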
While it would be ideal to only log when the error is not due to a
disconnect, and the code already attempts to handle that case, it's not
possible to reliably detect a disconnect when a read returns an error
without attempting another read, which will not happen until the next
loop iteration.