This commit updates the new segwit validation logic within block
validation to be guarded by an initial check of the version bits state
before conditionally enforcing the logic based on that state.
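A minimal sketch of the guard, assuming the exported ThresholdState
method and chaincfg deployment constant (treat the exact names and
signatures as illustrative):

    import (
        "github.com/btcsuite/btcd/blockchain"
        "github.com/btcsuite/btcd/chaincfg"
    )

    // segwitActive reports whether the segwit rules should be enforced,
    // by querying the current BIP0009 version bits state.
    func segwitActive(chain *blockchain.BlockChain) (bool, error) {
        state, err := chain.ThresholdState(chaincfg.DeploymentSegwit)
        if err != nil {
            return false, err
        }
        return state == blockchain.ThresholdActive, nil
    }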
This commit implements the new block validation rules as defined by
BIP0141. The new rules include the constraint that if a block has
transactions with witness data, then there MUST be a commitment within
the coinbase transaction to the root of a new merkle tree which commits
to the wtxids of all transactions. Additionally, rather than limiting
blocks by their serialized size in bytes, blocks are now limited by
their total weight. Similarly, a newly defined "sig op cost" is now
used to limit the signature validation cost of transactions found
within blocks.
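For illustration, a hedged sketch of locating the commitment output
within the coinbase transaction; per BIP0141 it is an OP_RETURN output
whose data push begins with the 4-byte commitment header (the byte
values come from the BIP, the helper name is made up):

    import "bytes"

    // The commitment script begins OP_RETURN (0x6a), a 36-byte push
    // (0x24), then the header 0xaa21a9ed followed by the 32-byte
    // commitment hash.
    var witnessCommitmentPrefix = []byte{0x6a, 0x24, 0xaa, 0x21, 0xa9, 0xed}

    func isWitnessCommitment(pkScript []byte) bool {
        return len(pkScript) >= 38 &&
            bytes.HasPrefix(pkScript, witnessCommitmentPrefix)
    }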
This commit implements the new “weight” metric introduced as part of
the segwit soft-fork. Post-fork activation, rather than limiting the
size of blocks and transactions based purely on serialized size, a new
metric “weight” will instead be used as a way to more accurately
reflect the costs of a tx/block on the system. With blocks constrained
by weight, the maximum block size increases to ~4MB.
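As a rough sketch of the metric (constants per BIP0141; the helper
name is illustrative, not btcd's API):

    const (
        witnessScaleFactor = 4       // witness bytes are discounted 4x
        maxBlockWeight     = 4000000 // ~4MB worth of base bytes
    )

    // blockWeight computes (base size * 3) + total size, so each
    // non-witness byte counts as 4 weight units and each witness byte
    // as 1.
    func blockWeight(baseSize, totalSize int) int {
        return baseSize*(witnessScaleFactor-1) + totalSize
    }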
This commit implements the flag activation portion of BIP0147. The
verification behavior triggered by the NULLDUMMY script verification
flag has been present within btcd for some time; however, it wasn't
activated by default.
With this commit, once segwit has activated, the ScriptStrictMultiSig
flag will also be activated within the Script VM. Additionally,
ScriptStrictMultiSig is now a standard script verification flag which
is used unconditionally within the mempool.
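A hedged sketch of the activation, using the existing txscript flag
(the surrounding plumbing is illustrative):

    import "github.com/btcsuite/btcd/txscript"

    // scriptFlags returns the script verification flags for the VM,
    // adding ScriptStrictMultiSig (the BIP0147 NULLDUMMY rule) once
    // segwit has activated.
    func scriptFlags(segwitActive bool) txscript.ScriptFlags {
        var flags txscript.ScriptFlags
        if segwitActive {
            flags |= txscript.ScriptStrictMultiSig
        }
        return flags
    }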
This commit modifies the logic within the block manager and service to
preferentially fetch transactions and blocks which include witness data
from fully upgraded peers.
Once the initial version handshake has completed, the server now tracks
which of the connected peers are witness enabled (they advertise
SFNodeWitness). From then on, if a peer is witness enabled, then btcd
will always request full witness data when fetching
transactions/blocks.
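A minimal sketch of the selection, assuming the wire package's service
flag and witness inventory types:

    import "github.com/btcsuite/btcd/wire"

    // invTypeForPeer upgrades a block inventory vector to its witness
    // variant when the peer advertises SFNodeWitness.
    func invTypeForPeer(services wire.ServiceFlag) wire.InvType {
        if services&wire.SFNodeWitness == wire.SFNodeWitness {
            return wire.InvTypeWitnessBlock
        }
        return wire.InvTypeBlock
    }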
This corrects the assertion in the decodeSpentTxOut function so it does
not improperly cause a panic when unwinding transactions during a reorg
under certain circumstances. In particular, the transaction version
passed when a stxo entry does not exist is now -1 in order to properly
distinguish it from the zero value.
It also updates the tests accordingly.
This was discovered by the reorg on testnet from block
00000000000018c58c2d2816f03dac327d975a18af6edf1a369df67ecddaf816 to
0000000000001c1161a367156465cc6226e9f862d9c585f94db5779fdf5455ff.
The btclog package has been changed to define its own logging
interface (rather than using seelog's) and now provides a default
implementation for callers to use.
There are two primary advantages to the new logger implementation.
First, all log messages are created before the call returns. Compared
to seelog, this prevents data races when mutable variables are logged.
Second, the new logger does not implement any kind of artificial rate
limiting (what seelog refers to as "adaptive logging"). Log messages
are output as soon as possible and the application will appear to
perform much better when watching standard output.
Because log rotation is not a feature of the btclog logging
implementation, it is handled by the main package by importing a file
rotation package that provides an io.Reader interface for writing
output to a rotating file. The rotator has been configured with the
same defaults that btcd previously used in the seelog config (10MB
file limits with a maximum of 3 rolls) but now compresses newly
created roll files. Due to the high compressibility of log text, the
compressed files typically reduce to around 15-30% of the original
10MB file.
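A sketch of the described configuration, assuming a rotator
constructor that takes the threshold in KB, a compression flag, and
the maximum number of rolls (treat the exact signature as an
assumption):

    import "github.com/jrick/logrotate/rotator"

    // initLogRotator mirrors the defaults described above: 10MB files,
    // at most 3 rolls, with compression of newly created roll files.
    func initLogRotator(logFile string) (*rotator.Rotator, error) {
        return rotator.New(logFile, 10*1024, true, 3)
    }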
The github markdown interpreter has been changed such that it no longer
allows spaces between the brackets and parentheses of links and now
requires a newline between anchors and other formatting. This
updates all of the markdown files accordingly.
While here, it also corrects a couple of inconsistencies in some of the
README.md files.
This commit modifies the existing block validation logic to examine the
current version bits state of the CSV soft-fork, enforcing the new
validation rules (BIPs 68, 112, and 113) accordingly based on the
current `ThresholdState`.
This simplifies the code based on the recommendations of the gosimple
lint tool.
Also, it increases the deadline for the linters to run to 10 minutes and
reduces the number of threads they use. This is being done because
the Travis environment has become increasingly slower and it also seems
to be hampered by too many threads running concurrently.
This modifies the blockNode and BestState structs in the blockchain
package to store hashes directly instead of pointers to them and updates
callers to deal with the API change in the exported BestState struct.
In general, the preferred approach for hashes moving forward is to store
hash values in complex data structures, particularly those that will be
used for cache entries, and accept pointers to hashes in arguments to
functions.
Some of the reasoning behind making this change is:
- It is generally preferred to avoid storing pointers to data in cache
objects since doing so can easily lead to storing interior pointers
into other structs that then can't be GC'd
- Keeping the hash values directly in the block node provides better
cache locality
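Sketched in code, the preferred pattern looks roughly like this (the
types are illustrative, not the exact btcd definitions):

    import "github.com/btcsuite/btcd/chaincfg/chainhash"

    // blockNode stores the hash by value, which keeps cache entries
    // free of interior pointers and improves cache locality.
    type blockNode struct {
        hash chainhash.Hash
    }

    // lookupNode follows the convention of accepting a hash pointer in
    // arguments while the underlying storage holds values.
    func lookupNode(index map[chainhash.Hash]*blockNode, hash *chainhash.Hash) *blockNode {
        return index[*hash]
    }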
Since the code base is currently in the process of being changed to
decouple the download and connection logic, but not all of the
necessary parts have been updated yet, ensure that blocks which are in
the database, but do not have an associated main chain block index
entry, are treated as if they do not exist for the purposes of the
chain connection and selection logic.
This refactors the block index logic into a separate struct and
introduces an individual lock for it so it can be queried independent of
the chain lock.
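Sketched out, the new structure looks something like the following
(field and method names are illustrative):

    import (
        "sync"

        "github.com/btcsuite/btcd/chaincfg/chainhash"
    )

    type blockNode struct{ hash chainhash.Hash }

    // blockIndex carries its own lock so the index can be queried
    // without acquiring the chain lock.
    type blockIndex struct {
        sync.RWMutex
        index map[chainhash.Hash]*blockNode
    }

    func (bi *blockIndex) LookupNode(hash *chainhash.Hash) *blockNode {
        bi.RLock()
        node := bi.index[*hash]
        bi.RUnlock()
        return node
    }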
This modifies the block node structure to include a couple of extra
fields needed to be able to reconstruct the block header from a node,
and exposes a new function from chain to fetch the block headers which
takes advantage of the new functionality to reconstruct the headers from
memory when possible. Finally, it updates both the p2p and RPC servers
to make use of the new function.
This is useful since many of the block header fields need to be kept in
order to form the block index anyways and storing the extra fields means
the database does not have to be consulted when headers are requested if
the associated node is still in memory.
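A hedged sketch of the reconstruction, with the node's field names
assumed from the description above rather than taken from the actual
btcd internals:

    import (
        "time"

        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcd/wire"
    )

    type blockNode struct {
        parent     *blockNode
        hash       chainhash.Hash
        version    int32
        merkleRoot chainhash.Hash
        timestamp  time.Time
        bits       uint32
        nonce      uint32
    }

    // header reconstructs the block header entirely from memory so the
    // database does not need to be consulted.
    func (node *blockNode) header() wire.BlockHeader {
        var prevHash chainhash.Hash
        if node.parent != nil {
            prevHash = node.parent.hash
        }
        return wire.BlockHeader{
            Version:    node.version,
            PrevBlock:  prevHash,
            MerkleRoot: node.merkleRoot,
            Timestamp:  node.timestamp,
            Bits:       node.bits,
            Nonce:      node.nonce,
        }
    }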
The following timings show representative performance gains as measured
from one system:
new: Time to fetch 100000 headers: 59ms
old: Time to fetch 100000 headers: 4783ms
This removes the CalcPastMedianTime function since the information it
provided is now exposed much more efficiently via the MedianTime field
of the BestState snapshot returned from the BestSnapshot function.
This modifies the block nodes used in the blockchain package for keeping
track of the block index to use int64 for the timestamps instead of
time.Time.
This is being done because a time.Time takes 24 bytes while an int64
only takes 8, and the plan is to eventually move the entire block index
into memory instead of the current dynamically-loaded version, so
cutting the number of bytes used for the timestamp to a third is highly
desirable.
Also, the consensus code requires working with unix-style timestamps
anyways, so switching over to them in the block node does not seem
unreasonable.
Finally, this does not go so far as to change all of the time.Time
references, particularly those that are in the public API, so it is
purely an internal change.
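The size difference is easy to verify (the 24-byte figure holds on
64-bit platforms):

    package main

    import (
        "fmt"
        "time"
        "unsafe"
    )

    func main() {
        var t time.Time
        var unix int64
        fmt.Println(unsafe.Sizeof(t))    // 24
        fmt.Println(unsafe.Sizeof(unix)) // 8

        // Converting back at API boundaries remains cheap.
        fmt.Println(time.Unix(1231006505, 0).UTC())
    }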
This modifies the blockchain code to store all blocks that have passed
proof-of-work and contextual validity tests in the database even if they
may ultimately fail to connect.
This eliminates the need to store those blocks in memory, allows them to
be available as orphans later even if they were never part of the main
chain, and helps pave the way toward being able to separate the download
logic from the connection logic.
This contains a bit of cleanup and additional logic to improve the
recently-added ability to specify additional checkpoints via the
--addcheckpoint option.
In particular:
- Improve error messages in the checkpoint parsing
- Correct the mergeCheckpoints function to weed out duplicate height
checkpoints while using the most-recently provided one as described by
its comment
- Add an assertion to blockchain.New that the provided checkpoints are
sorted as required
- Keep comments to 80 columns and use two spaces after periods in them to
be consistent with the rest of the code base
- Make the entry in doc.go match the actual btcd -h output
The thresholdState and deploymentState functions expect the block node
for the block prior to the one for which the threshold state is
calculated; however, the startup code which checked the threshold
states was using the current best node instead of its parent.
While here, also update the comments and rename a couple of variables to
help make this fact more clear.
This modifies the code to only enforce the fairly expensive BIP0030
(duplicate transactions) checks when the chain has not yet reached the
BIP0034 activation height (and is not one of the 2 special historical
blocks that break the rule and prompted BIP0034 to begin with) since
that BIP made it impossible to create duplicate coinbases and thus
removed the possibility of creating transactions that overwrite older
ones.
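A hedged sketch of the gating condition; the mainnet constants below
are well-known values, but note the real code matches the two
exception blocks by hash rather than height alone:

    // bip0034Height is the mainnet height at which BIP0034 activated.
    const bip0034Height = 227931

    // The two historical blocks (91842 and 91880) duplicated earlier
    // coinbases before the rule existed and are exempt from the check.
    var bip0030Exceptions = map[int32]bool{91842: true, 91880: true}

    // needBIP0030Check reports whether the expensive duplicate
    // transaction check must run for a block at the given height.
    func needBIP0030Check(height int32) bool {
        return height < bip0034Height && !bip0030Exceptions[height]
    }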
This is a rather large optimization because the check is expensive due
to involving a ton of cache misses in the utxoset. For example, the
following are the times it took to perform the BIP0030 check on blocks
425490 - 425502 on a system with a relatively old Hitachi spinning HDD:
block 425490: 674.5857ms
block 425491: 726.5923ms
block 425492: 827.6051ms
block 425493: 680.0863ms
block 425494: 722.0917ms
block 425495: 700.0889ms
block 425496: 647.5823ms
block 425497: 445.0565ms
block 425498: 602.5765ms
block 425499: 375.0476ms
block 425500: 771.0979ms
block 425501: 461.5586ms
block 425502: 603.0766ms
As can be seen from these numbers, this reduces the block validation
time by an average of just over half a second for the given
representative data set and hardware.
Signed-off-by: Dave Collins <davec@conformal.com>
Now that all softforking is done via BIP0009 version bits, replace the
old isMajorityVersion deployment mechanism with hard-coded historical
block heights at which they became active.
Since the activation heights vary per network, this adds new parameters
to the chaincfg.Params struct for them and sets the correct heights at
which each softfork became active on each chain.
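For reference, a sketch of the new fields with the well-known mainnet
activation heights (the field names follow the chaincfg additions
described above):

    // Mainnet activation heights for the previously ISM-activated
    // soft forks.
    var mainNetSoftForkHeights = struct {
        BIP0034Height int32 // block v2, height in coinbase
        BIP0066Height int32 // strict DER signatures
        BIP0065Height int32 // OP_CHECKLOCKTIMEVERIFY
    }{
        BIP0034Height: 227931,
        BIP0066Height: 363725,
        BIP0065Height: 388381,
    }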
It should be noted that this is technically a hard fork since the
behavior for alternate chain histories is different with these
hard-coded activation heights as opposed to the old isMajorityVersion
code. In particular, an alternate chain history could activate one of
the soft forks earlier than these hard-coded heights, which means the
old code would reject blocks which violate the new soft fork rules
whereas this new code would not.
However, all of the soft forks this refers to were activated so far
back in the chain history that there is no way a reorg that long could
happen, and checkpoints reject alternate chains before the most recent
checkpoint anyways. Furthermore, the same change was made in Bitcoin
Core, so this needs to be changed to be consistent anyways.
This commit adds all of the infrastructure needed to support BIP0009
soft forks.
The following is an overview of the changes:
- Add new configuration options to the chaincfg package which allow the
rule deployments to be defined per chain (see the sketch after this
list)
- Implement code to calculate the threshold state as required by BIP0009
- Use threshold state caches that are stored to the database in order
to accelerate startup time
- Remove caches that are invalid due to definition changes in the
params including additions, deletions, and changes to existing
entries
- Detect and warn when a new unknown rule is about to activate or has
been activated in the block connection code
- Detect and warn when 50% of the last 100 blocks have unexpected
versions
- Remove the latest block version from wire since it no longer applies
- Add a version parameter to the wire.NewBlockHeader function since the
default is no longer available
- Update the miner block template generation code to use the calculated
block version based on the currently defined rule deployments and
their threshold states as of the previous block
- Add tests for new error type
- Add tests for threshold state cache
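As a concrete illustration of the new chaincfg configuration options,
a hedged sketch of a per-chain rule deployment definition (the struct
shape follows the description above; the CSV start/expire times shown
are the published BIP0009 parameters):

    // A consensus rule deployment is defined per chain by its version
    // bit and deployment window.
    type ConsensusDeployment struct {
        BitNumber  uint8
        StartTime  uint64 // unix time the voting period starts
        ExpireTime uint64 // unix time the deployment expires
    }

    var csvDeployment = ConsensusDeployment{
        BitNumber:  0,
        StartTime:  1462060800, // May 1st, 2016
        ExpireTime: 1493596800, // May 1st, 2017
    }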
This modifies the NewMsgTx function to accept the transaction version as
a parameter and updates all callers.
The reason for this change is so the transaction version can be bumped
in wire without breaking existing tests and to provide the caller with
the flexibility to create the specific transaction version they desire.
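Callers now look like the following, with wire.TxVersion being the
package's current default version:

    import "github.com/btcsuite/btcd/wire"

    // Before: tx := wire.NewMsgTx()
    // After:  the version is explicit at every call site.
    func newDefaultTx() *wire.MsgTx {
        return wire.NewMsgTx(wire.TxVersion)
    }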
This commit adds a new function to the package: `LockTimeToSequence`.
The function is a simple utility function which aids the caller in
mapping a targeted time-based or block-based relative lock-time to the
appropriate sequence number.
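A usage sketch, assuming the signature takes a flag selecting seconds
vs. blocks (per BIP0068, time-based relative locks use 512-second
granularity):

    import "github.com/btcsuite/btcd/blockchain"

    func exampleSequences() (uint32, uint32) {
        // Relative lock-time of 10 blocks.
        seqBlocks := blockchain.LockTimeToSequence(false, 10)

        // Relative lock-time of 32768 seconds (~9 hours), converted
        // to the sequence number's 512-second granularity.
        seqTime := blockchain.LockTimeToSequence(true, 32768)

        return seqBlocks, seqTime
    }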
This commit introduces the concept of "sequence locks", borrowed from
Bitcoin Core, for converting an input's relative time-locks into an
absolute value based on a particular block for input maturity
evaluation.
A sequence lock is computed as the most distant maturity height/time
amongst all the referenced outputs within a particular transaction.
A transaction with sequence locks activated within any of its inputs
can *only* be included within a block if, from the point-of-view of
that block, either the time-based or height-based maturity for all
referenced inputs has been met.
A transaction with sequence locks can only be accepted to the mempool
iff, from the point-of-view of the *next* (yet to be found) block, all
referenced inputs within the transaction are mature.
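A hedged sketch of the maturity rule: the lock computed from a
transaction's inputs must fall before the block under consideration
(its height and median time past). The names mirror the description
above, not necessarily the exact exported API:

    import "time"

    // SequenceLock is the most distant height/time among the
    // transaction's inputs; -1 means the respective lock type is
    // unused.
    type SequenceLock struct {
        Seconds     int64
        BlockHeight int32
    }

    // sequenceLockActive reports whether a transaction with the given
    // lock may be included in a block at blockHeight with the given
    // median time past.
    func sequenceLockActive(lock *SequenceLock, blockHeight int32, medianTimePast time.Time) bool {
        return lock.Seconds < medianTimePast.Unix() &&
            lock.BlockHeight < blockHeight
    }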
This modifies the ExtractCoinbaseHeight function to recognize small
canonically serialized block heights in coinbase scripts of blocks of
version 2 or higher.
This allows regression test chains in which blocks encode the serialized
height in the coinbase starting from block 1.
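As an illustration, for small heights the canonical serialization is
the small-integer opcode itself rather than a data push, which
txscript's builder emits:

    import "github.com/btcsuite/btcd/txscript"

    // For height 1 the canonical serialization is the single opcode
    // OP_1 (0x51), which ExtractCoinbaseHeight now recognizes.
    func coinbaseHeightScript() ([]byte, error) {
        return txscript.NewScriptBuilder().AddInt64(1).Script()
    }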
This adds a full-blown testing infrastructure in order to test consensus
validation rules. It is built around the idea of dynamically generating
full blocks that target specific rules linked together to form a block
chain. In order to properly test the rules, each test instance starts
with a valid block that is then modified in the specific way needed to
test a specific rule.
Blocks which exercise the following rules have been added for this
initial version. These tests were largely ported from the original
Java-based 'official' block acceptance tests as well as some additional
tests available in the Core python port. It is expected that further
tests can be added over time as consensus rules change.
* Enough valid blocks to have a stable base of mature coinbases to spend
for further tests
* Basic forking and chain reorganization
* Double spends on forks
* Too much proof-of-work coinbase (extending main chain, in block that
forces a reorg, and in a valid fork)
* Max and too many signature operations via various combinations of
OP_CHECKSIG, OP_CHECKMULTISIG, OP_CHECKSIGVERIFY, and
OP_CHECKMULTISIGVERIFY
* Too many and max signature operations with offending sigop after
invalid data push
* Max and too many signature operations via pay-to-script-hash redeem
scripts
* Attempt to spend tx created on a different fork
* Attempt to spend immature coinbase (on main chain and fork)
* Max size block and block that exceeds the max size
* Children of rejected blocks are either orphans or rejected
* Coinbase script too small and too large
* Max length coinbase script
* Attempt to spend tx in blocks that failed to connect
* Valid non-coinbase tx in place of coinbase
* Block with no transactions
* Invalid proof-of-work
* Block with a timestamp too far in the future
* Invalid merkle root
* Invalid proof-of-work limit (bits header field)
* Negative proof-of-work limit (bits header field)
* Two coinbase transactions
* Duplicate transactions
* Spend from transaction that does not exist
* Timestamp exactly at and one second after the median time
* Blocks with same hash via merkle root tricks
* Spend from transaction index that is out of range
* Transaction that spends more than its inputs provide
* Transaction with same hash as an existing tx that has not been
fully spent (BIP0030)
* Non-final coinbase and non-coinbase txns
* Max size block with canonical encoding which exceeds max size with
non-canonical encoding
* Spend from transaction earlier in same block
* Spend from transaction later in same block
* Double spend transaction from earlier in same block
* Coinbase that pays more than subsidy + fees
* Coinbase that includes subsidy + fees
* Invalid opcode in dead execution path
* Reorganization of txns with OP_RETURN outputs
* Spend of an OP_RETURN output
* Transaction with multiple OP_RETURN outputs
* Large max-sized block reorganization test (disabled by default since
it takes a long time and a lot of memory to run)
Finally, the README.md files in the main and docs directories have been
updated to reflect the use of the new testing framework.
This removes the exported CalcPastTimeMedian function from the
blockchain package as it is no longer needed since the information is
now available via the BestState snapshot.
Also, update the only known caller of this, which is the chain state in
the block manager, to use the snapshot instead. In reality, now that
everything the block manager chain state provides is available via the
blockchain BestState snapshot, the entire thing can be removed, however
that will be done in a separate commit to keep the changes targeted.
This modifies the blockchain.ProcessBlock function to return an
additional boolean as the first parameter which indicates whether or not
the block ended up on the main chain.
This is primarily useful for upcoming test code that needs to be able to
tell the difference between a block accepted to a side chain and a block
that either extends the main chain or causes a reorganization that makes
it the main chain. However, it is also useful for the addblock utility
since it allows a better error in the case a file with out-of-order
blocks is provided.
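A usage sketch with the new return value (the flag and types follow
the btcd blockchain package; treat the exact details as illustrative):

    import (
        "log"

        "github.com/btcsuite/btcd/blockchain"
        "github.com/btcsuite/btcutil"
    )

    func process(chain *blockchain.BlockChain, block *btcutil.Block) error {
        // isMainChain is the new first return value; isOrphan was
        // already present.
        isMainChain, isOrphan, err := chain.ProcessBlock(block, blockchain.BFNone)
        if err != nil {
            return err
        }
        if !isMainChain && !isOrphan {
            log.Printf("block %v accepted on a side chain", block.Hash())
        }
        return nil
    }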