The block index now tracks the set of dirty block nodes with status
changes that haven't been persisted and flushes the changes to the DB
at the appropriate times.
Currently only the blocks in the active chain are loaded into the
block index on initialization. This instead iterates over the entire
block index bucket in LevelDB and loads all nodes.
The bucket contains block headers keyed by the block height encoded as
big-endian concatenated with the block hash. This allows block headers
to be fetched from the DB in height order with a cursor.
These methods allow safe concurrent access for reading and modifying
block node statuses. When block statuses get persisted in a later
change, the setter methods can be used to mark block nodes as dirty.
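A minimal sketch of what such accessors could look like, assuming a
mutex-guarded index (names and types are illustrative):

    package blockindex

    import "sync"

    type blockStatus byte

    type blockNode struct {
        status blockStatus
    }

    // blockIndex guards all status reads and writes with a lock so callers
    // on different goroutines can safely query and update validation state.
    type blockIndex struct {
        sync.RWMutex
    }

    // NodeStatus returns the node's status under the read lock.
    func (bi *blockIndex) NodeStatus(node *blockNode) blockStatus {
        bi.RLock()
        status := node.status
        bi.RUnlock()
        return status
    }

    // SetStatusFlags sets the provided flags on the node under the write lock.
    func (bi *blockIndex) SetStatusFlags(node *blockNode, flags blockStatus) {
        bi.Lock()
        node.status |= flags
        bi.Unlock()
    }

    // UnsetStatusFlags clears the provided flags on the node under the write lock.
    func (bi *blockIndex) UnsetStatusFlags(node *blockNode, flags blockStatus) {
        bi.Lock()
        node.status &^= flags
        bi.Unlock()
    }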
Each node in the block index records some flags about its validation
state. This is just stored in memory for now, but can save effort if
attempting to reconnect a block that failed validation or was
disconnected.
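A sketch of how such flags might be laid out as a bit field (the flag names
are illustrative):

    package blockindex

    // blockStatus is a bit field recording what is known about the
    // validation state of a block.
    type blockStatus byte

    const (
        // statusValid indicates the block has been fully validated.
        statusValid blockStatus = 1 << iota

        // statusValidateFailed indicates the block failed validation.
        statusValidateFailed

        // statusInvalidAncestor indicates one of the block's ancestors
        // failed validation, so the block can never be valid either.
        statusInvalidAncestor
    )

    // KnownValid returns whether the block is already known to be valid,
    // which lets a reconnect skip revalidation work.
    func (s blockStatus) KnownValid() bool {
        return s&statusValid != 0
    }

    // KnownInvalid returns whether the block or an ancestor is known to be
    // invalid, so an attempted reconnect can bail out immediately.
    func (s blockStatus) KnownInvalid() bool {
        return s&(statusValidateFailed|statusInvalidAncestor) != 0
    }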
This was only used to test block proposals, which has been changed to
instead use CheckConnectBlockTemplate. The flag complicated the
implementation of some chain processing routines and would be
difficult to implement with headers-first syncing.
This renames CheckConnectBlock to CheckConnectBlockTemplate and
modifies it to be easily consumable by the getblocktemplate RPC
handler. It now performs full block validation instead of only partial
validation.
This propagates the interrupt channel through to blockchain and the
indexers so that it is possible to interrupt long-running operations
such as catching up indexes.
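The usual shape of this pattern, sketched with hypothetical helper names, is a
closable channel checked once per unit of work:

    package indexers

    import "errors"

    // errInterruptRequested indicates the operation was stopped early
    // because an interrupt was signalled.
    var errInterruptRequested = errors.New("interrupt requested")

    // interruptRequested returns true once the passed channel has been closed.
    func interruptRequested(interrupt <-chan struct{}) bool {
        select {
        case <-interrupt:
            return true
        default:
            return false
        }
    }

    // catchUp shows how a long-running catch-up loop stays responsive: the
    // interrupt channel is checked before processing each block height.
    func catchUp(interrupt <-chan struct{}, heights []int32, process func(int32) error) error {
        for _, height := range heights {
            if interruptRequested(interrupt) {
                return errInterruptRequested
            }
            if err := process(height); err != nil {
                return err
            }
        }
        return nil
    }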
This refactors the code that locates blocks (inventory discovery) out of
server and into blockchain, where it can make use of the new, much more
efficient chain view and be tested more easily. As an aside, it really
belongs in blockchain anyway since it deals purely with the block
index and best chain.
Since the majority of the network has moved to header-based semantics,
this also provides an additional optimization to allow headers to be
located directly versus needing to first discover the hashes and then
fetch the headers.
The new functions are named LocateBlocks and LocateHeaders. The former
returns a slice of located hashes and the latter returns a slice of
located headers.
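Assuming signatures roughly along the lines sketched below (the exact
parameters may differ), a caller such as the getheaders handler only needs to
hand the locator and an optional stop hash to the chain:

    package main

    import (
        "github.com/btcsuite/btcd/blockchain"
        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcd/wire"
    )

    // fetchHeadersAfterLocator sketches typical usage: build a BlockLocator
    // from the caller-supplied hashes and let the chain return every header
    // after the most recent locator hash it recognizes.
    func fetchHeadersAfterLocator(chain *blockchain.BlockChain, locatorHashes []chainhash.Hash, hashStop chainhash.Hash) []wire.BlockHeader {
        locator := make(blockchain.BlockLocator, 0, len(locatorHashes))
        for i := range locatorHashes {
            locator = append(locator, &locatorHashes[i])
        }
        return chain.LocateHeaders(locator, &hashStop)
    }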
Finally, it also updates the RPC server getheaders call and related
plumbing to use the new LocateHeaders function.
A comprehensive suite of tests is provided to ensure both functions
behave correctly for both valid and invalid block locators.
- Remove inMainChain from block nodes since that can now be efficiently
determined by using the chain view
- Track the best chain via a chain view instead of a single block node
- Use the tip of the best chain view everywhere bestNode was used
- Update chain view tip instead of updating best node
- Change reorg logic to use more efficient chain view fork finding logic
- Change block locator code over to use more efficient chain view logic
- Remove now unused block-index-based block locator code
- Move BlockLocator definition to chain.go
- Move BlockLocatorFromHash and LatestBlockLocator to chain.go
- Update both to use more efficient chain view logic
- Rework IsCheckpointCandidate to use block index and chain view
- Optimize MainChainHasBlock to use chain view instead of hitting db
- Move to chain.go since it no longer involves database I/O
- Remove error return since it can no longer fail
- Optimize BlockHeightByHash to use chain view instead of hitting db
- Move to chain.go since it no longer involves database I/O
- Remove error return since it can no longer fail
- Optimize BlockHashByHeight to use chain view instead of hitting db
- Move to chain.go since it no longer involves database I/O
- Remove error return since it can no longer fail
- Optimize HeightRange to use chain view instead of hitting db
- Move to chain.go since it no longer involves database I/O
- Optimize BlockByHeight to use chain view for main chain check
- Optimize BlockByHash to use chain view for main chain check
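To make the shape of these optimizations concrete, here is a rough sketch with
simplified stand-in types (not the actual exported API): once the best chain
is a flat, height-indexed view of in-memory nodes, the lookups above become
slice accesses.

    package blockchain

    type blockNode struct {
        hash   [32]byte
        height int32
    }

    // chainView is a simplified stand-in providing O(1) access to the node
    // at a given height on the best chain.
    type chainView struct {
        nodes []*blockNode // index i holds the node at height i
    }

    func (v *chainView) nodeByHeight(height int32) *blockNode {
        if height < 0 || height >= int32(len(v.nodes)) {
            return nil
        }
        return v.nodes[height]
    }

    // mainChainHasBlock answers directly from the view, with no database I/O
    // and no error path.
    func mainChainHasBlock(view *chainView, node *blockNode) bool {
        return node != nil && view.nodeByHeight(node.height) == node
    }

    // blockHashByHeight likewise becomes a simple slice lookup.
    func blockHashByHeight(view *chainView, height int32) ([32]byte, bool) {
        node := view.nodeByHeight(height)
        if node == nil {
            return [32]byte{}, false
        }
        return node.hash, true
    }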
This removes the DisableVerify function and related state since nothing
uses it anymore now that the command line option has been removed. It is
a remnant of initial development.
This exposes the ability to more efficiently create a block locator from
a chain view for a given block node by using its ability to do O(1)
lookups.
It also adds tests to ensure the behavior is correct.
This significantly optimizes and simplifies the generation of block
locators by making use of the fact that all block nodes are now in
memory and therefore it is no longer necessary to consult the database
for the hashes or worry about issues related to dynamic loading of nodes.
Also, it slightly modifies the algorithm so that the doubling starts
one iteration later in order to mirror other prominent clients on the
network. Due to the way block locators are used, this
does not change any semantics in terms of requesting and locating
blocks.
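As a sketch of the general scheme (the exact point where doubling begins is
the detail being adjusted here), the locator references the most recent blocks
one by one and then exponentially larger steps back, always ending at the
genesis block; with the full index in memory, turning each height into a hash
is an O(1) chain view lookup rather than a database read:

    package blockchain

    // blockLocatorHeights returns the heights a locator for the given tip
    // would reference: recent blocks individually, then doubling the step
    // back each iteration, finishing with the genesis block.
    func blockLocatorHeights(tipHeight int32) []int32 {
        heights := make([]int32, 0, 32)
        step := int32(1)
        for height := tipHeight; height > 0; height -= step {
            heights = append(heights, height)

            // Once the most recent blocks are covered, start doubling the
            // distance between referenced blocks.
            if len(heights) > 10 {
                step *= 2
            }
        }

        // Always include the genesis block.
        heights = append(heights, 0)
        return heights
    }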
Finally, the semantics of BlockLocatorFromHash have been changed to
return a locator for the current tip in the case the hash is unknown.
This is far preferable since only including the passed block hash, when
it isn't known, could end up causing a redownload of the entire chain
under certain circumstances.
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much and it also
gets rid of the need for internal_test.go to bridge.
While here, remove the reorganization test, since it is no longer needed
and is much better handled by the full block tests, and do some light
cleanup on a few other tests.
The full block tests had to remain in the separate test package since it
is a circular dependency otherwise. This did require duplicating some
of the chain setup code, but given the other benefits this is
acceptable.
This introduces the concept of a synthetic block chain that can be used
in the tests to avoid needing to set up a full-blown chain instance with a
database and generate valid blocks and converts the sequence lock tests
in TestCalcSequenceLock to use it.
Not only does this speed up the test execution time, but it allows the
dependency on rpctest to be removed which will allow the sequence locks
tests to be consolidated into the main package without creating a
circular dependency.
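The gist of the synthetic chain, sketched with illustrative names: build a
parent-linked list of fake nodes with just enough state (height, timestamp)
for the logic under test, entirely in memory:

    package blockchain

    import "time"

    type blockNode struct {
        parent    *blockNode
        height    int32
        timestamp int64
    }

    // newFakeChain returns the tip of a synthetic chain of the given length
    // with blocks spaced ten minutes apart, which is enough to exercise
    // logic such as sequence lock calculation without a database or real
    // block generation.
    func newFakeChain(length int32, start time.Time) *blockNode {
        var tip *blockNode
        for height := int32(0); height < length; height++ {
            tip = &blockNode{
                parent:    tip,
                height:    height,
                timestamp: start.Add(time.Duration(height) * 10 * time.Minute).Unix(),
            }
        }
        return tip
    }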
This modifies the function that sets the tip in the new chainview code to
bulk copy existing nodes when it needs to expand the cap rather than
simply creating a new empty slice and allowing the walk code below it to
repopulate it. This is a nice optimization since, in practice, most of
the time expanding the cap is only required when the active chain is
being extended after having run for a while which means the end result
is that it will be able to bulk copy all the nodes and just add the most
recent one versus having to walk them all and add them back.
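A sketch of that optimization, using a simplified height-indexed view
(names are illustrative):

    package blockchain

    type blockNode struct {
        parent *blockNode
        height int32
    }

    // chainView stores the nodes of one chain indexed by height.
    type chainView struct {
        nodes []*blockNode
    }

    // setTip points the view at a new tip. When the backing slice has to
    // grow, the existing nodes are bulk copied instead of rebuilding the
    // view from scratch, so the walk below typically only has to add the
    // newest nodes.
    func (v *chainView) setTip(node *blockNode) {
        if node == nil {
            v.nodes = nil
            return
        }

        needed := node.height + 1
        if int32(cap(v.nodes)) < needed {
            // Grow with some headroom and bulk copy what is already known.
            newNodes := make([]*blockNode, needed, needed+100)
            copy(newNodes, v.nodes)
            v.nodes = newNodes
        } else {
            v.nodes = v.nodes[:needed]
        }

        // Walk backwards from the new tip and stop as soon as an entry
        // already matches, since everything below that point is unchanged.
        for n := node; n != nil && v.nodes[n.height] != n; n = n.parent {
            v.nodes[n.height] = n
        }
    }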
Also, while here, expand the tests for setting the tip to ensure the
nodes contained in the resulting view are correct after forcing the
resizes, and correct a bug they exposed where switching from a longer
chain to a shorter one and back to the same longer chain could result
in the view not being populated correctly.
Finally, update the fake nodes generated by the tests to use a
nonce generated by a deterministic prng in order to ensure the hashes of
all fake nodes are unique, but reproducible.
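For example, the nonce for each fake node can come from a prng seeded with a
constant, so every run produces the same distinct hashes (a sketch; the real
tests hash a full header containing the nonce):

    package blockchain

    import (
        "encoding/binary"
        "math/rand"
    )

    // testNoncePrng is seeded with a constant so test runs are reproducible
    // while still giving every fake node a distinct nonce, and therefore a
    // distinct hash.
    var testNoncePrng = rand.New(rand.NewSource(0))

    // fakeBlockHash derives a unique but reproducible hash for a fake node
    // from its height and the next prng nonce.
    func fakeBlockHash(height int32) [32]byte {
        var hash [32]byte
        binary.BigEndian.PutUint32(hash[0:4], uint32(height))
        binary.BigEndian.PutUint32(hash[4:8], testNoncePrng.Uint32())
        return hash
    }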
This implements a new type in the blockchain package that takes
advantage of the fact that all block nodes are now in memory to provide
a flat view of a specific chain of blocks (a specific branch of the
overall block tree) from a given tip all the way back to the genesis
block along with several convenience functions such as efficiently
comparing two views, quickly finding the fork point (if any) between two
views, and O(1) lookup of the node at a specific height.
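A condensed sketch of the type and its convenience operations (method names
and details are illustrative, not the exact exported API):

    package blockchain

    type blockNode struct {
        parent *blockNode
        height int32
    }

    // chainView provides a flat, in-memory view of one branch of the block
    // tree from its tip back to the genesis block. Storing the nodes in a
    // slice indexed by height is what makes height lookups O(1).
    type chainView struct {
        nodes []*blockNode
    }

    // Tip returns the highest node in the view, or nil for an empty view.
    func (v *chainView) Tip() *blockNode {
        if len(v.nodes) == 0 {
            return nil
        }
        return v.nodes[len(v.nodes)-1]
    }

    // NodeByHeight returns the node at the given height in O(1).
    func (v *chainView) NodeByHeight(height int32) *blockNode {
        if height < 0 || height >= int32(len(v.nodes)) {
            return nil
        }
        return v.nodes[height]
    }

    // Contains reports whether the view includes the given node.
    func (v *chainView) Contains(node *blockNode) bool {
        return node != nil && v.NodeByHeight(node.height) == node
    }

    // FindFork returns the most recent common ancestor between the view and
    // the given node by walking the node back until it lands inside the view.
    func (v *chainView) FindFork(node *blockNode) *blockNode {
        // Nothing above the view's tip can be shared, so drop down first.
        for node != nil && node.height >= int32(len(v.nodes)) {
            node = node.parent
        }
        for node != nil && !v.Contains(node) {
            node = node.parent
        }
        return node
    }

    // Equals reports whether two views describe the same chain, which only
    // requires comparing their tips since parent links never change.
    func (v *chainView) Equals(other *chainView) bool {
        return v.Tip() == other.Tip()
    }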
The view is not currently used, but the intent is that the code will be
refactored to make use of these views to simplify and optimize several
areas such as best chain selection and reorg logic and finding successor
nodes. They will also greatly simplify the process of disconnecting the
download logic from the connection logic.
A comprehensive suite of tests is provided to ensure the chain views
behave correctly.
This cleans up the test in TestCalcSequenceLock in the following ways:
- Use calculated values instead of magic values so it is easier to update
the tests as needed
- Make tests match the comments
- Change comments to be more consistent and fix some grammar errors
- Set mempool flag for unconfirmed tx tests since they are intended to
mimic transactions in the mempool
This modifies the code that determines the most recently known
checkpoint to take advantage of recent changes which make the entire
block index available in memory by only storing a reference to the
specific node in the index that represents the latest known checkpoint.
Previously, the entire block was stored and new checkpoints required
loading it from the database.
This completely removes the threshold state database caching code since
it can very quickly be calculated at startup now that the entire block
index is loaded first.
This reworks the block index code such that it loads all of the headers
in the main chain at startup and constructs the full block index
accordingly.
Since the full index from the current best tip all the way back to the
genesis block is now guaranteed to be in memory, this also removes all
code related to dynamically loading the nodes and updates some of the
logic to take advantage of the fact that traversing the block index can
no longer fail. There are also more optimizations and
simplifications that can be made in the future as a result of this.
Due to removing all of the extra overhead of tracking the dynamic state,
and ensuring the block node structs are aligned to eliminate extra
padding, the end result of a fully populated block index now takes quite
a bit less memory than the previous dynamically loaded version.
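As a general Go illustration (this is not the actual blockNode layout),
ordering fields from largest alignment to smallest removes the padding holes
the compiler would otherwise insert:

    package blockchain

    // Poorly ordered: on 64-bit platforms the compiler inserts padding so
    // the 8-byte fields stay aligned.
    type nodeUnaligned struct {
        version   int32          // 4 bytes + 4 bytes padding
        timestamp int64          // 8 bytes
        height    int32          // 4 bytes + 4 bytes padding
        parent    *nodeUnaligned // 8 bytes
        nonce     uint32         // 4 bytes + 4 bytes trailing padding (40 total)
    }

    // Better ordered: pointer-sized and 8-byte fields first, then the 4-byte
    // fields packed together, removing the interior padding (32 bytes total).
    type nodeAligned struct {
        parent    *nodeAligned
        timestamp int64
        version   int32
        height    int32
        nonce     uint32
    }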
The main downside is that it now takes a while to start, whereas it was
nearly instant before. However, it is much better to provide more
efficient runtime operation, since that is its ultimate purpose, and the
benefits far outweigh this downside.
Some benefits are:
- Since every block node is in memory, the recent code which
reconstructs headers from block nodes means that all headers can
always be served from memory which is important since the majority of
the network has moved to header-based semantics
- Several of the error paths can be removed since they are no longer
necessary
- It is no longer expensive to calculate CSV sequence locks or median
times of blocks way in the past
- It will be possible to create much more efficient iteration and
simplified views of the overall index
- The entire threshold state database cache can be removed since it is
cheap to construct it from the full block index as needed
An overview of the logic changes is as follows:
- Move AncestorNode from blockIndex to blockNode and greatly simplify
since it no longer has to deal with the possibility of dynamically
loading nodes and related failures
- Rename RelativeNode to RelativeAncestor, move to blockNode, and
redefine in terms of AncestorNode
- Move CalcPastMedianTime from blockIndex to blockNode and remove no
longer necessary test for nil
- Change calcSequenceLock to use Ancestor instead of RelativeAncestor
since it reads more clearly
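A sketch of what the simplified node-level methods look like once every
ancestor is guaranteed to be in memory (method names here follow the
description above and are illustrative):

    package blockchain

    type blockNode struct {
        parent *blockNode
        height int32
    }

    // Ancestor returns the ancestor of the node at the given height by
    // walking the in-memory parent pointers, which can no longer fail now
    // that the full index is resident in memory.
    func (node *blockNode) Ancestor(height int32) *blockNode {
        if height < 0 || height > node.height {
            return nil
        }
        n := node
        for n != nil && n.height != height {
            n = n.parent
        }
        return n
    }

    // RelativeAncestor is simply Ancestor expressed as a distance back from
    // the node rather than an absolute height.
    func (node *blockNode) RelativeAncestor(distance int32) *blockNode {
        return node.Ancestor(node.height - distance)
    }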
This takes care of a few minor nits on the recently merged subscribe
code. In particular:
- Avoid extra unnecessary allocation on notifications slice
- Avoid defer overhead on notification mutex in sendNotifications
- Make test function comment start with the name of the function per
Effective Go guidelines
- Use constant for number of subscribers in test
- Don't exceed column 80 in test print
The BlockChain struct emits notifications for various events, but
it is only possible to register one listener. This changes the
interface and implementations to allow multiple listeners.
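The general shape of such a change, sketched with illustrative names: keep a
slice of callbacks instead of a single one, and deliver each notification to
all of them:

    package blockchain

    import "sync"

    // NotificationCallback is invoked for each chain event a subscriber
    // cares about.
    type NotificationCallback func(notification interface{})

    // notificationState holds every registered callback rather than one.
    type notificationState struct {
        mtx       sync.Mutex
        callbacks []NotificationCallback
    }

    // Subscribe registers an additional listener; every registered callback
    // receives subsequent notifications.
    func (s *notificationState) Subscribe(callback NotificationCallback) {
        s.mtx.Lock()
        s.callbacks = append(s.callbacks, callback)
        s.mtx.Unlock()
    }

    // sendNotification delivers the event to every subscriber in
    // registration order.
    func (s *notificationState) sendNotification(notification interface{}) {
        s.mtx.Lock()
        callbacks := s.callbacks
        s.mtx.Unlock()

        for _, callback := range callbacks {
            callback(notification)
        }
    }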
This replaces the ErrDoubleSpend and ErrMissingTx error codes with a
single error code named ErrMissingTxOut and updates the relevant errors
and expected test results accordingly.
Once upon a time, the code relied on a transaction index, so it was able
to definitively differentiate between a transaction output that
legitimately did not exist and one that had already been spent.
However, since the code now uses a pruned utxoset, it is no longer
possible to reliably differentiate since once all outputs of a
transaction are spent, it is removed from the utxoset completely.
Consequently, a missing transaction could be either because the
transaction never existed or because it is fully spent.
This commit updates the new segwit validation logic within block
validation to be guarded by an initial check of the version bits state,
conditionally enforcing the logic based on that state.
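A minimal sketch of the guard, using placeholder state names patterned on the
usual BIP 9 terminology rather than the exact exported identifiers:

    package blockchain

    // thresholdState is a simplified stand-in for the version bits state of
    // a deployment as of a particular block.
    type thresholdState int

    const (
        thresholdDefined thresholdState = iota
        thresholdStarted
        thresholdLockedIn
        thresholdActive
        thresholdFailed
    )

    // enforceSegwitIfActive applies the witness validation rules only once
    // the segwit deployment has reached the active state as of the previous
    // block, so blocks mined before activation are unaffected.
    func enforceSegwitIfActive(segwitState thresholdState, validateWitness func() error) error {
        if segwitState != thresholdActive {
            // Deployment not active yet; skip the new rules entirely.
            return nil
        }
        return validateWitness()
    }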