lbcd/blockmanager.go

// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package main
import (
"container/list"
"net"
"os"
"path/filepath"
"sort"
"sync"
"sync/atomic"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/mempool"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)
const (
// minInFlightBlocks is the minimum number of blocks that should be
// in the request queue for headers-first mode before requesting
// more.
minInFlightBlocks = 10
// blockDbNamePrefix is the prefix for the block database name. The
// database type is appended to this value to form the full block
// database name.
blockDbNamePrefix = "blocks"
// maxRejectedTxns is the maximum number of rejected transaction
// hashes to store in memory.
maxRejectedTxns = 1000
// maxRequestedBlocks is the maximum number of requested block
// hashes to store in memory.
maxRequestedBlocks = wire.MaxInvPerMsg
// maxRequestedTxns is the maximum number of requested transaction
// hashes to store in memory.
maxRequestedTxns = wire.MaxInvPerMsg
)
// zeroHash is the zero value hash (all zeros). It is defined as a convenience.
var zeroHash chainhash.Hash
// newPeerMsg signifies a newly connected peer to the block handler.
type newPeerMsg struct {
peer *serverPeer
}
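
// notifyNewPeerExample is a hypothetical sketch, not part of the original
// file, of how the server side might hand a freshly connected peer to the
// block handler goroutine via newPeerMsg. The shutdown flag is checked first
// to avoid queueing work for a manager that is already shutting down.
func notifyNewPeerExample(b *blockManager, sp *serverPeer) {
	// Ignore the peer if the block manager is shutting down.
	if atomic.LoadInt32(&b.shutdown) != 0 {
		return
	}
	b.msgChan <- newPeerMsg{peer: sp}
}
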
// blockMsg packages a bitcoin block message and the peer it came from together
// so the block handler has access to that information.
type blockMsg struct {
block *btcutil.Block
peer *serverPeer
}
// invMsg packages a bitcoin inv message and the peer it came from together
// so the block handler has access to that information.
type invMsg struct {
inv *wire.MsgInv
peer *serverPeer
}
// headersMsg packages a bitcoin headers message and the peer it came from
// together so the block handler has access to that information.
type headersMsg struct {
headers *wire.MsgHeaders
peer *serverPeer
}
// donePeerMsg signifies a newly disconnected peer to the block handler.
type donePeerMsg struct {
peer *serverPeer
}
// txMsg packages a bitcoin tx message and the peer it came from together
// so the block handler has access to that information.
type txMsg struct {
tx *btcutil.Tx
peer *serverPeer
}
// getSyncPeerMsg is a message type to be sent across the message channel for
// retrieving the current sync peer.
type getSyncPeerMsg struct {
reply chan *serverPeer
}
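
// syncPeerExample is a hypothetical sketch, not part of the original file,
// of the query pattern getSyncPeerMsg (and the other message types with a
// reply field) is designed for: the caller supplies a reply channel and
// blocks until the block handler responds with the current sync peer.
func syncPeerExample(b *blockManager) *serverPeer {
	// Buffered so the block handler never blocks on the reply.
	reply := make(chan *serverPeer, 1)
	b.msgChan <- getSyncPeerMsg{reply: reply}
	return <-reply
}
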
// processBlockResponse is a response sent to the reply channel of a
// processBlockMsg.
type processBlockResponse struct {
isOrphan bool
err error
}
// processBlockMsg is a message type to be sent across the message channel
// for requesting that a block be processed. Note this call differs from blockMsg
// above in that blockMsg is intended for blocks that came from peers and have
// extra handling whereas this message essentially is just a concurrent safe
// way to call ProcessBlock on the internal block chain instance.
type processBlockMsg struct {
block *btcutil.Block
flags blockchain.BehaviorFlags
reply chan processBlockResponse
}
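
// processBlockExample is a hypothetical illustration, not part of the
// original file, of the request/reply pattern processBlockMsg enables: the
// caller sends the block together with a buffered reply channel and then
// blocks until the handler answers with the outcome of ProcessBlock.
func processBlockExample(b *blockManager, block *btcutil.Block) (bool, error) {
	reply := make(chan processBlockResponse, 1)
	b.msgChan <- processBlockMsg{
		block: block,
		flags: blockchain.BFNone, // no special behavior flags for this sketch
		reply: reply,
	}
	response := <-reply
	return response.isOrphan, response.err
}
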
// isCurrentMsg is a message type to be sent across the message channel for
// requesting whether or not the block manager believes it is synced with
// the currently connected peers.
type isCurrentMsg struct {
reply chan bool
}
// pauseMsg is a message type to be sent across the message channel for
// pausing the block manager. This effectively provides the caller with
// exclusive access over the manager until a receive is performed on the
// unpause channel.
type pauseMsg struct {
unpause <-chan struct{}
}
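
// pauseExample is a hypothetical illustration, not part of the original
// file, of how a caller could use pauseMsg: once the block handler dequeues
// the message it blocks on the unpause channel, so the caller effectively
// has exclusive access to the manager until it closes (or sends on) that
// channel.
func pauseExample(b *blockManager, work func()) {
	unpause := make(chan struct{})
	b.msgChan <- pauseMsg{unpause: unpause}

	// Perform the work that requires the block manager to be idle, then
	// allow it to resume by closing the channel.
	work()
	close(unpause)
}
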
// headerNode is used as a node in a list of headers that are linked together
// between checkpoints.
type headerNode struct {
height int32
hash *chainhash.Hash
}
// blockManager provides a concurrency safe block manager for handling all
// incoming blocks.
type blockManager struct {
server *server
started int32
shutdown int32
chain *blockchain.BlockChain
rejectedTxns map[chainhash.Hash]struct{}
requestedTxns map[chainhash.Hash]struct{}
requestedBlocks map[chainhash.Hash]struct{}
progressLogger *blockProgressLogger
syncPeer *serverPeer
msgChan chan interface{}
wg sync.WaitGroup
quit chan struct{}
// The following fields are used for headers-first mode.
headersFirstMode bool
headerList *list.List
startHeader *list.Element
nextCheckpoint *chaincfg.Checkpoint
}
// resetHeaderState sets the headers-first mode state to values appropriate for
// syncing from a new peer.
func (b *blockManager) resetHeaderState(newestHash *chainhash.Hash, newestHeight int32) {
b.headersFirstMode = false
b.headerList.Init()
b.startHeader = nil
// When there is a next checkpoint, add an entry for the latest known
// block into the header pool. This allows the next downloaded header
// to prove it links to the chain properly.
if b.nextCheckpoint != nil {
node := headerNode{height: newestHeight, hash: newestHash}
b.headerList.PushBack(&node)
}
}
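
// The following is an illustrative, hypothetical sketch (not part of the
// original file) showing why resetHeaderState seeds the header list with the
// latest known block: the next header downloaded from a peer can then prove
// it links to the known chain by having a PrevBlock that matches the hash of
// the last node in the list.
func headerConnectsSketch(headerList *list.List, header *wire.BlockHeader) bool {
	lastEl := headerList.Back()
	if lastEl == nil {
		// Nothing to link against.
		return false
	}
	lastNode := lastEl.Value.(*headerNode)
	return header.PrevBlock.IsEqual(lastNode.hash)
}
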
// findNextHeaderCheckpoint returns the next checkpoint after the passed height.
// It returns nil when there is not one, either because the height is already
// later than the final checkpoint or for some other reason such as disabled
// checkpoints.
func (b *blockManager) findNextHeaderCheckpoint(height int32) *chaincfg.Checkpoint {
// There is no next checkpoint if checkpoints are disabled or there are
// none for this current network.
if cfg.DisableCheckpoints {
return nil
}
checkpoints := b.chain.Checkpoints()
if len(checkpoints) == 0 {
return nil
}
// There is no next checkpoint if the height is already after the final
// checkpoint.
finalCheckpoint := &checkpoints[len(checkpoints)-1]
if height >= finalCheckpoint.Height {
return nil
}
// Find the next checkpoint.
nextCheckpoint := finalCheckpoint
for i := len(checkpoints) - 2; i >= 0; i-- {
if height >= checkpoints[i].Height {
break
}
nextCheckpoint = &checkpoints[i]
}
return nextCheckpoint
}
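
// findNextCheckpointSketch is an illustrative, hypothetical helper (not part
// of the original file) that mirrors the selection loop above over a plain
// checkpoint slice. For example, with checkpoints at heights 1000, 2000, and
// 3000, a current height of 1500 selects the checkpoint at height 2000.
func findNextCheckpointSketch(checkpoints []chaincfg.Checkpoint, height int32) *chaincfg.Checkpoint {
	// No next checkpoint when there are none or the height is already past
	// the final one.
	if len(checkpoints) == 0 || height >= checkpoints[len(checkpoints)-1].Height {
		return nil
	}

	// Walk backwards until a checkpoint at or below the height is found; the
	// previously examined checkpoint is the next one.
	next := &checkpoints[len(checkpoints)-1]
	for i := len(checkpoints) - 2; i >= 0; i-- {
		if height >= checkpoints[i].Height {
			break
		}
		next = &checkpoints[i]
	}
	return next
}
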
// startSync will choose the best peer among the available candidate peers to
// download/sync the blockchain from. When syncing is already running, it
// simply returns. It also examines the candidates for any which are no longer
// candidates and removes them as needed.
func (b *blockManager) startSync(peers *list.List) {
// Return now if we're already syncing.
if b.syncPeer != nil {
return
}
best := b.chain.BestSnapshot()
var bestPeer *serverPeer
var enext *list.Element
for e := peers.Front(); e != nil; e = enext {
enext = e.Next()
sp := e.Value.(*serverPeer)
// Remove sync candidate peers that are no longer candidates due
// to passing their latest known block. NOTE: The < is
// intentional as opposed to <=. While technically the peer
// doesn't have a later block when it's equal, it will likely
// have one soon so it is a reasonable choice. It also allows
// the case where both are at 0 such as during regression test.
if sp.LastBlock() < best.Height {
peers.Remove(e)
continue
}
// TODO(davec): Use a better algorithm to choose the best peer.
// For now, just pick the first available candidate.
bestPeer = sp
}
// Start syncing from the best peer if one was selected.
if bestPeer != nil {
// Clear the requestedBlocks if the sync peer changes, otherwise
// we may ignore blocks we need that the last sync peer failed
// to send.
b.requestedBlocks = make(map[chainhash.Hash]struct{})
locator, err := b.chain.LatestBlockLocator()
if err != nil {
bmgrLog.Errorf("Failed to get block locator for the "+
"latest block: %v", err)
return
}
bmgrLog.Infof("Syncing to block height %d from peer %v",
bestPeer.LastBlock(), bestPeer.Addr())
// When the current height is less than a known checkpoint we
// can use block headers to learn about which blocks comprise
// the chain up to the checkpoint and perform less validation
// for them. This is possible since each header contains the
// hash of the previous header and a merkle root. Therefore if
// we validate all of the received headers link together
// properly and the checkpoint hashes match, we can be sure the
// hashes for the blocks in between are accurate. Further, once
// the full blocks are downloaded, the merkle root is computed
// and compared against the value in the header which proves the
// full block hasn't been tampered with.
//
// Once we have passed the final checkpoint, or checkpoints are
// disabled, use standard inv messages to learn about the blocks
// and fully validate them. Finally, regression test mode does
// not support the headers-first approach so do normal block
// downloads when in regression test mode.
if b.nextCheckpoint != nil &&
best.Height < b.nextCheckpoint.Height &&
!cfg.RegressionTest && !cfg.DisableCheckpoints {
bestPeer.PushGetHeadersMsg(locator, b.nextCheckpoint.Hash)
b.headersFirstMode = true
bmgrLog.Infof("Downloading headers for blocks %d to "+
"%d from peer %s", best.Height+1,
b.nextCheckpoint.Height, bestPeer.Addr())
} else {
bestPeer.PushGetBlocksMsg(locator, &zeroHash)
}
b.syncPeer = bestPeer
} else {
bmgrLog.Warnf("No sync peer candidates available")
}
}
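
// useHeadersFirstSketch is an illustrative, hypothetical helper (not part of
// the original file) that captures the decision made in startSync above:
// headers-first mode is only used when a next checkpoint lies ahead of the
// current best height and neither regression test mode nor disabled
// checkpoints are in effect.
func useHeadersFirstSketch(nextCheckpoint *chaincfg.Checkpoint, bestHeight int32, regTest, disableCheckpoints bool) bool {
	return nextCheckpoint != nil &&
		bestHeight < nextCheckpoint.Height &&
		!regTest && !disableCheckpoints
}
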
// isSyncCandidate returns whether or not the peer is a candidate to consider
// syncing from.
func (b *blockManager) isSyncCandidate(sp *serverPeer) bool {
	// Typically a peer is not a candidate for sync if it's not a full node;
	// however, the regression test is special in that the regression tool is
	// not a full node and still needs to be considered a sync candidate.
if cfg.RegressionTest {
// The peer is not a candidate if it's not coming from localhost
// or the hostname can't be determined for some reason.
host, _, err := net.SplitHostPort(sp.Addr())
if err != nil {
return false
}
if host != "127.0.0.1" && host != "localhost" {
return false
}
} else {
// The peer is not a candidate for sync if it's not a full node.
if sp.Services()&wire.SFNodeNetwork != wire.SFNodeNetwork {
return false
}
}
// Candidate if all checks passed.
return true
}
// handleNewPeerMsg deals with new peers that have signalled they may
// be considered as a sync peer (they have already successfully negotiated). It
// also starts syncing if needed. It is invoked from the syncHandler goroutine.
func (b *blockManager) handleNewPeerMsg(peers *list.List, sp *serverPeer) {
// Ignore if in the process of shutting down.
if atomic.LoadInt32(&b.shutdown) != 0 {
return
}
bmgrLog.Infof("New valid peer %s (%s)", sp, sp.UserAgent())
// Ignore the peer if it's not a sync candidate.
if !b.isSyncCandidate(sp) {
return
}
// Add the peer as a candidate to sync from.
peers.PushBack(sp)
// Start syncing by choosing the best candidate if needed.
b.startSync(peers)
}
// handleDonePeerMsg deals with peers that have signalled they are done. It
// removes the peer as a candidate for syncing and in the case where it was
// the current sync peer, attempts to select a new best peer to sync from. It
// is invoked from the syncHandler goroutine.
func (b *blockManager) handleDonePeerMsg(peers *list.List, sp *serverPeer) {
// Remove the peer from the list of candidate peers.
for e := peers.Front(); e != nil; e = e.Next() {
if e.Value == sp {
peers.Remove(e)
break
}
}
bmgrLog.Infof("Lost peer %s", sp)
// Remove requested transactions from the global map so that they will
// be fetched from elsewhere next time we get an inv.
for k := range sp.requestedTxns {
delete(b.requestedTxns, k)
}
	// Remove requested blocks from the global map so that they will be
	// fetched from elsewhere next time we get an inv.
	// TODO: we could possibly check here which peers have these blocks
	// and request them now to speed things up a little.
for k := range sp.requestedBlocks {
delete(b.requestedBlocks, k)
}
	// Attempt to find a new peer to sync from if the quitting peer is the
	// sync peer. Also, reset the headers-first state if in headers-first
	// mode so the next sync peer resumes header downloads from the last
	// known good block.
if b.syncPeer != nil && b.syncPeer == sp {
b.syncPeer = nil
if b.headersFirstMode {
best := b.chain.BestSnapshot()
b.resetHeaderState(&best.Hash, best.Height)
}
b.startSync(peers)
}
}
// handleTxMsg handles transaction messages from all peers.
func (b *blockManager) handleTxMsg(tmsg *txMsg) {
	// NOTE: BitcoinJ, and possibly other wallets, don't follow the spec of
	// sending an inventory message and allowing the remote peer to decide
	// whether or not they want to request the transaction via a getdata
	// message. Unfortunately, the reference implementation permits
	// unrequested data, so it has allowed wallets that don't follow the
	// spec to proliferate. While this is not ideal, there is no check here
	// to disconnect peers for sending unsolicited transactions, in order to
	// provide interoperability.
txHash := tmsg.tx.Hash()
// Ignore transactions that we have already rejected. Do not
// send a reject message here because if the transaction was already
// rejected, the transaction was unsolicited.
if _, exists := b.rejectedTxns[*txHash]; exists {
bmgrLog.Debugf("Ignoring unsolicited previously rejected "+
"transaction %v from %s", txHash, tmsg.peer)
return
}
// Process the transaction to include validation, insertion in the
// memory pool, orphan handling, etc.
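	// Orphan transactions are only allowed into the pool when the
	// configuration permits a non-zero number of them.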
allowOrphans := cfg.MaxOrphanTxs > 0
acceptedTxs, err := b.server.txMemPool.ProcessTransaction(tmsg.tx,
allowOrphans, true, mempool.Tag(tmsg.peer.ID()))
// Remove transaction from request maps. Either the mempool/chain
// already knows about it and as such we shouldn't have any more
// instances of trying to fetch it, or we failed to insert and thus
// we'll retry next time we get an inv.
delete(tmsg.peer.requestedTxns, *txHash)
delete(b.requestedTxns, *txHash)
if err != nil {
// Do not request this transaction again until a new block
// has been processed.
b.rejectedTxns[*txHash] = struct{}{}
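		// Bound the size of the rejected transaction map so it does
		// not grow without limit.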
b.limitMap(b.rejectedTxns, maxRejectedTxns)
// When the error is a rule error, it means the transaction was
// simply rejected as opposed to something actually going wrong,
// so log it as such. Otherwise, something really did go wrong,
// so log it as an actual error.
if _, ok := err.(mempool.RuleError); ok {
bmgrLog.Debugf("Rejected transaction %v from %s: %v",
txHash, tmsg.peer, err)
} else {
bmgrLog.Errorf("Failed to process transaction %v: %v",
txHash, err)
}
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
tmsg.peer.PushRejectMsg(wire.CmdTx, code, reason, txHash,
false)
return
}
b.server.AnnounceNewTransactions(acceptedTxs)
}
// current returns true if we believe we are synced with our peers, false if we
// still have blocks to check.
func (b *blockManager) current() bool {
if !b.chain.IsCurrent() {
return false
}
// If the chain thinks we are current and we have no syncPeer, it
// is probably right.
if b.syncPeer == nil {
return true
}
// No matter what the chain thinks, if we are below the block we are
// syncing to, we are not current.
if b.chain.BestSnapshot().Height < b.syncPeer.LastBlock() {
return false
}
return true
}
// handleBlockMsg handles block messages from all peers.
func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
// If we didn't ask for this block then the peer is misbehaving.
blockHash := bmsg.block.Hash()
if _, exists := bmsg.peer.requestedBlocks[*blockHash]; !exists {
// The regression test intentionally sends some blocks twice
// to test that duplicate block insertion fails. Don't disconnect
// the peer or ignore the block when we're in regression test
// mode in this case, so the chain code is actually fed the
// duplicate blocks.
if !cfg.RegressionTest {
bmgrLog.Warnf("Got unrequested block %v from %s -- "+
"disconnecting", blockHash, bmsg.peer.Addr())
bmsg.peer.Disconnect()
return
}
}
// When in headers-first mode, if the block matches the hash of the
// first header in the list of headers that are being fetched, it's
// eligible for less validation since the headers have already been
// verified to link together and are valid up to the next checkpoint.
// Also, remove the list entry for all blocks except the checkpoint,
// since the checkpoint entry is needed to verify that the next round
// of headers links properly.
isCheckpointBlock := false
behaviorFlags := blockchain.BFNone
if b.headersFirstMode {
firstNodeEl := b.headerList.Front()
if firstNodeEl != nil {
firstNode := firstNodeEl.Value.(*headerNode)
if blockHash.IsEqual(firstNode.hash) {
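// The header for this block has already been verified to link into
// the checkpointed chain, so allow the chain to skip some of the
// more expensive validation via the fast-add flag.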
behaviorFlags |= blockchain.BFFastAdd
if firstNode.hash.IsEqual(b.nextCheckpoint.Hash) {
isCheckpointBlock = true
} else {
b.headerList.Remove(firstNodeEl)
}
}
}
}
// Remove the block from the request maps. Either the chain will know
// about it and so we shouldn't have any more instances of trying to
// fetch it, or we will fail the insert and thus we'll retry next time
// we get an inv.
delete(bmsg.peer.requestedBlocks, *blockHash)
delete(b.requestedBlocks, *blockHash)
// Process the block to include validation, best chain selection, orphan
// handling, etc.
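// The first return value indicates whether or not the block is now on
// the main chain, which is not needed here.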
_, isOrphan, err := b.chain.ProcessBlock(bmsg.block, behaviorFlags)
if err != nil {
// When the error is a rule error, it means the block was simply
// rejected as opposed to something actually going wrong, so log
// it as such. Otherwise, something really did go wrong, so log
// it as an actual error.
if _, ok := err.(blockchain.RuleError); ok {
bmgrLog.Infof("Rejected block %v from %s: %v", blockHash,
bmsg.peer, err)
} else {
bmgrLog.Errorf("Failed to process block %v: %v",
blockHash, err)
}
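// A database corruption error means the chain state can no longer be
// trusted, so treat it as fatal rather than attempting to continue.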
if dbErr, ok := err.(database.Error); ok && dbErr.ErrorCode ==
database.ErrCorruption {
panic(dbErr)
}
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
bmsg.peer.PushRejectMsg(wire.CmdBlock, code, reason,
blockHash, false)
return
}
// Meta-data about the new block this peer is reporting. We use this
// below to update this peer's latest block height and the heights of
// other peers based on their last announced block hash. This allows us
// to dynamically update the block heights of peers, avoiding stale
// heights when looking for a new sync peer. Upon acceptance of a block
// or recognition of an orphan, we also use this information to update
// the block heights of other peers whose invs may have been ignored
// if we are actively syncing while the chain is not yet current or
// who may have lost the block announcement race.
var heightUpdate int32
var blkHashUpdate *chainhash.Hash
// Request the parents for the orphan block from the peer that sent it.
if isOrphan {
// We've just received an orphan block from a peer. In order
// to update the height of the peer, we try to extract the
// block height from the scriptSig of the coinbase transaction.
// Extraction is only attempted if the block's version is
// high enough (ver 2+).
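// (BIP34 requires blocks with version 2 and above to encode the block
// height in the coinbase signature script.)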
header := &bmsg.block.MsgBlock().Header
if blockchain.ShouldHaveSerializedBlockHeight(header) {
coinbaseTx := bmsg.block.Transactions()[0]
cbHeight, err := blockchain.ExtractCoinbaseHeight(coinbaseTx)
if err != nil {
bmgrLog.Warnf("Unable to extract height from "+
"coinbase tx: %v", err)
} else {
bmgrLog.Debugf("Extracted height of %v from "+
"orphan block", cbHeight)
heightUpdate = cbHeight
blkHashUpdate = blockHash
}
}
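// Find the root of the orphan chain this block belongs to and request
// the missing blocks between our best chain tip and that root.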
orphanRoot := b.chain.GetOrphanRoot(blockHash)
locator, err := b.chain.LatestBlockLocator()
if err != nil {
bmgrLog.Warnf("Failed to get block locator for the "+
"latest block: %v", err)
} else {
bmsg.peer.PushGetBlocksMsg(locator, orphanRoot)
}
} else {
// When the block is not an orphan, log information about it and
// update the chain state.
b.progressLogger.LogBlockHeight(bmsg.block)
// Update this peer's latest block height, for future
// potential sync node candidacy.
best := b.chain.BestSnapshot()
heightUpdate = best.Height
blkHashUpdate = &best.Hash
// Clear the rejected transactions.
b.rejectedTxns = make(map[chainhash.Hash]struct{})
// Allow any clients performing long polling via the
// getblocktemplate RPC to be notified when the new block causes
// their old block template to become stale.
rpcServer := b.server.rpcServer
if rpcServer != nil {
rpcServer.gbtWorkState.NotifyBlockConnected(blockHash)
}
}
// Update the block height for this peer. But only send a message to
// the server for updating peer heights if this is an orphan or our
// chain is "current". This avoids sending a spammy amount of messages
// if we're syncing the chain from scratch.
if blkHashUpdate != nil && heightUpdate != 0 {
bmsg.peer.UpdateLastBlockHeight(heightUpdate)
if isOrphan || b.current() {
go b.server.UpdatePeerHeights(blkHashUpdate, heightUpdate, bmsg.peer)
}
}
// Nothing more to do if we aren't in headers-first mode.
if !b.headersFirstMode {
return
}
// This is headers-first mode, so if the block is not a checkpoint,
// request more blocks using the header list when the request queue is
// getting short.
if !isCheckpointBlock {
if b.startHeader != nil &&
len(bmsg.peer.requestedBlocks) < minInFlightBlocks {
b.fetchHeaderBlocks()
}
return
}
// This is headers-first mode and the block is a checkpoint. When
// there is a next checkpoint, get the next round of headers by asking
// for headers starting from the block after this one up to the next
// checkpoint.
prevHeight := b.nextCheckpoint.Height
prevHash := b.nextCheckpoint.Hash
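// findNextHeaderCheckpoint returns nil when no checkpoints remain
// after the given height, which is handled below by leaving
// headers-first mode.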
b.nextCheckpoint = b.findNextHeaderCheckpoint(prevHeight)
if b.nextCheckpoint != nil {
locator := blockchain.BlockLocator([]*chainhash.Hash{prevHash})
err := bmsg.peer.PushGetHeadersMsg(locator, b.nextCheckpoint.Hash)
if err != nil {
bmgrLog.Warnf("Failed to send getheaders message to "+
"peer %s: %v", bmsg.peer.Addr(), err)
return
}
bmgrLog.Infof("Downloading headers for blocks %d to %d from "+
"peer %s", prevHeight+1, b.nextCheckpoint.Height,
b.syncPeer.Addr())
return
}
// This is headers-first mode, the block is a checkpoint, and there are
// no more checkpoints, so switch to normal mode by requesting blocks
// from the block after this one up to the end of the chain (zero hash).
b.headersFirstMode = false
b.headerList.Init()
bmgrLog.Infof("Reached the final checkpoint -- switching to normal mode")
locator := blockchain.BlockLocator([]*chainhash.Hash{blockHash})
err = bmsg.peer.PushGetBlocksMsg(locator, &zeroHash)
if err != nil {
bmgrLog.Warnf("Failed to send getblocks message to peer %s: %v",
bmsg.peer.Addr(), err)
return
}
}
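As the comment in the function above notes, the switch to normal mode boils down to a getblocks request whose locator is rooted at the final checkpoint and whose stop hash is all zeros, asking the peer for block inventory through the end of its chain. The standalone sketch below builds the equivalent wire message directly; the checkpoint hash is a placeholder and the import paths are the upstream btcd ones. In the block manager itself, the peer's PushGetBlocksMsg helper constructs this message from a block locator and also filters duplicate requests.

package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
)

func main() {
	// Placeholder hash standing in for the final checkpoint's block hash.
	checkpointHash, err := chainhash.NewHashFromStr(
		"0000000000000000000000000000000000000000000000000000000000000001")
	if err != nil {
		panic(err)
	}

	// An all-zero stop hash asks the peer for block inventory from the
	// locator onward through the end of its best chain (subject to the
	// protocol's per-message limits).
	var zeroHash chainhash.Hash
	msg := wire.NewMsgGetBlocks(&zeroHash)
	if err := msg.AddBlockLocatorHash(checkpointHash); err != nil {
		panic(err)
	}
	fmt.Printf("getblocks: %d locator hash(es), stop hash %v\n",
		len(msg.BlockLocatorHashes), msg.HashStop)
}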
// fetchHeaderBlocks creates and sends a request to the syncPeer for the next
// list of blocks to be downloaded based on the current list of headers.
func (b *blockManager) fetchHeaderBlocks() {
// Nothing to do if there is no start header.
if b.startHeader == nil {
bmgrLog.Warnf("fetchHeaderBlocks called with no start header")
return
}
// Build up a getdata request for the list of blocks the headers
// describe. The size hint will be limited to wire.MaxInvPerMsg by
// the function, so no need to double check it here. A standalone sketch
// of this batching pattern follows the function.
gdmsg := wire.NewMsgGetDataSizeHint(uint(b.headerList.Len()))
numRequested := 0
for e := b.startHeader; e != nil; e = e.Next() {
node, ok := e.Value.(*headerNode)
if !ok {
bmgrLog.Warn("Header list node type is not a headerNode")
continue
}
iv := wire.NewInvVect(wire.InvTypeBlock, node.hash)
haveInv, err := b.haveInventory(iv)
if err != nil {
bmgrLog.Warnf("Unexpected failure when checking for "+
"existing inventory during header block "+
"fetch: %v", err)
}
if !haveInv {
b.requestedBlocks[*node.hash] = struct{}{}
b.syncPeer.requestedBlocks[*node.hash] = struct{}{}
gdmsg.AddInvVect(iv)
numRequested++
}
b.startHeader = e.Next()
if numRequested >= wire.MaxInvPerMsg {
break
}
}
if len(gdmsg.InvList) > 0 {
b.syncPeer.QueueMessage(gdmsg, nil)
}
}
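fetchHeaderBlocks above batches block requests into a single getdata message capped at wire.MaxInvPerMsg entries. The standalone sketch below shows just that batching pattern under simplified assumptions: batchGetData is a hypothetical helper name, a plain slice of hashes stands in for the in-memory header list, and the haveInventory check and per-peer request bookkeeping are omitted. Import paths are the upstream btcd ones.

package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
)

// batchGetData packs up to wire.MaxInvPerMsg block hashes into a single
// getdata message and reports how many hashes were consumed.
func batchGetData(hashes []*chainhash.Hash) (*wire.MsgGetData, int) {
	gdmsg := wire.NewMsgGetDataSizeHint(uint(len(hashes)))
	numRequested := 0
	for _, hash := range hashes {
		iv := wire.NewInvVect(wire.InvTypeBlock, hash)
		if err := gdmsg.AddInvVect(iv); err != nil {
			// AddInvVect fails once the message is full.
			break
		}
		numRequested++
		if numRequested >= wire.MaxInvPerMsg {
			break
		}
	}
	return gdmsg, numRequested
}

func main() {
	// Placeholder hashes; in the block manager these come from the header
	// list accumulated during headers-first sync.
	hashes := make([]*chainhash.Hash, 3)
	for i := range hashes {
		h := chainhash.DoubleHashH([]byte{byte(i)})
		hashes[i] = &h
	}
	msg, n := batchGetData(hashes)
	fmt.Printf("requesting %d block(s) in one getdata (%d inv entries)\n",
		n, len(msg.InvList))
}

Splitting a longer hash list into multiple getdata messages would simply repeat this call on the remaining slice, which is effectively what advancing b.startHeader accomplishes in the real code.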
// handleHeadersMsg handles headers messages from all peers.
func (b *blockManager) handleHeadersMsg(hmsg *headersMsg) {
// The remote peer is misbehaving if we didn't request headers.
msg := hmsg.headers
numHeaders := len(msg.Headers)
if !b.headersFirstMode {
bmgrLog.Warnf("Got %d unrequested headers from %s -- "+
"disconnecting", numHeaders, hmsg.peer.Addr())
hmsg.peer.Disconnect()
return
}
// Nothing to do for an empty headers message.
if numHeaders == 0 {
return
}
// Process all of the received headers ensuring each one connects to the
// previous and that checkpoints match.
receivedCheckpoint := false
var finalHash *chainhash.Hash
for _, blockHeader := range msg.Headers {
blockHash := blockHeader.BlockHash()
finalHash = &blockHash
// Ensure there is a previous header to compare against.
prevNodeEl := b.headerList.Back()
if prevNodeEl == nil {
bmgrLog.Warnf("Header list does not contain a previous" +
"element as expected -- disconnecting peer")
hmsg.peer.Disconnect()
return
}
// Ensure the header properly connects to the previous one and
// add it to the list of headers.
node := headerNode{hash: &blockHash}
prevNode := prevNodeEl.Value.(*headerNode)
if prevNode.hash.IsEqual(&blockHeader.PrevBlock) {
node.height = prevNode.height + 1
e := b.headerList.PushBack(&node)
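// Remember the first header pushed onto the list when no start
// header is set yet; block fetching later begins from this entry.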
if b.startHeader == nil {
b.startHeader = e
}
} else {
bmgrLog.Warnf("Received block header that does not "+
"properly connect to the chain from peer %s "+
"-- disconnecting", hmsg.peer.Addr())
hmsg.peer.Disconnect()
return
}
// Verify the header at the next checkpoint height matches.
if node.height == b.nextCheckpoint.Height {
if node.hash.IsEqual(b.nextCheckpoint.Hash) {
receivedCheckpoint = true
bmgrLog.Infof("Verified downloaded block "+
"header against checkpoint at height "+
"%d/hash %s", node.height, node.hash)
} else {
bmgrLog.Warnf("Block header at height %d/hash "+
"%s from peer %s does NOT match "+
"expected checkpoint hash of %s -- "+
"disconnecting", node.height,
node.hash, hmsg.peer.Addr(),
b.nextCheckpoint.Hash)
hmsg.peer.Disconnect()
return
}
break
}
}
// When this header is a checkpoint, switch to fetching the blocks for
// all of the headers since the last checkpoint.
if receivedCheckpoint {
// Since the first entry of the list is always the final block
// that is already in the database and is only used to ensure
// the next header links properly, it must be removed before
// fetching the blocks.
b.headerList.Remove(b.headerList.Front())
bmgrLog.Infof("Received %v block headers: Fetching blocks",
b.headerList.Len())
b.progressLogger.SetLastLogTime(time.Now())
b.fetchHeaderBlocks()
return
}
// This header is not a checkpoint, so request the next batch of
// headers starting from the latest known header and ending with the
// next checkpoint.
locator := blockchain.BlockLocator([]*chainhash.Hash{finalHash})
err := hmsg.peer.PushGetHeadersMsg(locator, b.nextCheckpoint.Hash)
if err != nil {
bmgrLog.Warnf("Failed to send getheaders message to "+
"peer %s: %v", hmsg.peer.Addr(), err)
return
}
}
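// The helper below is an illustrative sketch only and is not part of the
// original file. It shows one way the "next checkpoint" used above could be
// located for a given height, assuming the checkpoints slice is sorted by
// height and that the chaincfg package referenced elsewhere in this file is
// available. The name nextCheckpointAfter is hypothetical; the block
// manager's real lookup may differ.
func nextCheckpointAfter(checkpoints []chaincfg.Checkpoint, height int32) *chaincfg.Checkpoint {
	// Headers-first mode is only useful when checkpoints exist and the
	// current height is below the final one.
	if len(checkpoints) == 0 {
		return nil
	}
	if height >= checkpoints[len(checkpoints)-1].Height {
		return nil
	}

	// Return the first checkpoint strictly after the given height so the
	// header download window is bounded by the distance between two
	// checkpoints rather than the full length of the chain.
	for i := range checkpoints {
		if height < checkpoints[i].Height {
			return &checkpoints[i]
		}
	}
	return nil
}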
// haveInventory returns whether or not the inventory represented by the passed
// inventory vector is known. This includes checking all of the various places
// inventory can be when it is in different states such as blocks that are part
// of the main chain, on a side chain, in the orphan pool, and transactions that
// are in the memory pool (either the main pool or orphan pool).
func (b *blockManager) haveInventory(invVect *wire.InvVect) (bool, error) {
switch invVect.Type {
case wire.InvTypeBlock:
// Ask chain if the block is known to it in any form (main
// chain, side chain, or orphan).
return b.chain.HaveBlock(&invVect.Hash)
case wire.InvTypeTx:
// Ask the transaction memory pool if the transaction is known
// to it in any form (main pool or orphan).
if b.server.txMemPool.HaveTransaction(&invVect.Hash) {
return true, nil
}
// Check if the transaction exists from the point of view of the
// end of the main chain.
entry, err := b.chain.FetchUtxoEntry(&invVect.Hash)
if err != nil {
return false, err
}
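// A missing or fully spent utxo entry means the transaction is not
// known from the viewpoint of the end of the main chain, so report
// it as unknown and allow it to be requested.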
return entry != nil && !entry.IsFullySpent(), nil
}
// The requested inventory is an unsupported type, so just claim
// it is known to avoid requesting it.
return true, nil
}
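// Illustrative sketch only (not part of the original file): it shows how
// haveInventory can be used to reduce an announced inventory list to the
// entries that still need to be requested from a peer. The method name
// unknownInventory is hypothetical; errors are conservatively treated as
// "not known" here so the data is requested rather than silently skipped.
func (b *blockManager) unknownInventory(invVects []*wire.InvVect) []*wire.InvVect {
	unknown := make([]*wire.InvVect, 0, len(invVects))
	for _, iv := range invVects {
		haveInv, err := b.haveInventory(iv)
		if err != nil || !haveInv {
			unknown = append(unknown, iv)
		}
	}
	return unknown
}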
// handleInvMsg handles inv messages from all peers.
// We examine the inventory advertised by the remote peer and act accordingly.
func (b *blockManager) handleInvMsg(imsg *invMsg) {
// Attempt to find the final block in the inventory list. There may
// not be one.
lastBlock := -1
invVects := imsg.inv.InvList
for i := len(invVects) - 1; i >= 0; i-- {
if invVects[i].Type == wire.InvTypeBlock {
lastBlock = i
break
}
}
// If this inv contains a block announcement, and this isn't coming from
// our current sync peer or we're current, then update the last
// announced block for this peer. We'll use this information later to
// update the heights of peers based on blocks we've accepted that they
// previously announced.
if lastBlock != -1 && (imsg.peer != b.syncPeer || b.current()) {
imsg.peer.UpdateLastAnnouncedBlock(&invVects[lastBlock].Hash)
}
// Ignore invs from peers that aren't the sync peer if we are not
// current. Helps prevent fetching a mass of orphans.
if imsg.peer != b.syncPeer && !b.current() {
return
}
// If our chain is current and a peer announces a block we already
// know of, then update their current block height.
if lastBlock != -1 && b.current() {
blkHeight, err := b.chain.BlockHeightByHash(&invVects[lastBlock].Hash)
if err == nil {
imsg.peer.UpdateLastBlockHeight(blkHeight)
}
}
// Request the advertised inventory if we don't already have it. Also,
// request parent blocks of orphans if we receive one we already have.
// Finally, attempt to detect potential stalls due to long side chains
// we already have and request more blocks to prevent them.
for i, iv := range invVects {
// Ignore unsupported inventory types.
if iv.Type != wire.InvTypeBlock && iv.Type != wire.InvTypeTx {
continue
}
// Add the inventory to the cache of known inventory
// for the peer.
imsg.peer.AddKnownInventory(iv)
// Ignore inventory when we're in headers-first mode.
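// In that mode the blocks to download are determined from the
// downloaded headers rather than from inv announcements.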
if b.headersFirstMode {
continue
}
// Request the inventory if we don't already have it.
haveInv, err := b.haveInventory(iv)
if err != nil {
bmgrLog.Warnf("Unexpected failure when checking for "+
"existing inventory during inv message "+
"processing: %v", err)
continue
}
if !haveInv {
if iv.Type == wire.InvTypeTx {
// Skip the transaction if it has already been
// rejected.
if _, exists := b.rejectedTxns[iv.Hash]; exists {
continue
}
}
// Add it to the request queue.
imsg.peer.requestQueue = append(imsg.peer.requestQueue, iv)
continue
}
if iv.Type == wire.InvTypeBlock {
// The block is an orphan block that we already have.
// When the existing orphan was processed, it requested
// the missing parent blocks. When this scenario
// happens, it means there were more blocks missing
// than are allowed into a single inventory message. As
// a result, once this peer requested the final
// advertised block, the remote peer noticed and is now
// resending the orphan block as an available block
// to signal there are more missing blocks that need to
// be requested.
if b.chain.IsKnownOrphan(&iv.Hash) {
// Request blocks starting at the latest known
// up to the root of the orphan that just came
// in.
orphanRoot := b.chain.GetOrphanRoot(&iv.Hash)
locator, err := b.chain.LatestBlockLocator()
if err != nil {
bmgrLog.Errorf("PEER: Failed to get block "+
"locator for the latest block: "+
"%v", err)
continue
}
imsg.peer.PushGetBlocksMsg(locator, orphanRoot)
continue
}
// We already have the final block advertised by this
// inventory message, so force a request for more. This
// should only happen if we're on a really long side
// chain.
if i == lastBlock {
// Request blocks after this one up to the
// final one the remote peer knows about (zero
// stop hash).
locator := b.chain.BlockLocatorFromHash(&iv.Hash)
imsg.peer.PushGetBlocksMsg(locator, &zeroHash)
}
}
}
// Request as much as possible at once. Anything that won't fit into
// the request will be requested on the next inv message.
numRequested := 0
gdmsg := wire.NewMsgGetData()
requestQueue := imsg.peer.requestQueue
for len(requestQueue) != 0 {
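// Dequeue the next inventory vector, clearing the slot so the
// underlying array no longer holds a reference to it.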
iv := requestQueue[0]
requestQueue[0] = nil
requestQueue = requestQueue[1:]
switch iv.Type {
case wire.InvTypeBlock:
// Request the block if there is not already a pending
// request.
if _, exists := b.requestedBlocks[iv.Hash]; !exists {
b.requestedBlocks[iv.Hash] = struct{}{}
b.limitMap(b.requestedBlocks, maxRequestedBlocks)
imsg.peer.requestedBlocks[iv.Hash] = struct{}{}
gdmsg.AddInvVect(iv)
numRequested++
}
case wire.InvTypeTx:
// Request the transaction if there is not already a
// pending request.
if _, exists := b.requestedTxns[iv.Hash]; !exists {
b.requestedTxns[iv.Hash] = struct{}{}
b.limitMap(b.requestedTxns, maxRequestedTxns)
imsg.peer.requestedTxns[iv.Hash] = struct{}{}
gdmsg.AddInvVect(iv)
numRequested++
}
}
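// Cap the request at the maximum number of inventory vectors
// allowed in a single getdata message.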
if numRequested >= wire.MaxInvPerMsg {
break
}
}
imsg.peer.requestQueue = requestQueue
if len(gdmsg.InvList) > 0 {
imsg.peer.QueueMessage(gdmsg, nil)
}
}
// limitMap is a helper function for maps that require a maximum limit by
// evicting a random entry if adding a new value would cause it to
// overflow the maximum allowed.
func (b *blockManager) limitMap(m map[chainhash.Hash]struct{}, limit int) {
if len(m)+1 > limit {
// Remove a random entry from the map. In practice, Go's range
// statement over a map iterates starting at a random item, although
// that is not guaranteed by the spec. The iteration order
// is not important here because an adversary would have to be
// able to pull off preimage attacks on the hashing function in
// order to target eviction of specific entries anyway.
for txHash := range m {
delete(m, txHash)
return
}
}
}
// blockHandler is the main handler for the block manager. It must be run
// as a goroutine. It processes block and inv messages in a separate goroutine
// from the peer handlers so the block (MsgBlock) messages are handled by a
// single thread without needing to lock memory data structures. This is
// important because the block manager controls which blocks are needed and how
// the fetching should proceed.
func (b *blockManager) blockHandler() {
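// candidatePeers holds the peers that are eligible to be
// selected as the sync peer.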
candidatePeers := list.New()
out:
for {
select {
case m := <-b.msgChan:
switch msg := m.(type) {
case *newPeerMsg:
b.handleNewPeerMsg(candidatePeers, msg.peer)
case *txMsg:
b.handleTxMsg(msg)
msg.peer.txProcessed <- struct{}{}
case *blockMsg:
b.handleBlockMsg(msg)
msg.peer.blockProcessed <- struct{}{}
case *invMsg:
b.handleInvMsg(msg)
case *headersMsg:
b.handleHeadersMsg(msg)
case *donePeerMsg:
b.handleDonePeerMsg(candidatePeers, msg.peer)
case getSyncPeerMsg:
msg.reply <- b.syncPeer
case processBlockMsg:
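// Process the block using the flags provided by the caller.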
_, isOrphan, err := b.chain.ProcessBlock(
msg.block, msg.flags)
if err != nil {
msg.reply <- processBlockResponse{
isOrphan: false,
err: err,
}
}
// Allow any clients performing long polling via the
// getblocktemplate RPC to be notified when the new block causes
// their old block template to become stale.
rpcServer := b.server.rpcServer
if rpcServer != nil {
rpcServer.gbtWorkState.NotifyBlockConnected(msg.block.Hash())
}
msg.reply <- processBlockResponse{
isOrphan: isOrphan,
err: nil,
}
case isCurrentMsg:
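// Reply with whether or not the chain is believed to be
// current with the rest of the network.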
msg.reply <- b.current()
case pauseMsg:
// Wait until the sender unpauses the manager.
<-msg.unpause
default:
bmgrLog.Warnf("Invalid message type in block "+
"handler: %T", msg)
}
case <-b.quit:
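// Shutdown has been requested, so exit the handler loop.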
break out
}
}
b.wg.Done()
bmgrLog.Trace("Block handler done")
}
// handleNotifyMsg handles notifications from blockchain. It does things such
// as request orphan block parents and relay accepted blocks to connected peers.
func (b *blockManager) handleNotifyMsg(notification *blockchain.Notification) {
switch notification.Type {
// A block has been accepted into the block chain. Relay it to other
// peers.
case blockchain.NTBlockAccepted:
// Don't relay if we are not current. Other peers that are
// current should already know about it.
if !b.current() {
return
}
block, ok := notification.Data.(*btcutil.Block)
if !ok {
bmgrLog.Warnf("Chain accepted notification is not a block.")
break
}
// Generate the inventory vector and relay it.
iv := wire.NewInvVect(wire.InvTypeBlock, block.Hash())
b.server.RelayInventory(iv, block.MsgBlock().Header)
// A block has been connected to the main block chain.
case blockchain.NTBlockConnected:
block, ok := notification.Data.(*btcutil.Block)
if !ok {
bmgrLog.Warnf("Chain connected notification is not a block.")
break
}
// Remove all of the transactions (except the coinbase) in the
// connected block from the transaction pool. Secondly, remove any
// transactions which are now double spends as a result of these
// new transactions. Finally, remove any transaction that is
// no longer an orphan. Transactions which depend on a confirmed
// transaction are NOT removed recursively because they are still
// valid.
for _, tx := range block.Transactions()[1:] {
b.server.txMemPool.RemoveTransaction(tx, false)
b.server.txMemPool.RemoveDoubleSpends(tx)
b.server.txMemPool.RemoveOrphan(tx)
acceptedTxs := b.server.txMemPool.ProcessOrphans(tx)
b.server.AnnounceNewTransactions(acceptedTxs)
}
if r := b.server.rpcServer; r != nil {
// Now that this block is in the blockchain we can mark
// all the transactions (except the coinbase) as no
// longer needing rebroadcasting.
for _, tx := range block.Transactions()[1:] {
iv := wire.NewInvVect(wire.InvTypeTx, tx.Hash())
b.server.RemoveRebroadcastInventory(iv)
}
// Notify registered websocket clients of incoming block.
r.ntfnMgr.NotifyBlockConnected(block)
}
// A block has been disconnected from the main block chain.
case blockchain.NTBlockDisconnected:
block, ok := notification.Data.(*btcutil.Block)
if !ok {
bmgrLog.Warnf("Chain disconnected notification is not a block.")
break
}
// Reinsert all of the transactions (except the coinbase) into
// the transaction pool.
for _, tx := range block.Transactions()[1:] {
_, _, err := b.server.txMemPool.MaybeAcceptTransaction(tx,
false, false)
if err != nil {
// Remove the transaction and all transactions
// that depend on it if it wasn't accepted into
// the transaction pool.
b.server.txMemPool.RemoveTransaction(tx, true)
}
}
// Notify registered websocket clients.
if r := b.server.rpcServer; r != nil {
r.ntfnMgr.NotifyBlockDisconnected(block)
}
}
}
// NewPeer informs the block manager of a newly active peer.
func (b *blockManager) NewPeer(sp *serverPeer) {
// Ignore if we are shutting down.
if atomic.LoadInt32(&b.shutdown) != 0 {
return
}
b.msgChan <- &newPeerMsg{peer: sp}
}
// QueueTx adds the passed transaction message and peer to the block handling
// queue.
func (b *blockManager) QueueTx(tx *btcutil.Tx, sp *serverPeer) {
// Don't accept more transactions if we're shutting down.
if atomic.LoadInt32(&b.shutdown) != 0 {
sp.txProcessed <- struct{}{}
return
}
b.msgChan <- &txMsg{tx: tx, peer: sp}
}
// QueueBlock adds the passed block message and peer to the block handling queue.
func (b *blockManager) QueueBlock(block *btcutil.Block, sp *serverPeer) {
// Don't accept more blocks if we're shutting down.
if atomic.LoadInt32(&b.shutdown) != 0 {
sp.blockProcessed <- struct{}{}
return
}
b.msgChan <- &blockMsg{block: block, peer: sp}
}
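// The following is an illustrative sketch (not part of the original file) of
// how a caller such as the server's block callback is expected to use
// QueueBlock: queue the block, then wait on the peer's blockProcessed channel
// so only one block per peer is in flight at a time. The blockHandler (or
// QueueBlock itself on shutdown) signals the channel once processing is done.
//
//	b.QueueBlock(block, sp)
//	<-sp.blockProcessed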
// QueueInv adds the passed inv message and peer to the block handling queue.
func (b *blockManager) QueueInv(inv *wire.MsgInv, sp *serverPeer) {
// No channel handling here because peers do not need to block on inv
// messages.
if atomic.LoadInt32(&b.shutdown) != 0 {
return
}
b.msgChan <- &invMsg{inv: inv, peer: sp}
}
// QueueHeaders adds the passed headers message and peer to the block handling
// queue.
func (b *blockManager) QueueHeaders(headers *wire.MsgHeaders, sp *serverPeer) {
// No channel handling here because peers do not need to block on
// headers messages.
if atomic.LoadInt32(&b.shutdown) != 0 {
return
}
b.msgChan <- &headersMsg{headers: headers, peer: sp}
}
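// Unlike QueueBlock, the inv and headers queue methods above are
// fire-and-forget: they perform no reply-channel handshake, so callers simply
// hand the message off and continue. An illustrative sketch (not part of the
// original file), reusing the parameter names from the methods above:
//
//	b.QueueInv(inv, sp)         // returns immediately
//	b.QueueHeaders(headers, sp) // returns immediately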
// DonePeer informs the blockmanager that a peer has disconnected.
func (b *blockManager) DonePeer(sp *serverPeer) {
// Ignore if we are shutting down.
if atomic.LoadInt32(&b.shutdown) != 0 {
return
}
b.msgChan <- &donePeerMsg{peer: sp}
}
// Start begins the core block handler which processes block and inv messages.
func (b *blockManager) Start() {
// Already started?
if atomic.AddInt32(&b.started, 1) != 1 {
return
}
bmgrLog.Trace("Starting block manager")
b.wg.Add(1)
go b.blockHandler()
}
// Stop gracefully shuts down the block manager by stopping all asynchronous
// handlers and waiting for them to finish.
func (b *blockManager) Stop() error {
if atomic.AddInt32(&b.shutdown, 1) != 1 {
bmgrLog.Warnf("Block manager is already in the process of " +
"shutting down")
return nil
}
bmgrLog.Infof("Block manager shutting down")
close(b.quit)
b.wg.Wait()
return nil
}
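// An illustrative sketch (not part of the original file) of the intended
// lifecycle: Start launches the single blockHandler goroutine, and Stop closes
// the quit channel and blocks on the wait group until that goroutine exits.
//
//	b.Start()
//	defer b.Stop()
//	// ... queue blocks, inventory, and headers from peer callbacks ...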
// SyncPeer returns the current sync peer.
func (b *blockManager) SyncPeer() *serverPeer {
reply := make(chan *serverPeer)
b.msgChan <- getSyncPeerMsg{reply: reply}
return <-reply
}
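// SyncPeer above follows the request/reply pattern shared by the other query
// methods: the caller allocates a reply channel, sends it to the blockHandler
// over msgChan, and blocks until the handler answers. A minimal sketch (not
// part of the original file) of that pattern:
//
//	reply := make(chan *serverPeer)
//	b.msgChan <- getSyncPeerMsg{reply: reply}
//	syncPeer := <-reply // may be nil if no sync peer has been selected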
// ProcessBlock makes use of ProcessBlock on an internal instance of a block
// chain. It is funneled through the block manager so that block processing is
// serialized with the rest of the block and inventory handling.
func (b *blockManager) ProcessBlock(block *btcutil.Block, flags blockchain.BehaviorFlags) (bool, error) {
reply := make(chan processBlockResponse, 1)
b.msgChan <- processBlockMsg{block: block, flags: flags, reply: reply}
response := <-reply
return response.isOrphan, response.err
}
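// An illustrative sketch (not part of the original file) of how a caller such
// as the CPU miner or the RPC submitblock handler might use ProcessBlock to
// submit a locally generated block with no special behavior flags:
//
//	isOrphan, err := b.ProcessBlock(block, blockchain.BFNone)
//	if err != nil {
//		// The block was rejected as invalid.
//	} else if isOrphan {
//		// The block does not connect to the known chain yet.
//	}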
// IsCurrent returns whether or not the block manager believes it is synced with
// the connected peers.
func (b *blockManager) IsCurrent() bool {
reply := make(chan bool)
b.msgChan <- isCurrentMsg{reply: reply}
return <-reply
}
// Pause pauses the block manager until the returned channel is closed.
//
// Note that while paused, all peer and block processing is halted. The
// message sender should avoid pausing the block manager for long durations.
func (b *blockManager) Pause() chan<- struct{} {
c := make(chan struct{})
b.msgChan <- pauseMsg{c}
return c
}
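// An illustrative sketch (not part of the original file) of using Pause: the
// returned send-only channel keeps the block manager paused until the caller
// closes it, so the close should happen as soon as the critical section ends.
//
//	unpause := b.Pause()
//	// ... perform work that must not race with block processing ...
//	close(unpause)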
// checkpointSorter implements sort.Interface to allow a slice of checkpoints to
// be sorted.
type checkpointSorter []chaincfg.Checkpoint
// Len returns the number of checkpoints in the slice. It is part of the
// sort.Interface implementation.
func (s checkpointSorter) Len() int {
return len(s)
}
// Swap swaps the checkpoints at the passed indices. It is part of the
// sort.Interface implementation.
func (s checkpointSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Less returns whether the checkpoint with index i should sort before the
// checkpoint with index j. It is part of the sort.Interface implementation.
func (s checkpointSorter) Less(i, j int) bool {
return s[i].Height < s[j].Height
}
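// With Len, Swap, and Less defined above, checkpointSorter satisfies
// sort.Interface, so a slice of checkpoints can be ordered by height with the
// standard library, exactly as mergeCheckpoints below does:
//
//	sort.Sort(checkpointSorter(checkpoints))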
// mergeCheckpoints merges two slices of checkpoints into a single slice sorted
// by height. When the additional checkpoints contain a checkpoint with the
// same height as a checkpoint in the default checkpoints, the additional
// checkpoint takes precedence and overwrites the default one.
func mergeCheckpoints(defaultCheckpoints, additional []chaincfg.Checkpoint) []chaincfg.Checkpoint {
// Create a map of the additional checkpoints to remove duplicates while
// leaving the most recently-specified checkpoint.
extra := make(map[int32]chaincfg.Checkpoint)
for _, checkpoint := range additional {
extra[checkpoint.Height] = checkpoint
}
// Add all default checkpoints that do not have an override in the
// additional checkpoints.
numDefault := len(defaultCheckpoints)
checkpoints := make([]chaincfg.Checkpoint, 0, numDefault+len(extra))
for _, checkpoint := range defaultCheckpoints {
if _, exists := extra[checkpoint.Height]; !exists {
checkpoints = append(checkpoints, checkpoint)
}
}
// Append the additional checkpoints and return the sorted results.
for _, checkpoint := range extra {
checkpoints = append(checkpoints, checkpoint)
}
sort.Sort(checkpointSorter(checkpoints))
return checkpoints
}
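// An illustrative sketch (not part of the original file) of the override
// behavior described above: an additional checkpoint at a height that also
// appears in the defaults replaces the default entry, and the merged result
// stays sorted by height. The hash values here are placeholders.
//
//	defaults := []chaincfg.Checkpoint{
//		{Height: 11111, Hash: hashA},
//		{Height: 33333, Hash: hashB},
//	}
//	additional := []chaincfg.Checkpoint{{Height: 33333, Hash: hashC}}
//	merged := mergeCheckpoints(defaults, additional)
//	// merged: [{11111 hashA} {33333 hashC}]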
2013-08-06 23:55:22 +02:00
// newBlockManager returns a new bitcoin block manager.
// Use Start to begin processing asynchronous block and inv updates.
func newBlockManager(s *server, indexManager blockchain.IndexManager) (*blockManager, error) {
bm := blockManager{
server: s,
rejectedTxns: make(map[chainhash.Hash]struct{}),
requestedTxns: make(map[chainhash.Hash]struct{}),
requestedBlocks: make(map[chainhash.Hash]struct{}),
progressLogger: newBlockProgressLogger("Processed", bmgrLog),
msgChan: make(chan interface{}, cfg.MaxPeers*3),
headerList: list.New(),
quit: make(chan struct{}),
}
// Merge given checkpoints with the default ones unless they are disabled.
var checkpoints []chaincfg.Checkpoint
if !cfg.DisableCheckpoints {
checkpoints = mergeCheckpoints(s.chainParams.Checkpoints, cfg.addCheckpoints)
}
// Create a new block chain instance with the appropriate configuration.
var err error
bm.chain, err = blockchain.New(&blockchain.Config{
DB: s.db,
ChainParams: s.chainParams,
Checkpoints: checkpoints,
TimeSource: s.timeSource,
Notifications: bm.handleNotifyMsg,
SigCache: s.sigCache,
IndexManager: indexManager,
})
if err != nil {
return nil, err
}
best := bm.chain.BestSnapshot()
if !cfg.DisableCheckpoints {
// Initialize the next checkpoint based on the current height.
bm.nextCheckpoint = bm.findNextHeaderCheckpoint(best.Height)
if bm.nextCheckpoint != nil {
bm.resetHeaderState(&best.Hash, best.Height)
}
} else {
bmgrLog.Info("Checkpoints are disabled")
}
return &bm, nil
}
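// Illustrative sketch, not part of the original file: the checkpoint
// initialization above relies on findNextHeaderCheckpoint returning the
// first checkpoint whose height is still above the current best height,
// or nil once the final checkpoint has been reached. A minimal helper of
// that shape, assuming a height-sorted []chaincfg.Checkpoint slice, might
// look like the following; the real method lives elsewhere in this file
// and works against the chain's configured checkpoints.
func nextCheckpointAfterSketch(checkpoints []chaincfg.Checkpoint, height int32) *chaincfg.Checkpoint {
	for i := range checkpoints {
		// Return the first checkpoint that is still ahead of the
		// current best height.
		if checkpoints[i].Height > height {
			return &checkpoints[i]
		}
	}

	// The final checkpoint has already been reached or passed, so
	// headers-first mode has nothing left to verify.
	return nil
}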
// removeRegressionDB removes the existing regression test database if running
// in regression test mode and it already exists.
func removeRegressionDB(dbPath string) error {
// Don't do anything if not in regression test mode.
if !cfg.RegressionTest {
return nil
}
// Remove the old regression test database if it already exists.
fi, err := os.Stat(dbPath)
if err == nil {
btcdLog.Infof("Removing regression test database from '%s'", dbPath)
if fi.IsDir() {
err := os.RemoveAll(dbPath)
if err != nil {
return err
}
} else {
err := os.Remove(dbPath)
if err != nil {
return err
}
}
}
return nil
}
// blockDbPath returns the path to the block database given a database type.
func blockDbPath(dbType string) string {
// The database name is based on the database type.
dbName := blockDbNamePrefix + "_" + dbType
if dbType == "sqlite" {
dbName = dbName + ".db"
}
dbPath := filepath.Join(cfg.DataDir, dbName)
return dbPath
}
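// Example (illustrative, not part of the original file): assuming
// blockDbNamePrefix is "blocks" and cfg.DataDir points at the mainnet data
// directory, blockDbPath("ffldb") yields ".../blocks_ffldb", while the
// legacy "sqlite" type gets a ".db" extension and yields
// ".../blocks_sqlite.db".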
// warnMultipeDBs shows a warning if multiple block database types are detected.
// This is not a situation most users want. It is, however, handy for
// development purposes to support multiple side-by-side databases.
func warnMultipeDBs() {
// This is intentionally not using the known db types which depend
// on the database types compiled into the binary since we want to
// detect legacy db types as well.
dbTypes := []string{"ffldb", "leveldb", "sqlite"}
duplicateDbPaths := make([]string, 0, len(dbTypes)-1)
for _, dbType := range dbTypes {
if dbType == cfg.DbType {
continue
}
// Store db path as a duplicate db if it exists.
dbPath := blockDbPath(dbType)
if fileExists(dbPath) {
duplicateDbPaths = append(duplicateDbPaths, dbPath)
}
}
// Warn if there are extra databases.
if len(duplicateDbPaths) > 0 {
selectedDbPath := blockDbPath(cfg.DbType)
btcdLog.Warnf("WARNING: There are multiple block chain databases "+
"using different database types.\nYou probably don't "+
"want to waste disk space by having more than one.\n"+
"Your current database is located at [%v].\nThe "+
"additional database is located at %v", selectedDbPath,
duplicateDbPaths)
}
}
// loadBlockDB loads (or creates when needed) the block database taking into
// account the selected database backend and returns a handle to it. It also
// contains additional logic such as warning the user if there are multiple
// databases which consume space on the file system and ensuring the regression
// test database is clean when in regression test mode.
func loadBlockDB() (database.DB, error) {
// The memdb backend does not have a file path associated with it, so
// handle it uniquely. We also don't want to worry about the multiple
// database type warnings when running with the memory database.
if cfg.DbType == "memdb" {
btcdLog.Infof("Creating block database in memory.")
db, err := database.Create(cfg.DbType)
if err != nil {
return nil, err
}
return db, nil
}
warnMultipeDBs()
// The database name is based on the database type.
dbPath := blockDbPath(cfg.DbType)
// The regression test is special in that it needs a clean database for
// each run, so remove it now if it already exists.
if err := removeRegressionDB(dbPath); err != nil {
	return nil, err
}
btcdLog.Infof("Loading block database from '%s'", dbPath)
db, err := database.Open(cfg.DbType, dbPath, activeNetParams.Net)
if err != nil {
// Return the error if it's not because the database doesn't
// exist.
if dbErr, ok := err.(database.Error); !ok || dbErr.ErrorCode !=
database.ErrDbDoesNotExist {
return nil, err
}
// Create the db if it does not exist.
err = os.MkdirAll(cfg.DataDir, 0700)
if err != nil {
return nil, err
}
db, err = database.Create(cfg.DbType, dbPath, activeNetParams.Net)
if err != nil {
return nil, err
}
}
btcdLog.Info("Block database loaded")
return db, nil
}
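// Illustrative usage sketch, not part of the original file: a caller such as
// the main startup path would typically open the database once and make sure
// it is closed again on shutdown. The function name below is an assumption
// used only for this example.
func runWithBlockDBSketch() error {
	db, err := loadBlockDB()
	if err != nil {
		return err
	}

	// Make sure the database is cleanly closed once the caller is done
	// with it, e.g. when the server shuts down.
	defer func() {
		btcdLog.Info("Gracefully shutting down the block database...")
		db.Close()
	}()

	// ... run the node against db ...
	return nil
}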