// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package netsync

import (
	"container/list"
	"math/rand"
	"net"
	"sync"
	"sync/atomic"
	"time"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/database"
	"github.com/btcsuite/btcd/mempool"
	peerpkg "github.com/btcsuite/btcd/peer"
	"github.com/btcsuite/btcd/wire"
	"github.com/btcsuite/btcutil"
)

const (
	// minInFlightBlocks is the minimum number of blocks that should be
	// in the request queue for headers-first mode before requesting
	// more.
	minInFlightBlocks = 10

	// maxRejectedTxns is the maximum number of rejected transaction
	// hashes to store in memory.
	maxRejectedTxns = 1000

	// maxRequestedBlocks is the maximum number of requested block
	// hashes to store in memory.
	maxRequestedBlocks = wire.MaxInvPerMsg

	// maxRequestedTxns is the maximum number of requested transaction
	// hashes to store in memory.
	maxRequestedTxns = wire.MaxInvPerMsg

	// maxStallDuration is the time after which we will disconnect our
	// current sync peer if we haven't made progress.
	maxStallDuration = 3 * time.Minute

	// stallSampleInterval is the interval at which we will check to see
	// if our sync has stalled.
	stallSampleInterval = 30 * time.Second
)

// zeroHash is the zero value hash (all zeros). It is defined as a convenience.
var zeroHash chainhash.Hash

// newPeerMsg signifies a newly connected peer to the block handler.
type newPeerMsg struct {
	peer *peerpkg.Peer
}

// blockMsg packages a bitcoin block message and the peer it came from together
// so the block handler has access to that information.
type blockMsg struct {
	block *btcutil.Block
	peer  *peerpkg.Peer
	reply chan struct{}
}

// invMsg packages a bitcoin inv message and the peer it came from together
// so the block handler has access to that information.
type invMsg struct {
	inv  *wire.MsgInv
	peer *peerpkg.Peer
}

// headersMsg packages a bitcoin headers message and the peer it came from
// together so the block handler has access to that information.
type headersMsg struct {
	headers *wire.MsgHeaders
	peer    *peerpkg.Peer
}

// notFoundMsg packages a bitcoin notfound message and the peer it came from
// together so the block handler has access to that information.
type notFoundMsg struct {
	notFound *wire.MsgNotFound
	peer     *peerpkg.Peer
}

// donePeerMsg signifies a newly disconnected peer to the block handler.
type donePeerMsg struct {
	peer *peerpkg.Peer
}

// txMsg packages a bitcoin tx message and the peer it came from together
// so the block handler has access to that information.
type txMsg struct {
	tx    *btcutil.Tx
	peer  *peerpkg.Peer
	reply chan struct{}
}

// getSyncPeerMsg is a message type to be sent across the message channel for
// retrieving the current sync peer.
type getSyncPeerMsg struct {
	reply chan int32
}

// processBlockResponse is a response sent to the reply channel of a
// processBlockMsg.
type processBlockResponse struct {
	isOrphan bool
	err      error
}

// processBlockMsg is a message type to be sent across the message channel
// for requesting that a block be processed. Note this call differs from
// blockMsg above in that blockMsg is intended for blocks that came from peers
// and have extra handling whereas this message essentially is just a
// concurrent safe way to call ProcessBlock on the internal block chain
// instance.
type processBlockMsg struct {
	block *btcutil.Block
	flags blockchain.BehaviorFlags
	reply chan processBlockResponse
}

// isCurrentMsg is a message type to be sent across the message channel for
// requesting whether or not the sync manager believes it is synced with the
// currently connected peers.
type isCurrentMsg struct {
	reply chan bool
}

// pauseMsg is a message type to be sent across the message channel for
// pausing the sync manager. This effectively provides the caller with
// exclusive access over the manager until a receive is performed on the
// unpause channel.
type pauseMsg struct {
	unpause <-chan struct{}
}

// headerNode is used as a node in a list of headers that are linked together
// between checkpoints.
type headerNode struct {
	height int32
	hash   *chainhash.Hash
}

// peerSyncState stores additional information that the SyncManager tracks
// about a peer.
type peerSyncState struct {
	syncCandidate   bool
	requestQueue    []*wire.InvVect
	requestedTxns   map[chainhash.Hash]struct{}
	requestedBlocks map[chainhash.Hash]struct{}
}

// limitAdd is a helper function for maps that require a maximum limit by
// evicting a random value if adding the new value would cause it to
// overflow the maximum allowed.
func limitAdd(m map[chainhash.Hash]struct{}, hash chainhash.Hash, limit int) {
	if len(m)+1 > limit {
		// Remove a random entry from the map. For most compilers, Go's
		// range statement iterates starting at a random item although
		// that is not 100% guaranteed by the spec. The iteration order
		// is not important here because an adversary would have to be
		// able to pull off preimage attacks on the hashing function in
		// order to target eviction of specific entries anyways.
		for txHash := range m {
			delete(m, txHash)
			break
		}
	}
	m[hash] = struct{}{}
}
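
// Illustrative usage sketch of limitAdd (not part of the original source):
// it maintains a bounded "seen" set, evicting one arbitrary existing entry
// before inserting once the limit would be exceeded:
//
//	seen := make(map[chainhash.Hash]struct{})
//	limitAdd(seen, txHash, maxRejectedTxns)
//	// len(seen) never exceeds maxRejectedTxns.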

// SyncManager is used to communicate block related messages with peers. The
// SyncManager is started by executing Start() in a goroutine. Once started,
// it selects peers to sync from and starts the initial block download. Once
// the chain is in sync, the SyncManager handles incoming block and header
// notifications and relays announcements of new blocks to peers.
type SyncManager struct {
|
2017-08-15 07:03:06 +02:00
|
|
|
peerNotifier PeerNotifier
|
|
|
|
started int32
|
|
|
|
shutdown int32
|
|
|
|
chain *blockchain.BlockChain
|
|
|
|
txMemPool *mempool.TxPool
|
|
|
|
chainParams *chaincfg.Params
|
|
|
|
progressLogger *blockProgressLogger
|
|
|
|
msgChan chan interface{}
|
|
|
|
wg sync.WaitGroup
|
|
|
|
quit chan struct{}
|
|
|
|
|
|
|
|
// These fields should only be accessed from the blockHandler thread
|
2019-04-16 05:35:45 +02:00
|
|
|
rejectedTxns map[chainhash.Hash]struct{}
|
|
|
|
requestedTxns map[chainhash.Hash]struct{}
|
|
|
|
requestedBlocks map[chainhash.Hash]struct{}
|
|
|
|
syncPeer *peerpkg.Peer
|
|
|
|
peerStates map[*peerpkg.Peer]*peerSyncState
|
|
|
|
lastProgressTime time.Time
|
2013-11-16 00:16:51 +01:00
|
|
|
|

	// The following fields are used for headers-first mode.
	headersFirstMode bool
	headerList       *list.List
	startHeader      *list.Element
	nextCheckpoint   *chaincfg.Checkpoint

	// An optional fee estimator.
	feeEstimator *mempool.FeeEstimator
}

// resetHeaderState sets the headers-first mode state to values appropriate for
// syncing from a new peer.
func (sm *SyncManager) resetHeaderState(newestHash *chainhash.Hash, newestHeight int32) {
	sm.headersFirstMode = false
	sm.headerList.Init()
	sm.startHeader = nil
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
|
|
|
|
// When there is a next checkpoint, add an entry for the latest known
|
|
|
|
// block into the header pool. This allows the next downloaded header
|
|
|
|
// to prove it links to the chain properly.
|
2017-08-24 23:46:59 +02:00
|
|
|
if sm.nextCheckpoint != nil {
|
2016-08-08 21:04:33 +02:00
|
|
|
node := headerNode{height: newestHeight, hash: newestHash}
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.headerList.PushBack(&node)
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
}
|
|
|
|
}

// findNextHeaderCheckpoint returns the next checkpoint after the passed height.
// It returns nil when there is not one either because the height is already
// later than the final checkpoint or some other reason such as disabled
// checkpoints.
func (sm *SyncManager) findNextHeaderCheckpoint(height int32) *chaincfg.Checkpoint {
	checkpoints := sm.chain.Checkpoints()
	if len(checkpoints) == 0 {
		return nil
	}

	// There is no next checkpoint if the height is already after the final
	// checkpoint.
	finalCheckpoint := &checkpoints[len(checkpoints)-1]
	if height >= finalCheckpoint.Height {
		return nil
	}

	// Find the next checkpoint.
	nextCheckpoint := finalCheckpoint
	for i := len(checkpoints) - 2; i >= 0; i-- {
		if height >= checkpoints[i].Height {
			break
		}
		nextCheckpoint = &checkpoints[i]
	}
	return nextCheckpoint
}
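The backwards scan above can be exercised in isolation. The following is a minimal, hypothetical sketch, not part of the package: a plain `checkpoint` struct stands in for `chaincfg.Checkpoint`, and `nextCheckpoint` mirrors the walk from the second-to-last checkpoint down to the first one at or below the current height.

```go
// Hypothetical, self-contained sketch of findNextHeaderCheckpoint's scan over
// a height-sorted checkpoint list.
package main

import "fmt"

// checkpoint is a stand-in for chaincfg.Checkpoint.
type checkpoint struct {
	Height int32
}

func nextCheckpoint(height int32, checkpoints []checkpoint) *checkpoint {
	if len(checkpoints) == 0 {
		return nil
	}

	// No next checkpoint once the final one has been reached or passed.
	final := &checkpoints[len(checkpoints)-1]
	if height >= final.Height {
		return nil
	}

	// Walk backwards; stop at the first checkpoint we have already passed.
	next := final
	for i := len(checkpoints) - 2; i >= 0; i-- {
		if height >= checkpoints[i].Height {
			break
		}
		next = &checkpoints[i]
	}
	return next
}

func main() {
	cps := []checkpoint{{100}, {500}, {1000}}
	fmt.Println(nextCheckpoint(0, cps).Height)   // 100
	fmt.Println(nextCheckpoint(250, cps).Height) // 500
	fmt.Println(nextCheckpoint(1000, cps))       // <nil>
}
```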

// startSync will choose the best peer among the available candidate peers to
// download/sync the blockchain from.  When syncing is already running, it
// simply returns.  It also examines the candidates for any which are no longer
// candidates and removes them as needed.
func (sm *SyncManager) startSync() {
	// Return now if we're already syncing.
	if sm.syncPeer != nil {
		return
	}

	// Once the segwit soft-fork package has activated, we only
	// want to sync from peers which are witness enabled to ensure
	// that we fully validate all blockchain data.
	segwitActive, err := sm.chain.IsDeploymentActive(chaincfg.DeploymentSegwit)
	if err != nil {
		log.Errorf("Unable to query for segwit soft-fork state: %v", err)
		return
	}

	best := sm.chain.BestSnapshot()
	var higherPeers, equalPeers []*peerpkg.Peer
	for peer, state := range sm.peerStates {
		if !state.syncCandidate {
			continue
		}

		if segwitActive && !peer.IsWitnessEnabled() {
			log.Debugf("peer %v not witness enabled, skipping", peer)
			continue
		}

		// Remove sync candidate peers that are no longer candidates due
		// to passing their latest known block.  NOTE: The < is
		// intentional as opposed to <=.  While technically the peer
		// doesn't have a later block when it's equal, it will likely
		// have one soon so it is a reasonable choice.  It also allows
		// the case where both are at 0 such as during regression test.
		if peer.LastBlock() < best.Height {
			state.syncCandidate = false
			continue
		}

		// If the peer is at the same height as us, we'll add it to a
		// set of backup peers in case we do not find one with a higher
		// height.  If we are synced up with all of our peers, all of
		// them will be in this set.
		if peer.LastBlock() == best.Height {
			equalPeers = append(equalPeers, peer)
			continue
		}

		// This peer has a height greater than our own, we'll consider
		// it in the set of better peers from which we'll randomly
		// select.
		higherPeers = append(higherPeers, peer)
	}

	// Pick randomly from the set of peers greater than our block height,
	// falling back to a random peer of the same height if none are greater.
	//
	// TODO(conner): Use a better algorithm for ranking peers based on
	// observed metrics and/or sync in parallel.
	var bestPeer *peerpkg.Peer
	switch {
	case len(higherPeers) > 0:
		bestPeer = higherPeers[rand.Intn(len(higherPeers))]

	case len(equalPeers) > 0:
		bestPeer = equalPeers[rand.Intn(len(equalPeers))]
	}

	// Start syncing from the best peer if one was selected.
	if bestPeer != nil {
		// Clear the requestedBlocks if the sync peer changes, otherwise
		// we may ignore blocks we need that the last sync peer failed
		// to send.
		sm.requestedBlocks = make(map[chainhash.Hash]struct{})

		locator, err := sm.chain.LatestBlockLocator()
		if err != nil {
			log.Errorf("Failed to get block locator for the "+
				"latest block: %v", err)
			return
		}

		log.Infof("Syncing to block height %d from peer %v",
			bestPeer.LastBlock(), bestPeer.Addr())

		// When the current height is less than a known checkpoint we
		// can use block headers to learn about which blocks comprise
		// the chain up to the checkpoint and perform less validation
		// for them.  This is possible since each header contains the
		// hash of the previous header and a merkle root.  Therefore if
		// we validate all of the received headers link together
		// properly and the checkpoint hashes match, we can be sure the
		// hashes for the blocks in between are accurate.  Further, once
		// the full blocks are downloaded, the merkle root is computed
		// and compared against the value in the header which proves the
		// full block hasn't been tampered with.
		//
		// Once we have passed the final checkpoint, or checkpoints are
		// disabled, use standard inv messages to learn about the blocks
		// and fully validate them.  Finally, regression test mode does
		// not support the headers-first approach so do normal block
		// downloads when in regression test mode.
		if sm.nextCheckpoint != nil &&
			best.Height < sm.nextCheckpoint.Height &&
			sm.chainParams != &chaincfg.RegressionNetParams {

			bestPeer.PushGetHeadersMsg(locator, sm.nextCheckpoint.Hash)
			sm.headersFirstMode = true
			log.Infof("Downloading headers for blocks %d to "+
				"%d from peer %s", best.Height+1,
				sm.nextCheckpoint.Height, bestPeer.Addr())
		} else {
			bestPeer.PushGetBlocksMsg(locator, &zeroHash)
		}
		sm.syncPeer = bestPeer

		// Reset the last progress time now that we have a non-nil
		// syncPeer to avoid instantly detecting it as stalled in the
		// event the progress time hasn't been updated recently.
		sm.lastProgressTime = time.Now()
	} else {
		log.Warnf("No sync peer candidates available")
	}
}
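The selection policy above, preferring a random peer strictly ahead of our tip and falling back to a random peer at equal height, can be sketched independently. In this hypothetical, self-contained version, plain `int32` heights stand in for `*peerpkg.Peer` values and `pickSyncHeight` is an illustrative helper, not part of the package:

```go
// Hypothetical sketch of startSync's peer-selection policy using bare heights.
package main

import (
	"fmt"
	"math/rand"
)

// pickSyncHeight partitions candidate heights against our best height, then
// picks uniformly from the strictly-higher set, falling back to the equal set.
// The second return value reports whether any candidate qualified.
func pickSyncHeight(best int32, candidates []int32) (int32, bool) {
	var higher, equal []int32
	for _, h := range candidates {
		if h > best {
			higher = append(higher, h)
		} else if h == best {
			equal = append(equal, h)
		}
		// Heights below ours are dropped, mirroring how startSync
		// clears the syncCandidate flag for such peers.
	}

	switch {
	case len(higher) > 0:
		return higher[rand.Intn(len(higher))], true
	case len(equal) > 0:
		return equal[rand.Intn(len(equal))], true
	}
	return 0, false
}

func main() {
	h, ok := pickSyncHeight(100, []int32{90, 100, 150, 200})
	fmt.Println(h > 100, ok) // always picks from {150, 200}
}
```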

// isSyncCandidate returns whether or not the peer is a candidate to consider
// syncing from.
func (sm *SyncManager) isSyncCandidate(peer *peerpkg.Peer) bool {
	// Typically a peer is not a candidate for sync if it's not a full node,
	// however regression test is special in that the regression tool is
	// not a full node and still needs to be considered a sync candidate.
	if sm.chainParams == &chaincfg.RegressionNetParams {
		// The peer is not a candidate if it's not coming from localhost
		// or the hostname can't be determined for some reason.
		host, _, err := net.SplitHostPort(peer.Addr())
		if err != nil {
			return false
		}

		if host != "127.0.0.1" && host != "localhost" {
			return false
		}
	} else {
		// The peer is not a candidate for sync if it's not a full
		// node.  Additionally, if the segwit soft-fork package has
		// activated, then the peer must also be upgraded.
		segwitActive, err := sm.chain.IsDeploymentActive(chaincfg.DeploymentSegwit)
		if err != nil {
			log.Errorf("Unable to query for segwit "+
				"soft-fork state: %v", err)
		}
		nodeServices := peer.Services()
		if nodeServices&wire.SFNodeNetwork != wire.SFNodeNetwork ||
			(segwitActive && !peer.IsWitnessEnabled()) {
			return false
		}
	}

	// Candidate if all checks passed.
	return true
}

// handleNewPeerMsg deals with new peers that have signalled they may
// be considered as a sync peer (they have already successfully negotiated).  It
// also starts syncing if needed.  It is invoked from the syncHandler goroutine.
func (sm *SyncManager) handleNewPeerMsg(peer *peerpkg.Peer) {
	// Ignore if in the process of shutting down.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}

	log.Infof("New valid peer %s (%s)", peer, peer.UserAgent())

	// Initialize the peer state.
	isSyncCandidate := sm.isSyncCandidate(peer)
	sm.peerStates[peer] = &peerSyncState{
		syncCandidate:   isSyncCandidate,
		requestedTxns:   make(map[chainhash.Hash]struct{}),
		requestedBlocks: make(map[chainhash.Hash]struct{}),
	}

	// Start syncing by choosing the best candidate if needed.
	if isSyncCandidate && sm.syncPeer == nil {
		sm.startSync()
	}
}

// handleStallSample will switch to a new sync peer if the current one has
// stalled. This is detected by comparing the last progress timestamp with
// the current time, and disconnecting the peer if we stalled before reaching
// their highest advertised block.
func (sm *SyncManager) handleStallSample() {
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}

	// If we don't have an active sync peer, exit early.
	if sm.syncPeer == nil {
		return
	}

	// If the stall timeout has not elapsed, exit early.
	if time.Since(sm.lastProgressTime) <= maxStallDuration {
		return
	}

	// Check to see that the peer's sync state exists.
	state, exists := sm.peerStates[sm.syncPeer]
	if !exists {
		return
	}

	sm.clearRequestedState(state)

	disconnectSyncPeer := sm.shouldDCStalledSyncPeer()
	sm.updateSyncPeer(disconnectSyncPeer)
}

// shouldDCStalledSyncPeer determines whether or not we should disconnect a
// stalled sync peer. If the peer has stalled and its reported height is greater
// than our own best height, we will disconnect it. Otherwise, we will keep the
// peer connected in case we are already at tip.
func (sm *SyncManager) shouldDCStalledSyncPeer() bool {
	lastBlock := sm.syncPeer.LastBlock()
	startHeight := sm.syncPeer.StartingHeight()

	var peerHeight int32
	if lastBlock > startHeight {
		peerHeight = lastBlock
	} else {
		peerHeight = startHeight
	}

	// If we've stalled out yet the sync peer reports having more blocks for
	// us we will disconnect them. This allows us at tip to not disconnect
	// peers when we are equal or they temporarily lag behind us.
	best := sm.chain.BestSnapshot()
	return peerHeight > best.Height
}
|
|
|
|
|
2013-09-03 23:55:14 +02:00
|
|
|
// handleDonePeerMsg deals with peers that have signalled they are done. It
// removes the peer as a candidate for syncing and in the case where it was
// the current sync peer, attempts to select a new best peer to sync from. It
// is invoked from the syncHandler goroutine.
func (sm *SyncManager) handleDonePeerMsg(peer *peerpkg.Peer) {
	state, exists := sm.peerStates[peer]
	if !exists {
		log.Warnf("Received done peer message for unknown peer %s", peer)
		return
	}

	// Remove the peer from the list of candidate peers.
	delete(sm.peerStates, peer)

	log.Infof("Lost peer %s", peer)

	sm.clearRequestedState(state)

	if peer == sm.syncPeer {
		// Update the sync peer. The server has already disconnected the
		// peer before signaling to the sync manager.
		sm.updateSyncPeer(false)
	}
}

// clearRequestedState wipes all expected transactions and blocks from the sync
// manager's requested maps that were requested under a peer's sync state. This
// allows them to be rerequested by a subsequent sync peer.
func (sm *SyncManager) clearRequestedState(state *peerSyncState) {
	// Remove requested transactions from the global map so that they will
	// be fetched from elsewhere next time we get an inv.
	for txHash := range state.requestedTxns {
		delete(sm.requestedTxns, txHash)
	}

	// Remove requested blocks from the global map so that they will be
	// fetched from elsewhere next time we get an inv.
	// TODO: we could possibly here check which peers have these blocks
	// and request them now to speed things up a little.
	for blockHash := range state.requestedBlocks {
		delete(sm.requestedBlocks, blockHash)
	}
}

// updateSyncPeer chooses a new sync peer to replace the current one. If
// dcSyncPeer is true, this method will also disconnect the current sync peer.
// If we are in headers-first mode, any header state related to prefetching is
// also reset in preparation for the next sync peer.
func (sm *SyncManager) updateSyncPeer(dcSyncPeer bool) {
	log.Debugf("Updating sync peer, no progress for: %v",
		time.Since(sm.lastProgressTime))

	// First, disconnect the current sync peer if requested.
	if dcSyncPeer {
		sm.syncPeer.Disconnect()
	}

	// Reset any header state before we choose our next active sync peer.
	if sm.headersFirstMode {
		best := sm.chain.BestSnapshot()
		sm.resetHeaderState(&best.Hash, best.Height)
	}

	sm.syncPeer = nil
	sm.startSync()
}

// handleTxMsg handles transaction messages from all peers.
func (sm *SyncManager) handleTxMsg(tmsg *txMsg) {
	peer := tmsg.peer
	state, exists := sm.peerStates[peer]
	if !exists {
		log.Warnf("Received tx message from unknown peer %s", peer)
		return
	}

	// NOTE: BitcoinJ, and possibly other wallets, don't follow the spec of
	// sending an inventory message and allowing the remote peer to decide
	// whether or not they want to request the transaction via a getdata
	// message. Unfortunately, the reference implementation permits
	// unrequested data, so it has allowed wallets that don't follow the
	// spec to proliferate. While this is not ideal, there is no check here
	// to disconnect peers for sending unsolicited transactions to provide
	// interoperability.
	txHash := tmsg.tx.Hash()

	// Ignore transactions that we have already rejected. Do not
	// send a reject message here because if the transaction was already
	// rejected, the transaction was unsolicited.
	if _, exists = sm.rejectedTxns[*txHash]; exists {
		log.Debugf("Ignoring unsolicited previously rejected "+
			"transaction %v from %s", txHash, peer)
		return
	}

	// Process the transaction to include validation, insertion in the
	// memory pool, orphan handling, etc.
	acceptedTxs, err := sm.txMemPool.ProcessTransaction(tmsg.tx,
		true, true, mempool.Tag(peer.ID()))

	// Remove transaction from request maps. Either the mempool/chain
	// already knows about it and as such we shouldn't have any more
	// instances of trying to fetch it, or we failed to insert and thus
	// we'll retry next time we get an inv.
	delete(state.requestedTxns, *txHash)
	delete(sm.requestedTxns, *txHash)

	if err != nil {
		// Do not request this transaction again until a new block
		// has been processed.
		limitAdd(sm.rejectedTxns, *txHash, maxRejectedTxns)

		// When the error is a rule error, it means the transaction was
		// simply rejected as opposed to something actually going wrong,
		// so log it as such. Otherwise, something really did go wrong,
		// so log it as an actual error.
		if _, ok := err.(mempool.RuleError); ok {
			log.Debugf("Rejected transaction %v from %s: %v",
				txHash, peer, err)
		} else {
			log.Errorf("Failed to process transaction %v: %v",
				txHash, err)
		}

		// Convert the error into an appropriate reject message and
		// send it.
		code, reason := mempool.ErrToRejectErr(err)
		peer.PushRejectMsg(wire.CmdTx, code, reason, txHash, false)
		return
	}

	sm.peerNotifier.AnnounceNewTransactions(acceptedTxs)
}

// current returns true if we believe we are synced with our peers, false if we
// still have blocks to check.
func (sm *SyncManager) current() bool {
	if !sm.chain.IsCurrent() {
		return false
	}

	// If blockChain thinks we are current and we have no syncPeer it
	// is probably right.
	if sm.syncPeer == nil {
		return true
	}

	// No matter what chain thinks, if we are below the block we are syncing
	// to we are not current.
	if sm.chain.BestSnapshot().Height < sm.syncPeer.LastBlock() {
		return false
	}
	return true
}

// handleBlockMsg handles block messages from all peers.
func (sm *SyncManager) handleBlockMsg(bmsg *blockMsg) {
	peer := bmsg.peer
	state, exists := sm.peerStates[peer]
	if !exists {
		log.Warnf("Received block message from unknown peer %s", peer)
		return
	}

	// If we didn't ask for this block then the peer is misbehaving.
	blockHash := bmsg.block.Hash()
	if _, exists = state.requestedBlocks[*blockHash]; !exists {
		// The regression test intentionally sends some blocks twice
		// to test duplicate block insertion fails. Don't disconnect
		// the peer or ignore the block when we're in regression test
		// mode in this case so the chain code is actually fed the
		// duplicate blocks.
		if sm.chainParams != &chaincfg.RegressionNetParams {
			log.Warnf("Got unrequested block %v from %s -- "+
				"disconnecting", blockHash, peer.Addr())
			peer.Disconnect()
			return
		}
	}

	// When in headers-first mode, if the block matches the hash of the
	// first header in the list of headers that are being fetched, it's
	// eligible for less validation since the headers have already been
	// verified to link together and are valid up to the next checkpoint.
	// Also, remove the list entry for all blocks except the checkpoint
	// since it is needed to verify the next round of headers links
	// properly.
	isCheckpointBlock := false
	behaviorFlags := blockchain.BFNone
	if sm.headersFirstMode {
		firstNodeEl := sm.headerList.Front()
		if firstNodeEl != nil {
			firstNode := firstNodeEl.Value.(*headerNode)
			if blockHash.IsEqual(firstNode.hash) {
				behaviorFlags |= blockchain.BFFastAdd
				if firstNode.hash.IsEqual(sm.nextCheckpoint.Hash) {
					isCheckpointBlock = true
				} else {
					sm.headerList.Remove(firstNodeEl)
				}
			}
		}
	}

	// Remove block from request maps. Either chain will know about it and
	// so we shouldn't have any more instances of trying to fetch it, or we
	// will fail the insert and thus we'll retry next time we get an inv.
	delete(state.requestedBlocks, *blockHash)
	delete(sm.requestedBlocks, *blockHash)

	// Process the block to include validation, best chain selection, orphan
	// handling, etc.
	_, isOrphan, err := sm.chain.ProcessBlock(bmsg.block, behaviorFlags)
	if err != nil {
		// When the error is a rule error, it means the block was simply
		// rejected as opposed to something actually going wrong, so log
		// it as such. Otherwise, something really did go wrong, so log
		// it as an actual error.
		if _, ok := err.(blockchain.RuleError); ok {
			log.Infof("Rejected block %v from %s: %v", blockHash,
				peer, err)
		} else {
			log.Errorf("Failed to process block %v: %v",
				blockHash, err)
		}
		if dbErr, ok := err.(database.Error); ok && dbErr.ErrorCode ==
			database.ErrCorruption {
			panic(dbErr)
		}

		// Convert the error into an appropriate reject message and
		// send it.
		code, reason := mempool.ErrToRejectErr(err)
		peer.PushRejectMsg(wire.CmdBlock, code, reason, blockHash, false)
		return
	}

	// Meta-data about the new block this peer is reporting. We use this
	// below to update this peer's latest block height and the heights of
	// other peers based on their last announced block hash. This allows us
	// to dynamically update the block heights of peers, avoiding stale
	// heights when looking for a new sync peer. Upon acceptance of a block
	// or recognition of an orphan, we also use this information to update
	// the block heights over other peers whose invs may have been ignored
	// if we are actively syncing while the chain is not yet current or
	// who may have lost the lock announcement race.
	var heightUpdate int32
	var blkHashUpdate *chainhash.Hash

	// Request the parents for the orphan block from the peer that sent it.
	if isOrphan {
		// We've just received an orphan block from a peer. In order
		// to update the height of the peer, we try to extract the
		// block height from the scriptSig of the coinbase transaction.
		// Extraction is only attempted if the block's version is
		// high enough (ver 2+).
		header := &bmsg.block.MsgBlock().Header
		if blockchain.ShouldHaveSerializedBlockHeight(header) {
			coinbaseTx := bmsg.block.Transactions()[0]
			cbHeight, err := blockchain.ExtractCoinbaseHeight(coinbaseTx)
			if err != nil {
				log.Warnf("Unable to extract height from "+
					"coinbase tx: %v", err)
			} else {
				log.Debugf("Extracted height of %v from "+
					"orphan block", cbHeight)
				heightUpdate = cbHeight
				blkHashUpdate = blockHash
			}
		}

		orphanRoot := sm.chain.GetOrphanRoot(blockHash)
		locator, err := sm.chain.LatestBlockLocator()
		if err != nil {
			log.Warnf("Failed to get block locator for the "+
				"latest block: %v", err)
		} else {
			peer.PushGetBlocksMsg(locator, orphanRoot)
		}
	} else {
		if peer == sm.syncPeer {
			sm.lastProgressTime = time.Now()
		}

		// When the block is not an orphan, log information about it and
		// update the chain state.
		sm.progressLogger.LogBlockHeight(bmsg.block)

		// Update this peer's latest block height, for future
		// potential sync node candidacy.
		best := sm.chain.BestSnapshot()
		heightUpdate = best.Height
		blkHashUpdate = &best.Hash

		// Clear the rejected transactions.
		sm.rejectedTxns = make(map[chainhash.Hash]struct{})
	}

	// Update the block height for this peer. But only send a message to
	// the server for updating peer heights if this is an orphan or our
	// chain is "current". This avoids sending a spammy amount of messages
	// if we're syncing the chain from scratch.
	if blkHashUpdate != nil && heightUpdate != 0 {
		peer.UpdateLastBlockHeight(heightUpdate)
		if isOrphan || sm.current() {
			go sm.peerNotifier.UpdatePeerHeights(blkHashUpdate,
				heightUpdate, peer)
		}
	}

// Nothing more to do if we aren't in headers-first mode.
|
2017-08-24 23:46:59 +02:00
|
|
|
if !sm.headersFirstMode {
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
return
|
|
|
|
}

	// This is headers-first mode, so if the block is not a checkpoint,
	// request more blocks using the header list when the request queue is
	// getting short.
	if !isCheckpointBlock {
		if sm.startHeader != nil &&
			len(state.requestedBlocks) < minInFlightBlocks {
			sm.fetchHeaderBlocks()
		}
		return
	}

	// This is headers-first mode and the block is a checkpoint. When
	// there is a next checkpoint, get the next round of headers by asking
	// for headers starting from the block after this one up to the next
	// checkpoint.
	prevHeight := sm.nextCheckpoint.Height
	prevHash := sm.nextCheckpoint.Hash
	sm.nextCheckpoint = sm.findNextHeaderCheckpoint(prevHeight)
	if sm.nextCheckpoint != nil {
		locator := blockchain.BlockLocator([]*chainhash.Hash{prevHash})
		err := peer.PushGetHeadersMsg(locator, sm.nextCheckpoint.Hash)
		if err != nil {
			log.Warnf("Failed to send getheaders message to "+
				"peer %s: %v", peer.Addr(), err)
			return
		}
		log.Infof("Downloading headers for blocks %d to %d from "+
			"peer %s", prevHeight+1, sm.nextCheckpoint.Height,
			sm.syncPeer.Addr())
		return
	}

	// This is headers-first mode, the block is a checkpoint, and there are
	// no more checkpoints, so switch to normal mode by requesting blocks
	// from the block after this one up to the end of the chain (zero hash).
	sm.headersFirstMode = false
	sm.headerList.Init()
	log.Infof("Reached the final checkpoint -- switching to normal mode")
	locator := blockchain.BlockLocator([]*chainhash.Hash{blockHash})
	err = peer.PushGetBlocksMsg(locator, &zeroHash)
	if err != nil {
		log.Warnf("Failed to send getblocks message to peer %s: %v",
			peer.Addr(), err)
		return
	}
}
|
|
|
|
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// fetchHeaderBlocks creates and sends a request to the syncPeer for the next
|
|
|
|
// list of blocks to be downloaded based on the current list of headers.
|
2017-08-24 23:46:59 +02:00
|
|
|
func (sm *SyncManager) fetchHeaderBlocks() {
|
2014-03-13 14:45:41 +01:00
|
|
|
// Nothing to do if there is no start header.
|
2017-08-24 23:46:59 +02:00
|
|
|
if sm.startHeader == nil {
|
2017-08-24 23:09:28 +02:00
|
|
|
log.Warnf("fetchHeaderBlocks called with no start header")
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
  downloaded and kept in memory
- If the node serving up the headers was serving an invalid
  chain, it wouldn't be detected until downloading a large number of
  headers
- When an invalid checkpoint was detected, no action was taken to
  recover, which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be, as
  merely keeping track of the hashes and heights is enough to prove they
  properly link together and checkpoints match
- There was no logging when headers were being downloaded, so it could
  appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered, which
  meant it was possible to inadvertently download the same headers twice
  only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior to each sync peer
  selection to allow it to resume headers-first mode starting from the
  known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
  from the current known block height up to the next checkpoint. This has
  several desirable properties:
  - The amount of memory required is bounded by the maximum distance
    between two checkpoints rather than the entire length of the chain
  - A node serving up an invalid chain is detected very quickly and with
    little work
  - When an invalid checkpoint is detected, the headers are simply
    discarded and the peer is disconnected for serving an invalid chain
  - When the sync peer disconnects, all current headers are thrown away
    and, due to the aforementioned resume code, when a new sync peer
    is selected, headers-first mode will continue from the last known good
    block
- In addition to reduced memory usage from only keeping information about
  headers between two checkpoints, the only information now kept in memory
  about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
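The bounded-memory design described above (track only hash and height per header, verify the batch against the next checkpoint, and discard everything on a mismatch) can be sketched as follows. This is an illustrative stand-alone sketch under assumed names; `headerNode` here is a simplified stand-in and `trackHeaders` is a hypothetical helper, not the actual btcd implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// headerNode mirrors the minimal per-header state kept in memory during a
// headers-first sync: just the height and hash, not the full block header.
type headerNode struct {
	height int32
	hash   [32]byte
}

// trackHeaders appends (height, hash) pairs to a list until the checkpoint
// height is reached, then reports whether the hash at that height matches
// the checkpoint. On a mismatch the caller would discard the list and
// disconnect the peer; memory use is bounded by the checkpoint interval.
func trackHeaders(headers []headerNode, cpHeight int32, cpHash [32]byte) (*list.List, bool) {
	l := list.New()
	for _, h := range headers {
		node := h // copy so each list element has its own storage
		l.PushBack(&node)
		if h.height == cpHeight {
			return l, h.hash == cpHash
		}
	}
	// Checkpoint not reached yet; nothing to verify.
	return l, false
}

func main() {
	cp := [32]byte{0xab}
	headers := []headerNode{{height: 1}, {height: 2, hash: cp}}
	l, ok := trackHeaders(headers, 2, cp)
	fmt.Println(l.Len(), ok)
}
```

A peer serving a bogus chain fails the checkpoint comparison as soon as the batch reaches the checkpoint height, rather than after the entire chain of headers has been downloaded.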
2014-01-30 18:29:02 +01:00
|
|
|
return
|
|
|
|
}
|
|
|
|
|
|
|
|
// Build up a getdata request for the list of blocks the headers
|
2015-02-05 22:16:39 +01:00
|
|
|
// describe. The size hint will be limited to wire.MaxInvPerMsg by
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// the function, so no need to double check it here.
|
2017-08-24 23:46:59 +02:00
|
|
|
gdmsg := wire.NewMsgGetDataSizeHint(uint(sm.headerList.Len()))
|
2014-01-29 01:53:25 +01:00
|
|
|
numRequested := 0
|
2017-08-24 23:46:59 +02:00
|
|
|
for e := sm.startHeader; e != nil; e = e.Next() {
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
node, ok := e.Value.(*headerNode)
|
2014-01-29 01:53:25 +01:00
|
|
|
if !ok {
|
2017-08-24 23:09:28 +02:00
|
|
|
log.Warn("Header list node type is not a headerNode")
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
continue
|
2014-01-29 01:53:25 +01:00
|
|
|
}
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
|
2016-08-08 21:04:33 +02:00
|
|
|
iv := wire.NewInvVect(wire.InvTypeBlock, node.hash)
|
2017-08-24 23:46:59 +02:00
|
|
|
haveInv, err := sm.haveInventory(iv)
|
2014-07-07 19:07:33 +02:00
|
|
|
if err != nil {
|
2017-08-24 23:09:28 +02:00
|
|
|
log.Warnf("Unexpected failure when checking for "+
|
2014-07-07 19:07:33 +02:00
|
|
|
"existing inventory during header block "+
|
|
|
|
"fetch: %v", err)
|
|
|
|
}
|
|
|
|
if !haveInv {
|
2017-08-24 23:46:59 +02:00
|
|
|
syncPeerState := sm.peerStates[sm.syncPeer]
|
2017-08-15 07:03:06 +02:00
|
|
|
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.requestedBlocks[*node.hash] = struct{}{}
|
2017-08-15 07:03:06 +02:00
|
|
|
syncPeerState.requestedBlocks[*node.hash] = struct{}{}
|
2016-10-19 01:54:55 +02:00
|
|
|
|
|
|
|
// If we're fetching from a witness enabled peer
|
|
|
|
// post-fork, then ensure that we receive all the
|
|
|
|
// witness data in the blocks.
|
2017-08-24 23:46:59 +02:00
|
|
|
if sm.syncPeer.IsWitnessEnabled() {
|
2016-10-19 01:54:55 +02:00
|
|
|
iv.Type = wire.InvTypeWitnessBlock
|
|
|
|
}
|
|
|
|
|
2014-01-29 01:53:25 +01:00
|
|
|
gdmsg.AddInvVect(iv)
|
|
|
|
numRequested++
|
|
|
|
}
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.startHeader = e.Next()
|
2015-02-05 22:16:39 +01:00
|
|
|
if numRequested >= wire.MaxInvPerMsg {
|
2014-01-29 01:53:25 +01:00
|
|
|
break
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if len(gdmsg.InvList) > 0 {
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.syncPeer.QueueMessage(gdmsg, nil)
|
2014-01-29 01:53:25 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-03-21 19:01:49 +01:00
|
|
|
// handleHeadersMsg handles block header messages from all peers. Headers are
|
|
|
|
// requested when performing a headers-first sync.
|
2017-08-24 23:46:59 +02:00
|
|
|
func (sm *SyncManager) handleHeadersMsg(hmsg *headersMsg) {
|
2017-08-15 07:03:06 +02:00
|
|
|
peer := hmsg.peer
|
2017-08-24 23:46:59 +02:00
|
|
|
_, exists := sm.peerStates[peer]
|
2017-08-15 07:03:06 +02:00
|
|
|
if !exists {
|
2017-08-24 23:09:28 +02:00
|
|
|
log.Warnf("Received headers message from unknown peer %s", peer)
|
2017-08-15 07:03:06 +02:00
|
|
|
return
|
|
|
|
}
|
|
|
|
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// The remote peer is misbehaving if we didn't request headers.
|
|
|
|
msg := hmsg.headers
|
|
|
|
numHeaders := len(msg.Headers)
|
2017-08-24 23:46:59 +02:00
|
|
|
if !sm.headersFirstMode {
|
2017-08-24 23:09:28 +02:00
|
|
|
log.Warnf("Got %d unrequested headers from %s -- "+
|
2017-08-15 07:03:06 +02:00
|
|
|
"disconnecting", numHeaders, peer.Addr())
|
|
|
|
peer.Disconnect()
|
2014-01-29 01:53:25 +01:00
|
|
|
return
|
|
|
|
}
|
|
|
|
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// Nothing to do for an empty headers message.
|
|
|
|
if numHeaders == 0 {
|
|
|
|
return
|
2014-01-29 01:53:25 +01:00
|
|
|
}
|
|
|
|
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// Process all of the received headers ensuring each one connects to the
|
|
|
|
// previous and that checkpoints match.
|
|
|
|
receivedCheckpoint := false
|
2016-08-08 21:04:33 +02:00
|
|
|
var finalHash *chainhash.Hash
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
for _, blockHeader := range msg.Headers {
|
2016-08-08 21:04:33 +02:00
|
|
|
blockHash := blockHeader.BlockHash()
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
finalHash = &blockHash
|
|
|
|
|
|
|
|
// Ensure there is a previous header to compare against.
|
2017-08-24 23:46:59 +02:00
|
|
|
prevNodeEl := sm.headerList.Back()
		if prevNodeEl == nil {
			log.Warnf("Header list does not contain a previous " +
				"element as expected -- disconnecting peer")
			peer.Disconnect()
			return
		}
		// Ensure the header properly connects to the previous one and
		// add it to the list of headers.
		node := headerNode{hash: &blockHash}
		prevNode := prevNodeEl.Value.(*headerNode)
		if prevNode.hash.IsEqual(&blockHeader.PrevBlock) {
			node.height = prevNode.height + 1
			e := sm.headerList.PushBack(&node)
			if sm.startHeader == nil {
				sm.startHeader = e
			}
		} else {
			log.Warnf("Received block header that does not "+
				"properly connect to the chain from peer %s "+
				"-- disconnecting", peer.Addr())
			peer.Disconnect()
			return
		}
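The connect check above reduces to verifying that each header's `PrevBlock` field references the hash of the header that precedes it. A self-contained sketch of that invariant, using a hypothetical pared-down `hdr` type in place of `wire.BlockHeader` and string hashes in place of `chainhash.Hash`:

```go
package main

import "fmt"

// hdr is a pared-down block header carrying only the fields the connect
// check relies on (illustrative; the real type is wire.BlockHeader).
type hdr struct {
	hash      string
	prevBlock string
}

// connects reports whether every header's prevBlock references the hash
// of the header before it, i.e. whether the sequence links correctly.
func connects(headers []hdr) bool {
	for i := 1; i < len(headers); i++ {
		if headers[i].prevBlock != headers[i-1].hash {
			return false
		}
	}
	return true
}

func main() {
	good := []hdr{{"a", ""}, {"b", "a"}, {"c", "b"}}
	bad := []hdr{{"a", ""}, {"b", "a"}, {"c", "x"}}
	fmt.Println(connects(good), connects(bad)) // true false
}
```

In the real code the check is incremental: only the new header is compared against the list tail, and a failed comparison disconnects the peer rather than returning false.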

		// Verify the header at the next checkpoint height matches.
		if node.height == sm.nextCheckpoint.Height {
			if node.hash.IsEqual(sm.nextCheckpoint.Hash) {
				receivedCheckpoint = true
				log.Infof("Verified downloaded block "+
					"header against checkpoint at height "+
					"%d/hash %s", node.height, node.hash)
			} else {
				log.Warnf("Block header at height %d/hash "+
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
"%s from peer %s does NOT match "+
|
|
|
|
"expected checkpoint hash of %s -- "+
|
|
|
|
"disconnecting", node.height,
|
2017-08-15 07:03:06 +02:00
|
|
|
node.hash, peer.Addr(),
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.nextCheckpoint.Hash)
|
2017-08-15 07:03:06 +02:00
|
|
|
peer.Disconnect()
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
return
|
2014-01-29 01:53:25 +01:00
|
|
|
}
|
Rework and improve headers-first mode.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and a few less than ideal
characteristics. This commit improves the heaers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to provde they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant is was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between to checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnets, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
break
|
2014-01-29 01:53:25 +01:00
|
|
|
}
|
|
|
|
}
	// When this header is a checkpoint, switch to fetching the blocks for
	// all of the headers since the last checkpoint.
	if receivedCheckpoint {
		// Since the first entry of the list is always the final block
		// that is already in the database and is only used to ensure
		// the next header links properly, it must be removed before
		// fetching the blocks.
		sm.headerList.Remove(sm.headerList.Front())
		log.Infof("Received %v block headers: Fetching blocks",
			sm.headerList.Len())
		sm.progressLogger.SetLastLogTime(time.Now())
		sm.fetchHeaderBlocks()
		return
	}

	// This header is not a checkpoint, so request the next batch of
	// headers starting from the latest known header and ending with the
	// next checkpoint.
	locator := blockchain.BlockLocator([]*chainhash.Hash{finalHash})
	err := peer.PushGetHeadersMsg(locator, sm.nextCheckpoint.Hash)
	if err != nil {
		log.Warnf("Failed to send getheaders message to "+
			"peer %s: %v", peer.Addr(), err)
		return
	}
}
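
// Illustration (hypothetical heights, sketch only): the headers-first flow
// above advances in checkpoint-bounded windows.  A node whose latest known
// header is at height 100000 with the next checkpoint at height 105000
// repeatedly issues requests of the form
//
//	locator := blockchain.BlockLocator([]*chainhash.Hash{lastKnownHash})
//	peer.PushGetHeadersMsg(locator, nextCheckpointHash)
//
// re-anchoring the locator at the newest verified header after each reply,
// until the checkpoint header itself arrives and fetchHeaderBlocks takes
// over to download the corresponding blocks.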
// handleNotFoundMsg handles notfound messages from all peers.
func (sm *SyncManager) handleNotFoundMsg(nfmsg *notFoundMsg) {
	peer := nfmsg.peer
	state, exists := sm.peerStates[peer]
	if !exists {
		log.Warnf("Received notfound message from unknown peer %s", peer)
		return
	}
	for _, inv := range nfmsg.notFound.InvList {
		// verify the hash was actually announced by the peer
		// before deleting from the global requested maps.
		switch inv.Type {
		case wire.InvTypeWitnessBlock:
			fallthrough
		case wire.InvTypeBlock:
			if _, exists := state.requestedBlocks[inv.Hash]; exists {
				delete(state.requestedBlocks, inv.Hash)
				delete(sm.requestedBlocks, inv.Hash)
			}

		case wire.InvTypeWitnessTx:
			fallthrough
		case wire.InvTypeTx:
			if _, exists := state.requestedTxns[inv.Hash]; exists {
				delete(state.requestedTxns, inv.Hash)
				delete(sm.requestedTxns, inv.Hash)
			}
		}
	}
}

// haveInventory returns whether or not the inventory represented by the passed
// inventory vector is known.  This includes checking all of the various places
// inventory can be when it is in different states such as blocks that are part
// of the main chain, on a side chain, in the orphan pool, and transactions that
// are in the memory pool (either the main pool or orphan pool).
func (sm *SyncManager) haveInventory(invVect *wire.InvVect) (bool, error) {
	switch invVect.Type {
	case wire.InvTypeWitnessBlock:
		fallthrough
	case wire.InvTypeBlock:
		// Ask chain if the block is known to it in any form (main
		// chain, side chain, or orphan).
		return sm.chain.HaveBlock(&invVect.Hash)

	case wire.InvTypeWitnessTx:
		fallthrough
	case wire.InvTypeTx:
		// Ask the transaction memory pool if the transaction is known
		// to it in any form (main pool or orphan).
		if sm.txMemPool.HaveTransaction(&invVect.Hash) {
			return true, nil
		}

		// Check if the transaction exists from the point of view of the
		// end of the main chain.  Note that this is only a best effort
		// since it is expensive to check existence of every output and
		// the only purpose of this check is to avoid downloading
		// already known transactions.  Only the first two outputs are
		// checked because the vast majority of transactions consist of
		// two outputs where one is some form of "pay-to-somebody-else"
		// and the other is a change output.
		prevOut := wire.OutPoint{Hash: invVect.Hash}
		for i := uint32(0); i < 2; i++ {
			prevOut.Index = i
			entry, err := sm.chain.FetchUtxoEntry(prevOut)
			if err != nil {
				return false, err
			}
			if entry != nil && !entry.IsSpent() {
				return true, nil
			}
		}
|

		return false, nil
	}

	// The requested inventory is an unsupported type, so just claim
	// it is known to avoid requesting it.
	return true, nil
}

// handleInvMsg handles inv messages from all peers.
// We examine the inventory advertised by the remote peer and act accordingly.
func (sm *SyncManager) handleInvMsg(imsg *invMsg) {
	peer := imsg.peer
	state, exists := sm.peerStates[peer]
	if !exists {
		log.Warnf("Received inv message from unknown peer %s", peer)
		return
	}

	// Attempt to find the final block in the inventory list. There may
	// not be one.
	lastBlock := -1
	invVects := imsg.inv.InvList
	for i := len(invVects) - 1; i >= 0; i-- {
		if invVects[i].Type == wire.InvTypeBlock {
			lastBlock = i
			break
		}
	}

	// If this inv contains a block announcement, and this isn't coming from
	// our current sync peer or we're current, then update the last
	// announced block for this peer. We'll use this information later to
	// update the heights of peers based on blocks we've accepted that they
	// previously announced.
	if lastBlock != -1 && (peer != sm.syncPeer || sm.current()) {
		peer.UpdateLastAnnouncedBlock(&invVects[lastBlock].Hash)
	}

	// Ignore invs from peers that aren't the sync peer if we are not
	// current. Helps prevent fetching a mass of orphans.
	if peer != sm.syncPeer && !sm.current() {
		return
	}

	// If our chain is current and a peer announces a block we already
	// know of, then update their current block height.
	if lastBlock != -1 && sm.current() {
		blkHeight, err := sm.chain.BlockHeightByHash(&invVects[lastBlock].Hash)
		if err == nil {
			peer.UpdateLastBlockHeight(blkHeight)
		}
	}

	// Request the advertised inventory if we don't already have it. Also,
	// request parent blocks of orphans if we receive one we already have.
	// Finally, attempt to detect potential stalls due to long side chains
	// we already have and request more blocks to prevent them.
	for i, iv := range invVects {
		// Ignore unsupported inventory types.
		switch iv.Type {
		case wire.InvTypeBlock:
		case wire.InvTypeTx:
		case wire.InvTypeWitnessBlock:
		case wire.InvTypeWitnessTx:
		default:
			continue
		}

		// Add the inventory to the cache of known inventory
		// for the peer.
		peer.AddKnownInventory(iv)

		// Ignore inventory when we're in headers-first mode.
		if sm.headersFirstMode {
			continue
		}

		// Request the inventory if we don't already have it.
		haveInv, err := sm.haveInventory(iv)
		if err != nil {
			log.Warnf("Unexpected failure when checking for "+
				"existing inventory during inv message "+
				"processing: %v", err)
			continue
		}

		if !haveInv {
			if iv.Type == wire.InvTypeTx {
				// Skip the transaction if it has already been
				// rejected.
				if _, exists := sm.rejectedTxns[iv.Hash]; exists {
					continue
				}
			}

			// Ignore block invs from non-witness enabled
			// peers, as after segwit activation we only want to
			// download from peers that can provide us full witness
			// data for blocks.
			if !peer.IsWitnessEnabled() && iv.Type == wire.InvTypeBlock {
				continue
			}

			// Add it to the request queue.
			state.requestQueue = append(state.requestQueue, iv)
			continue
		}

		if iv.Type == wire.InvTypeBlock {
			// The block is an orphan block that we already have.
			// When the existing orphan was processed, it requested
			// the missing parent blocks. When this scenario
			// happens, it means there were more blocks missing
			// than are allowed into a single inventory message. As
			// a result, once this peer requested the final
			// advertised block, the remote peer noticed and is now
			// resending the orphan block as an available block
			// to signal there are more missing blocks that need to
			// be requested.
			if sm.chain.IsKnownOrphan(&iv.Hash) {
				// Request blocks starting at the latest known
				// up to the root of the orphan that just came
				// in.
				orphanRoot := sm.chain.GetOrphanRoot(&iv.Hash)
				locator, err := sm.chain.LatestBlockLocator()
				if err != nil {
					log.Errorf("PEER: Failed to get block "+
						"locator for the latest block: "+
						"%v", err)
					continue
				}
				peer.PushGetBlocksMsg(locator, orphanRoot)
				continue
			}

			// We already have the final block advertised by this
			// inventory message, so force a request for more. This
			// should only happen if we're on a really long side
			// chain.
			if i == lastBlock {
				// Request blocks after this one up to the
				// final one the remote peer knows about (zero
				// stop hash).
				locator := sm.chain.BlockLocatorFromHash(&iv.Hash)
				peer.PushGetBlocksMsg(locator, &zeroHash)
			}
		}
	}

	// Request as much as possible at once. Anything that won't fit into
	// the request will be requested on the next inv message.
	numRequested := 0
	gdmsg := wire.NewMsgGetData()
	requestQueue := state.requestQueue
	for len(requestQueue) != 0 {
		iv := requestQueue[0]
		requestQueue[0] = nil
		requestQueue = requestQueue[1:]

		switch iv.Type {
		case wire.InvTypeWitnessBlock:
			fallthrough
		case wire.InvTypeBlock:
			// Request the block if there is not already a pending
			// request.
			if _, exists := sm.requestedBlocks[iv.Hash]; !exists {
				limitAdd(sm.requestedBlocks, iv.Hash, maxRequestedBlocks)
				limitAdd(state.requestedBlocks, iv.Hash, maxRequestedBlocks)

				if peer.IsWitnessEnabled() {
					iv.Type = wire.InvTypeWitnessBlock
				}

				gdmsg.AddInvVect(iv)
				numRequested++
			}

		case wire.InvTypeWitnessTx:
			fallthrough
		case wire.InvTypeTx:
			// Request the transaction if there is not already a
			// pending request.
			if _, exists := sm.requestedTxns[iv.Hash]; !exists {
				limitAdd(sm.requestedTxns, iv.Hash, maxRequestedTxns)
				limitAdd(state.requestedTxns, iv.Hash, maxRequestedTxns)

				// If the peer is capable, request the txn
				// including all witness data.
				if peer.IsWitnessEnabled() {
					iv.Type = wire.InvTypeWitnessTx
				}

				gdmsg.AddInvVect(iv)
				numRequested++
			}
		}

		if numRequested >= wire.MaxInvPerMsg {
			break
		}
	}

	e := wire.BaseEncoding
	// We think that the iv.Type set above is sufficient. If not:
	// if peer.IsWitnessEnabled() {
	//	e = wire.WitnessEncoding
	// }

	state.requestQueue = requestQueue
	if len(gdmsg.InvList) > 0 {
		peer.QueueMessageWithEncoding(gdmsg, nil, e)
	}
}

// blockHandler is the main handler for the sync manager. It must be run as a
// goroutine. It processes block and inv messages in a separate goroutine
// from the peer handlers so the block (MsgBlock) messages are handled by a
// single thread without needing to lock memory data structures. This is
// important because the sync manager controls which blocks are needed and how
// the fetching should proceed.
func (sm *SyncManager) blockHandler() {
	stallTicker := time.NewTicker(stallSampleInterval)
	defer stallTicker.Stop()

out:
	for {
		select {
		case m := <-sm.msgChan:
			switch msg := m.(type) {
			case *newPeerMsg:
				sm.handleNewPeerMsg(msg.peer)

			case *txMsg:
				sm.handleTxMsg(msg)
				msg.reply <- struct{}{}

			case *blockMsg:
				sm.handleBlockMsg(msg)
				msg.reply <- struct{}{}

			case *invMsg:
				sm.handleInvMsg(msg)

			case *headersMsg:
				sm.handleHeadersMsg(msg)

			case *notFoundMsg:
				sm.handleNotFoundMsg(msg)

			case *donePeerMsg:
				sm.handleDonePeerMsg(msg.peer)

			case getSyncPeerMsg:
				var peerID int32
				if sm.syncPeer != nil {
					peerID = sm.syncPeer.ID()
				}
				msg.reply <- peerID

			case processBlockMsg:
				_, isOrphan, err := sm.chain.ProcessBlock(
					msg.block, msg.flags)
				if err != nil {
					msg.reply <- processBlockResponse{
						isOrphan: false,
						err:      err,
					}
					// Skip the success reply below so the
					// caller only receives one response.
					continue
				}

				msg.reply <- processBlockResponse{
					isOrphan: isOrphan,
					err:      nil,
				}

			case isCurrentMsg:
				msg.reply <- sm.current()

			case pauseMsg:
				// Wait until the sender unpauses the manager.
				<-msg.unpause

			default:
				log.Warnf("Invalid message type in block "+
					"handler: %T", msg)
			}

		case <-stallTicker.C:
			sm.handleStallSample()

		case <-sm.quit:
			break out
		}
	}

	sm.wg.Done()
	log.Trace("Block handler done")
}

// handleBlockchainNotification handles notifications from blockchain. It does
// things such as request orphan block parents and relay accepted blocks to
// connected peers.
func (sm *SyncManager) handleBlockchainNotification(notification *blockchain.Notification) {
	switch notification.Type {
	// A block has been accepted into the block chain. Relay it to other
	// peers.
	case blockchain.NTBlockAccepted:
		// Don't relay if we are not current. Other peers that are
		// current should already know about it.
		if !sm.current() {
			return
		}

		block, ok := notification.Data.(*btcutil.Block)
		if !ok {
			log.Warnf("Chain accepted notification is not a block.")
			break
		}

		// Generate the inventory vector and relay it.
		iv := wire.NewInvVect(wire.InvTypeBlock, block.Hash())
		sm.peerNotifier.RelayInventory(iv, block.MsgBlock().Header)

	// A block has been connected to the main block chain.
	case blockchain.NTBlockConnected:
		block, ok := notification.Data.(*btcutil.Block)
		if !ok {
			log.Warnf("Chain connected notification is not a block.")
			break
		}

		// Remove all of the transactions (except the coinbase) in the
		// connected block from the transaction pool. Secondly, remove any
		// transactions which are now double spends as a result of these
		// new transactions. Finally, remove any transaction that is
		// no longer an orphan. Transactions which depend on a confirmed
		// transaction are NOT removed recursively because they are still
		// valid.
		for _, tx := range block.Transactions()[1:] {
			sm.txMemPool.RemoveTransaction(tx, false)
			sm.txMemPool.RemoveDoubleSpends(tx)
			sm.txMemPool.RemoveOrphan(tx)
			sm.peerNotifier.TransactionConfirmed(tx)
			acceptedTxs := sm.txMemPool.ProcessOrphans(tx)
			sm.peerNotifier.AnnounceNewTransactions(acceptedTxs)
		}

		// Register block with the fee estimator, if it exists.
		if sm.feeEstimator != nil {
			err := sm.feeEstimator.RegisterBlock(block)

			// If an error is somehow generated then the fee estimator
			// has entered an invalid state. Since it doesn't know how
			// to recover, create a new one.
			if err != nil {
				sm.feeEstimator = mempool.NewFeeEstimator(
					mempool.DefaultEstimateFeeMaxRollback,
					mempool.DefaultEstimateFeeMinRegisteredBlocks)
			}
		}

	// A block has been disconnected from the main block chain.
	case blockchain.NTBlockDisconnected:
		block, ok := notification.Data.(*btcutil.Block)
		if !ok {
			log.Warnf("Chain disconnected notification is not a block.")
			break
		}

		// Reinsert all of the transactions (except the coinbase) into
		// the transaction pool.
		for _, tx := range block.Transactions()[1:] {
			_, _, err := sm.txMemPool.MaybeAcceptTransaction(tx,
				false, false)
			if err != nil {
				// Remove the transaction and all transactions
				// that depend on it if it wasn't accepted into
				// the transaction pool.
				sm.txMemPool.RemoveTransaction(tx, true)
			}
		}

		// Rollback previous block recorded by the fee estimator.
		if sm.feeEstimator != nil {
			sm.feeEstimator.Rollback(block.Hash())
		}
	}
}

// NewPeer informs the sync manager of a newly active peer.
func (sm *SyncManager) NewPeer(peer *peerpkg.Peer) {
	// Ignore if we are shutting down.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}
	sm.msgChan <- &newPeerMsg{peer: peer}
}

// QueueTx adds the passed transaction message and peer to the block handling
// queue. Responds to the done channel argument after the tx message is
// processed.
func (sm *SyncManager) QueueTx(tx *btcutil.Tx, peer *peerpkg.Peer, done chan struct{}) {
	// Don't accept more transactions if we're shutting down.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		done <- struct{}{}
		return
	}

	sm.msgChan <- &txMsg{tx: tx, peer: peer, reply: done}
}

// QueueBlock adds the passed block message and peer to the block handling
// queue. Responds to the done channel argument after the block message is
// processed.
func (sm *SyncManager) QueueBlock(block *btcutil.Block, peer *peerpkg.Peer, done chan struct{}) {
	// Don't accept more blocks if we're shutting down.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		done <- struct{}{}
		return
	}

	sm.msgChan <- &blockMsg{block: block, peer: peer, reply: done}
}

// QueueInv adds the passed inv message and peer to the block handling queue.
func (sm *SyncManager) QueueInv(inv *wire.MsgInv, peer *peerpkg.Peer) {
	// No channel handling here because peers do not need to block on inv
	// messages.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}

	sm.msgChan <- &invMsg{inv: inv, peer: peer}
}

// QueueHeaders adds the passed headers message and peer to the block handling
// queue.
func (sm *SyncManager) QueueHeaders(headers *wire.MsgHeaders, peer *peerpkg.Peer) {
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
2014-01-30 18:29:02 +01:00
|
|
|
// No channel handling here because peers do not need to block on
|
|
|
|
// headers messages.
|
2017-08-24 23:46:59 +02:00
|
|
|
if atomic.LoadInt32(&sm.shutdown) != 0 {
|
2013-11-16 00:16:51 +01:00
|
|
|
return
|
|
|
|
}
|
|
|
|
|
2017-08-24 23:46:59 +02:00
|
|
|
sm.msgChan <- &headersMsg{headers: headers, peer: peer}
|
2013-11-16 00:16:51 +01:00
|
|
|
}

// QueueNotFound adds the passed notfound message and peer to the block handling
// queue.
func (sm *SyncManager) QueueNotFound(notFound *wire.MsgNotFound, peer *peerpkg.Peer) {
	// No channel handling here because peers do not need to block on
	// notfound messages.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}

	sm.msgChan <- &notFoundMsg{notFound: notFound, peer: peer}
}

// DonePeer informs the sync manager that a peer has disconnected.
func (sm *SyncManager) DonePeer(peer *peerpkg.Peer) {
	// Ignore if we are shutting down.
	if atomic.LoadInt32(&sm.shutdown) != 0 {
		return
	}

	sm.msgChan <- &donePeerMsg{peer: peer}
}

// Start begins the core block handler which processes block and inv messages.
func (sm *SyncManager) Start() {
	// Already started?
	if atomic.AddInt32(&sm.started, 1) != 1 {
		return
	}

	log.Trace("Starting sync manager")
	sm.wg.Add(1)
	go sm.blockHandler()
}

// Stop gracefully shuts down the sync manager by stopping all asynchronous
// handlers and waiting for them to finish.
func (sm *SyncManager) Stop() error {
	if atomic.AddInt32(&sm.shutdown, 1) != 1 {
		log.Warnf("Sync manager is already in the process of " +
			"shutting down")
		return nil
	}

	log.Infof("Sync manager shutting down")
	close(sm.quit)
	sm.wg.Wait()
	return nil
}
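Start and Stop rely on `atomic.AddInt32` returning 1 only for the first caller, which makes both calls idempotent without a mutex, and Stop waits on the WaitGroup so the handler goroutine has fully exited before Stop returns. A standalone sketch of that lifecycle idiom (toy `lifecycle` type, illustrative only):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// lifecycle sketches the Start/Stop idiom: atomic counters make both calls
// idempotent, and stop waits on the WaitGroup so the worker goroutine has
// exited before stop returns.
type lifecycle struct {
	started  int32
	shutdown int32
	quit     chan struct{}
	wg       sync.WaitGroup
}

func (l *lifecycle) start() bool {
	// Only the caller that bumps the counter from 0 to 1 proceeds.
	if atomic.AddInt32(&l.started, 1) != 1 {
		return false
	}
	l.wg.Add(1)
	go func() {
		defer l.wg.Done()
		<-l.quit // stand-in for the real event loop
	}()
	return true
}

func (l *lifecycle) stop() bool {
	if atomic.AddInt32(&l.shutdown, 1) != 1 {
		return false // already shutting down
	}
	close(l.quit)
	l.wg.Wait()
	return true
}

func main() {
	l := &lifecycle{quit: make(chan struct{})}
	fmt.Println(l.start(), l.start()) // second start is a no-op
	fmt.Println(l.stop(), l.stop())   // second stop is a no-op
}
```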

// SyncPeerID returns the ID of the current sync peer, or 0 if there is none.
func (sm *SyncManager) SyncPeerID() int32 {
	reply := make(chan int32)
	sm.msgChan <- getSyncPeerMsg{reply: reply}
	return <-reply
}

// ProcessBlock makes use of ProcessBlock on an internal instance of a block
// chain.
func (sm *SyncManager) ProcessBlock(block *btcutil.Block, flags blockchain.BehaviorFlags) (bool, error) {
	reply := make(chan processBlockResponse, 1)
	sm.msgChan <- processBlockMsg{block: block, flags: flags, reply: reply}
	response := <-reply
	return response.isOrphan, response.err
}

// IsCurrent returns whether or not the sync manager believes it is synced with
// the connected peers.
func (sm *SyncManager) IsCurrent() bool {
	reply := make(chan bool)
	sm.msgChan <- isCurrentMsg{reply: reply}
	return <-reply
}

// Pause pauses the sync manager until the returned channel is closed.
//
// Note that while paused, all peer and block processing is halted.  The
// message sender should avoid pausing the sync manager for long durations.
func (sm *SyncManager) Pause() chan<- struct{} {
	c := make(chan struct{})
	sm.msgChan <- pauseMsg{c}
	return c
}
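SyncPeerID, IsCurrent, and Pause all follow the same discipline: a single event-loop goroutine owns the manager's state, and callers interact with it only by sending messages carrying reply channels. Pause simply makes the loop block on a caller-owned channel until the caller closes it. A self-contained toy version of that loop (the `eventLoop` helper and message structs here are illustrative stand-ins, not the real blockHandler):

```go
package main

import "fmt"

// pauseMsg mirrors the pattern above: the caller hands the loop a channel
// and the loop blocks until the caller closes it, guaranteeing no state is
// touched while the caller holds the pause.
type pauseMsg struct{ unpause <-chan struct{} }

// isCurrentMsg is a synchronous query answered over a reply channel.
type isCurrentMsg struct{ reply chan bool }

// eventLoop is a toy stand-in for blockHandler: one goroutine owns all state
// and serves queries and pauses from a single message channel.
func eventLoop(msgChan <-chan interface{}, quit <-chan struct{}) {
	current := true // private state; only this goroutine reads or writes it
	for {
		select {
		case m := <-msgChan:
			switch msg := m.(type) {
			case isCurrentMsg:
				msg.reply <- current
			case pauseMsg:
				<-msg.unpause // halted until the caller closes the channel
			}
		case <-quit:
			return
		}
	}
}

func main() {
	msgChan := make(chan interface{})
	quit := make(chan struct{})
	go eventLoop(msgChan, quit)

	// Request/reply: a synchronous query against the loop's private state.
	reply := make(chan bool)
	msgChan <- isCurrentMsg{reply: reply}
	fmt.Println(<-reply)

	// Pause, then resume by closing the channel, per Pause's contract.
	c := make(chan struct{})
	msgChan <- pauseMsg{unpause: c}
	close(c)

	msgChan <- isCurrentMsg{reply: reply}
	fmt.Println(<-reply)
	close(quit)
}
```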

// New constructs a new SyncManager. Use Start to begin processing asynchronous
// block, tx, and inv updates.
func New(config *Config) (*SyncManager, error) {
	sm := SyncManager{
		peerNotifier:    config.PeerNotifier,
		chain:           config.Chain,
		txMemPool:       config.TxMemPool,
		chainParams:     config.ChainParams,
		rejectedTxns:    make(map[chainhash.Hash]struct{}),
		requestedTxns:   make(map[chainhash.Hash]struct{}),
		requestedBlocks: make(map[chainhash.Hash]struct{}),
		peerStates:      make(map[*peerpkg.Peer]*peerSyncState),
		progressLogger:  newBlockProgressLogger("Processed", log),
		msgChan:         make(chan interface{}, config.MaxPeers*3),
		headerList:      list.New(),
		quit:            make(chan struct{}),
		feeEstimator:    config.FeeEstimator,
	}

	best := sm.chain.BestSnapshot()
	if !config.DisableCheckpoints {
		// Initialize the next checkpoint based on the current height.
		sm.nextCheckpoint = sm.findNextHeaderCheckpoint(best.Height)
		if sm.nextCheckpoint != nil {
			sm.resetHeaderState(&best.Hash, best.Height)
		}
	} else {
		log.Info("Checkpoints are disabled")
	}

	sm.chain.Subscribe(sm.handleBlockchainNotification)

	return &sm, nil
}
|