Merge btcchain repo into blockchain directory.
commit 74ae61f048
36 changed files with 6361 additions and 0 deletions
119
blockchain/README.md
Normal file
@@ -0,0 +1,119 @@
blockchain
==========

[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)]
(https://travis-ci.org/btcsuite/btcd) [![ISC License]
(http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)

Package blockchain implements bitcoin block handling and chain selection rules.
The test coverage is currently only around 60%, but will be increasing over
time. See `test_coverage.txt` for the gocov coverage report. Alternatively, if
you are running a POSIX OS, you can run the `cov_report.sh` script for a
real-time report. Package blockchain is licensed under the liberal ISC license.

There is an associated blog post about the release of this package
[here](https://blog.conformal.com/btcchain-the-bitcoin-chain-package-from-bctd/).

This package has intentionally been designed so it can be used as a standalone
package for any projects needing to handle processing of blocks into the bitcoin
block chain.

## Documentation

[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)]
(http://godoc.org/github.com/btcsuite/btcd/blockchain)

Full `go doc` style documentation for the project can be viewed online without
installing this package by using the GoDoc site here:
http://godoc.org/github.com/btcsuite/btcd/blockchain

You can also view the documentation locally once the package is installed with
the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
http://localhost:6060/pkg/github.com/btcsuite/btcd/blockchain

## Installation

```bash
$ go get github.com/btcsuite/btcd/blockchain
```

## Bitcoin Chain Processing Overview

Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
those rules to provide some intuition into what is going on under the hood, but
is by no means exhaustive:

- Reject duplicate blocks
- Perform a series of sanity checks on the block and its transactions such as
  verifying proof of work, timestamps, number and character of transactions,
  transaction amounts, script complexity, and merkle root calculations
- Compare the block against predetermined checkpoints for expected timestamps
  and difficulty based on elapsed time since the checkpoint
- Save the most recent orphan blocks for a limited time in case their parent
  blocks become available
- Stop processing if the block is an orphan as the rest of the processing
  depends on the block's position within the block chain
- Perform a series of more thorough checks that depend on the block's position
  within the block chain such as verifying block difficulties adhere to
  difficulty retarget rules, timestamps are after the median of the last
  several blocks, all transactions are finalized, checkpoint blocks match, and
  block versions are in line with the previous blocks
- Determine how the block fits into the chain and perform different actions
  accordingly in order to ensure any side chains which have higher difficulty
  than the main chain become the new main chain
- When a block is being connected to the main chain (either through
  reorganization of a side chain to the main chain or just extending the
  main chain), perform further checks on the block's transactions such as
  verifying transaction duplicates, script complexity for the combination of
  connected scripts, coinbase maturity, double spends, and connected
  transaction values
- Run the transaction scripts to verify the spender is allowed to spend the
  coins
- Insert the block into the block database
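
Putting these pieces together, the sketch below shows roughly how a caller
might wire up the package, assembled from the test helpers included in this
commit (`common_test.go` and `chain_test.go`). `someBlock` is a hypothetical
`*btcutil.Block` obtained elsewhere, and error handling is kept minimal; treat
it as an illustration rather than a complete program:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/database"
	_ "github.com/btcsuite/btcd/database/memdb"
	"github.com/btcsuite/btcnet"
	"github.com/btcsuite/btcutil"
)

// exampleProcessBlock mirrors the setup used by the package's own tests:
// create a database, seed it with the genesis block, build a chain instance,
// and hand it a block to validate.
func exampleProcessBlock(someBlock *btcutil.Block) error {
	db, err := database.CreateDB("memdb")
	if err != nil {
		return err
	}
	defer db.Close()

	// The chain expects the genesis block to already be in the database.
	genesis := btcutil.NewBlock(btcnet.MainNetParams.GenesisBlock)
	if _, err := db.InsertBlock(genesis); err != nil {
		return err
	}

	// Create the chain instance and process the block.  ProcessBlock
	// reports whether the block was an orphan in addition to returning
	// any rule violations as errors.
	chain := blockchain.New(db, &btcnet.MainNetParams, nil)
	isOrphan, err := chain.ProcessBlock(someBlock,
		blockchain.NewMedianTime(), blockchain.BFNone)
	if err != nil {
		return err
	}
	fmt.Println("orphan:", isOrphan)
	return nil
}
```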

## Examples

* [ProcessBlock Example]
  (http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BlockChain-ProcessBlock)
  Demonstrates how to create a new chain instance and use ProcessBlock to
  attempt to add a block to the chain. This example intentionally
  attempts to insert a duplicate genesis block to illustrate how an invalid
  block is handled.

* [CompactToBig Example]
  (http://godoc.org/github.com/btcsuite/btcd/blockchain#example-CompactToBig)
  Demonstrates how to convert the compact "bits" in a block header which
  represent the target difficulty to a big integer and display it using the
  typical hex notation.

* [BigToCompact Example]
  (http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BigToCompact)
  Demonstrates how to convert a target difficulty into the
  compact "bits" in a block header which represent that target difficulty.
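
For a quick feel for the two difficulty helpers, here is a small illustrative
snippet (it is not one of the godoc examples listed above). The value
`0x1d00ffff` is the well-known compact target from the original mainnet
genesis block header:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	// Expand the compact "bits" value into the full 256-bit target.
	bits := uint32(0x1d00ffff)
	target := blockchain.CompactToBig(bits)
	fmt.Printf("target: %064x\n", target)

	// Converting back to compact form round-trips cleanly for this value
	// since only the most significant bytes are populated.
	fmt.Printf("bits:   %08x\n", blockchain.BigToCompact(target))

	// CalcWork gives the work value the chain accumulates for a block
	// mined at this difficulty.
	fmt.Println("work:  ", blockchain.CalcWork(bits))
}
```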

## GPG Verification Key

All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from Conformal. To verify the
signature perform the following:

- Download the public key from the Conformal website at
  https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt

- Import the public key into your GPG keyring:
  ```bash
  gpg --import GIT-GPG-KEY-conformal.txt
  ```

- Verify the release tag with the following command where `TAG_NAME` is a
  placeholder for the specific tag:
  ```bash
  git tag -v TAG_NAME
  ```

## License

Package blockchain is licensed under the [copyfree](http://copyfree.org) ISC
License.
182
blockchain/accept.go
Normal file
@@ -0,0 +1,182 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package blockchain

import (
	"fmt"

	"github.com/btcsuite/btcutil"
)

// maybeAcceptBlock potentially accepts a block into the memory block chain.
// It performs several validation checks which depend on its position within
// the block chain before adding it. The block is expected to have already gone
// through ProcessBlock before calling this function with it.
//
// The flags modify the behavior of this function as follows:
//  - BFFastAdd: The somewhat expensive BIP0034 validation is not performed.
//  - BFDryRun: The memory chain index will not be pruned and no accept
//    notification will be sent since the block is not being accepted.
func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags) error {
	fastAdd := flags&BFFastAdd == BFFastAdd
	dryRun := flags&BFDryRun == BFDryRun

	// Get a block node for the block previous to this one.  Will be nil
	// if this is the genesis block.
	prevNode, err := b.getPrevNodeFromBlock(block)
	if err != nil {
		log.Errorf("getPrevNodeFromBlock: %v", err)
		return err
	}

	// The height of this block is one more than the referenced previous
	// block.
	blockHeight := int64(0)
	if prevNode != nil {
		blockHeight = prevNode.height + 1
	}
	block.SetHeight(blockHeight)

	blockHeader := &block.MsgBlock().Header
	if !fastAdd {
		// Ensure the difficulty specified in the block header matches
		// the calculated difficulty based on the previous block and
		// difficulty retarget rules.
		expectedDifficulty, err := b.calcNextRequiredDifficulty(prevNode,
			block.MsgBlock().Header.Timestamp)
		if err != nil {
			return err
		}
		blockDifficulty := blockHeader.Bits
		if blockDifficulty != expectedDifficulty {
			str := "block difficulty of %d is not the expected value of %d"
			str = fmt.Sprintf(str, blockDifficulty, expectedDifficulty)
			return ruleError(ErrUnexpectedDifficulty, str)
		}

		// Ensure the timestamp for the block header is after the
		// median time of the last several blocks (medianTimeBlocks).
		medianTime, err := b.calcPastMedianTime(prevNode)
		if err != nil {
			log.Errorf("calcPastMedianTime: %v", err)
			return err
		}
		if !blockHeader.Timestamp.After(medianTime) {
			str := "block timestamp of %v is not after expected %v"
			str = fmt.Sprintf(str, blockHeader.Timestamp,
				medianTime)
			return ruleError(ErrTimeTooOld, str)
		}

		// Ensure all transactions in the block are finalized.
		for _, tx := range block.Transactions() {
			if !IsFinalizedTransaction(tx, blockHeight,
				blockHeader.Timestamp) {
				str := fmt.Sprintf("block contains "+
					"unfinalized transaction %v", tx.Sha())
				return ruleError(ErrUnfinalizedTx, str)
			}
		}
	}

	// Ensure chain matches up to predetermined checkpoints.
	// It's safe to ignore the error on Sha since it's already cached.
	blockHash, _ := block.Sha()
	if !b.verifyCheckpoint(blockHeight, blockHash) {
		str := fmt.Sprintf("block at height %d does not match "+
			"checkpoint hash", blockHeight)
		return ruleError(ErrBadCheckpoint, str)
	}

	// Find the previous checkpoint and prevent blocks which fork the main
	// chain before it.  This prevents storage of new, otherwise valid,
	// blocks which build off of old blocks that are likely at a much easier
	// difficulty and therefore could be used to waste cache and disk space.
	checkpointBlock, err := b.findPreviousCheckpoint()
	if err != nil {
		return err
	}
	if checkpointBlock != nil && blockHeight < checkpointBlock.Height() {
		str := fmt.Sprintf("block at height %d forks the main chain "+
			"before the previous checkpoint at height %d",
			blockHeight, checkpointBlock.Height())
		return ruleError(ErrForkTooOld, str)
	}

	if !fastAdd {
		// Reject version 1 blocks once a majority of the network has
		// upgraded.  This is part of BIP0034.
		if blockHeader.Version < 2 {
			if b.isMajorityVersion(2, prevNode,
				b.netParams.BlockV1RejectNumRequired,
				b.netParams.BlockV1RejectNumToCheck) {

				str := "new blocks with version %d are no " +
					"longer valid"
				str = fmt.Sprintf(str, blockHeader.Version)
				return ruleError(ErrBlockVersionTooOld, str)
			}
		}

		// Ensure coinbase starts with serialized block heights for
		// blocks whose version is the serializedHeightVersion or
		// newer once a majority of the network has upgraded.  This is
		// part of BIP0034.
		if blockHeader.Version >= serializedHeightVersion {
			if b.isMajorityVersion(serializedHeightVersion,
				prevNode,
				b.netParams.CoinbaseBlockHeightNumRequired,
				b.netParams.CoinbaseBlockHeightNumToCheck) {

				expectedHeight := int64(0)
				if prevNode != nil {
					expectedHeight = prevNode.height + 1
				}
				coinbaseTx := block.Transactions()[0]
				err := checkSerializedHeight(coinbaseTx,
					expectedHeight)
				if err != nil {
					return err
				}
			}
		}
	}

	// Prune block nodes which are no longer needed before creating
	// a new node.
	if !dryRun {
		err = b.pruneBlockNodes()
		if err != nil {
			return err
		}
	}

	// Create a new block node for the block and add it to the in-memory
	// block chain (could be either a side chain or the main chain).
	newNode := newBlockNode(blockHeader, blockHash, blockHeight)
	if prevNode != nil {
		newNode.parent = prevNode
		newNode.height = blockHeight
		newNode.workSum.Add(prevNode.workSum, newNode.workSum)
	}

	// Connect the passed block to the chain while respecting proper chain
	// selection according to the chain with the most proof of work.  This
	// also handles validation of the transaction scripts.
	err = b.connectBestChain(newNode, block, flags)
	if err != nil {
		return err
	}

	// Notify the caller that the new block was accepted into the block
	// chain.  The caller would typically want to react by relaying the
	// inventory to other peers.
	if !dryRun {
		b.sendNotification(NTBlockAccepted, block)
	}

	return nil
}
149
blockchain/blocklocator.go
Normal file
@@ -0,0 +1,149 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package blockchain

import (
	"github.com/btcsuite/btcwire"
)

// BlockLocator is used to help locate a specific block.  The algorithm for
// building the block locator is to add the hashes in reverse order until
// the genesis block is reached.  In order to keep the list of locator hashes
// to a reasonable number of entries, first the most recent previous 10 block
// hashes are added, then the step is doubled each loop iteration to
// exponentially decrease the number of hashes as a function of the distance
// from the block being located.
//
// For example, assume you have a block chain with a side chain as depicted
// below:
//	genesis -> 1 -> 2 -> ... -> 15 -> 16  -> 17  -> 18
//	                                   \-> 16a -> 17a
//
// The block locator for block 17a would be the hashes of blocks:
//	[17a 16a 15 14 13 12 11 10 9 8 6 2 genesis]
type BlockLocator []*btcwire.ShaHash

// BlockLocatorFromHash returns a block locator for the passed block hash.
// See BlockLocator for details on the algorithm used to create a block locator.
//
// In addition to the general algorithm referenced above, there are a couple of
// special cases which are handled:
//
//  - If the genesis hash is passed, there are no previous hashes to add and
//    therefore the block locator will only consist of the genesis hash
//  - If the passed hash is not currently known, the block locator will only
//    consist of the passed hash
func (b *BlockChain) BlockLocatorFromHash(hash *btcwire.ShaHash) BlockLocator {
	// The locator contains the requested hash at the very least.
	locator := make(BlockLocator, 0, btcwire.MaxBlockLocatorsPerMsg)
	locator = append(locator, hash)

	// Nothing more to do if a locator for the genesis hash was requested.
	if hash.IsEqual(b.netParams.GenesisHash) {
		return locator
	}

	// Attempt to find the height of the block that corresponds to the
	// passed hash, and if it's on a side chain, also find the height at
	// which it forks from the main chain.
	blockHeight := int64(-1)
	forkHeight := int64(-1)
	node, exists := b.index[*hash]
	if !exists {
		// Try to look up the height for passed block hash.  Assume an
		// error means it doesn't exist and just return the locator for
		// the block itself.
		block, err := b.db.FetchBlockBySha(hash)
		if err != nil {
			return locator
		}
		blockHeight = block.Height()
	} else {
		blockHeight = node.height

		// Find the height at which this node forks from the main chain
		// if the node is on a side chain.
		if !node.inMainChain {
			for n := node; n.parent != nil; n = n.parent {
				if n.inMainChain {
					forkHeight = n.height
					break
				}
			}
		}
	}

	// Generate the block locators according to the algorithm described in
	// the BlockLocator comment and make sure to leave room for the final
	// genesis hash.
	iterNode := node
	increment := int64(1)
	for len(locator) < btcwire.MaxBlockLocatorsPerMsg-1 {
		// Once there are 10 locators, exponentially increase the
		// distance between each block locator.
		if len(locator) > 10 {
			increment *= 2
		}
		blockHeight -= increment
		if blockHeight < 1 {
			break
		}

		// As long as this is still on the side chain, walk backwards
		// along the side chain nodes to each block height.
		if forkHeight != -1 && blockHeight > forkHeight {
			// Intentionally use parent field instead of the
			// getPrevNodeFromNode function since we don't want to
			// dynamically load nodes when building block locators.
			// Side chain blocks should always be in memory already,
			// and if they aren't for some reason it's ok to skip
			// them.
			for iterNode != nil && blockHeight > iterNode.height {
				iterNode = iterNode.parent
			}
			if iterNode != nil && iterNode.height == blockHeight {
				locator = append(locator, iterNode.hash)
			}
			continue
		}

		// The desired block height is in the main chain, so look it up
		// from the main chain database.
		h, err := b.db.FetchBlockShaByHeight(blockHeight)
		if err != nil {
			// This shouldn't happen and it's ok to ignore block
			// locators, so just continue to the next one.
			log.Warnf("Lookup of known valid height failed %v",
				blockHeight)
			continue
		}
		locator = append(locator, h)
	}

	// Append the appropriate genesis block.
	locator = append(locator, b.netParams.GenesisHash)
	return locator
}

// LatestBlockLocator returns a block locator for the latest known tip of the
// main (best) chain.
func (b *BlockChain) LatestBlockLocator() (BlockLocator, error) {
	// Lookup the latest main chain hash if the best chain hasn't been set
	// yet.
	if b.bestChain == nil {
		// Get the latest block hash for the main chain from the
		// database.
		hash, _, err := b.db.NewestSha()
		if err != nil {
			return nil, err
		}

		return b.BlockLocatorFromHash(hash), nil
	}

	// The best chain is set, so use its hash.
	return b.BlockLocatorFromHash(b.bestChain.hash), nil
}
1097
blockchain/chain.go
Normal file
File diff suppressed because it is too large
114
blockchain/chain_test.go
Normal file
@@ -0,0 +1,114 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package blockchain_test

import (
	"testing"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcnet"
	"github.com/btcsuite/btcutil"
	"github.com/btcsuite/btcwire"
)

// TestHaveBlock tests the HaveBlock API to ensure proper functionality.
func TestHaveBlock(t *testing.T) {
	// Load up blocks such that there is a side chain.
	// (genesis block) -> 1 -> 2 -> 3 -> 4
	//                          \-> 3a
	testFiles := []string{
		"blk_0_to_4.dat.bz2",
		"blk_3A.dat.bz2",
	}

	var blocks []*btcutil.Block
	for _, file := range testFiles {
		blockTmp, err := loadBlocks(file)
		if err != nil {
			t.Errorf("Error loading file: %v\n", err)
			return
		}
		for _, block := range blockTmp {
			blocks = append(blocks, block)
		}
	}

	// Create a new database and chain instance to run tests against.
	chain, teardownFunc, err := chainSetup("haveblock")
	if err != nil {
		t.Errorf("Failed to setup chain instance: %v", err)
		return
	}
	defer teardownFunc()

	// Since we're not dealing with the real block chain, disable
	// checkpoints and set the coinbase maturity to 1.
	chain.DisableCheckpoints(true)
	blockchain.TstSetCoinbaseMaturity(1)

	timeSource := blockchain.NewMedianTime()
	for i := 1; i < len(blocks); i++ {
		isOrphan, err := chain.ProcessBlock(blocks[i], timeSource,
			blockchain.BFNone)
		if err != nil {
			t.Errorf("ProcessBlock fail on block %v: %v\n", i, err)
			return
		}
		if isOrphan {
			t.Errorf("ProcessBlock incorrectly returned block %v "+
				"is an orphan\n", i)
			return
		}
	}

	// Insert an orphan block.
	isOrphan, err := chain.ProcessBlock(btcutil.NewBlock(&Block100000),
		timeSource, blockchain.BFNone)
	if err != nil {
		t.Errorf("Unable to process block: %v", err)
		return
	}
	if !isOrphan {
		t.Errorf("ProcessBlock indicated block is not an orphan when " +
			"it should be\n")
		return
	}

	tests := []struct {
		hash string
		want bool
	}{
		// Genesis block should be present (in the main chain).
		{hash: btcnet.MainNetParams.GenesisHash.String(), want: true},

		// Block 3a should be present (on a side chain).
		{hash: "00000000474284d20067a4d33f6a02284e6ef70764a3a26d6a5b9df52ef663dd", want: true},

		// Block 100000 should be present (as an orphan).
		{hash: "000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506", want: true},

		// Random hashes should not be available.
		{hash: "123", want: false},
	}

	for i, test := range tests {
		hash, err := btcwire.NewShaHashFromStr(test.hash)
		if err != nil {
			t.Errorf("NewShaHashFromStr: %v", err)
			continue
		}

		result, err := chain.HaveBlock(hash)
		if err != nil {
			t.Errorf("HaveBlock #%d unexpected error: %v", i, err)
			return
		}
		if result != test.want {
			t.Errorf("HaveBlock #%d got %v want %v", i, result,
				test.want)
			continue
		}
	}
}
285
blockchain/checkpoints.go
Normal file
|
@@ -0,0 +1,285 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/btcsuite/btcd/txscript"
|
||||
"github.com/btcsuite/btcnet"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// CheckpointConfirmations is the number of blocks before the end of the current
|
||||
// best block chain that a good checkpoint candidate must be.
|
||||
const CheckpointConfirmations = 2016
|
||||
|
||||
// newShaHashFromStr converts the passed big-endian hex string into a
|
||||
// btcwire.ShaHash. It only differs from the one available in btcwire in that
|
||||
// it ignores the error since it will only (and must only) be called with
|
||||
// hard-coded, and therefore known good, hashes.
|
||||
func newShaHashFromStr(hexStr string) *btcwire.ShaHash {
|
||||
sha, _ := btcwire.NewShaHashFromStr(hexStr)
|
||||
return sha
|
||||
}
|
||||
|
||||
// DisableCheckpoints provides a mechanism to disable validation against
|
||||
// checkpoints which you DO NOT want to do in production. It is provided only
|
||||
// for debug purposes.
|
||||
func (b *BlockChain) DisableCheckpoints(disable bool) {
|
||||
b.noCheckpoints = disable
|
||||
}
|
||||
|
||||
// Checkpoints returns a slice of checkpoints (regardless of whether they are
|
||||
// already known). When checkpoints are disabled or there are no checkpoints
|
||||
// for the active network, it will return nil.
|
||||
func (b *BlockChain) Checkpoints() []btcnet.Checkpoint {
|
||||
if b.noCheckpoints || len(b.netParams.Checkpoints) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
return b.netParams.Checkpoints
|
||||
}
|
||||
|
||||
// LatestCheckpoint returns the most recent checkpoint (regardless of whether it
|
||||
// is already known). When checkpoints are disabled or there are no checkpoints
|
||||
// for the active network, it will return nil.
|
||||
func (b *BlockChain) LatestCheckpoint() *btcnet.Checkpoint {
|
||||
if b.noCheckpoints || len(b.netParams.Checkpoints) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
checkpoints := b.netParams.Checkpoints
|
||||
return &checkpoints[len(checkpoints)-1]
|
||||
}
|
||||
|
||||
// verifyCheckpoint returns whether the passed block height and hash combination
|
||||
// match the hard-coded checkpoint data. It also returns true if there is no
|
||||
// checkpoint data for the passed block height.
|
||||
func (b *BlockChain) verifyCheckpoint(height int64, hash *btcwire.ShaHash) bool {
|
||||
if b.noCheckpoints || len(b.netParams.Checkpoints) == 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
// Nothing to check if there is no checkpoint data for the block height.
|
||||
checkpoint, exists := b.checkpointsByHeight[height]
|
||||
if !exists {
|
||||
return true
|
||||
}
|
||||
|
||||
if !checkpoint.Hash.IsEqual(hash) {
|
||||
return false
|
||||
}
|
||||
|
||||
log.Infof("Verified checkpoint at height %d/block %s", checkpoint.Height,
|
||||
checkpoint.Hash)
|
||||
return true
|
||||
}
|
||||
|
||||
// findPreviousCheckpoint finds the most recent checkpoint that is already
|
||||
// available in the downloaded portion of the block chain and returns the
|
||||
// associated block. It returns nil if a checkpoint can't be found (this should
|
||||
// really only happen for blocks before the first checkpoint).
|
||||
func (b *BlockChain) findPreviousCheckpoint() (*btcutil.Block, error) {
|
||||
if b.noCheckpoints || len(b.netParams.Checkpoints) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// No checkpoints.
|
||||
checkpoints := b.netParams.Checkpoints
|
||||
numCheckpoints := len(checkpoints)
|
||||
if numCheckpoints == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Perform the initial search to find and cache the latest known
|
||||
// checkpoint if the best chain is not known yet or we haven't already
|
||||
// previously searched.
|
||||
if b.bestChain == nil || (b.checkpointBlock == nil && b.nextCheckpoint == nil) {
|
||||
// Loop backwards through the available checkpoints to find one
|
||||
// that we already have.
|
||||
checkpointIndex := -1
|
||||
for i := numCheckpoints - 1; i >= 0; i-- {
|
||||
exists, err := b.db.ExistsSha(checkpoints[i].Hash)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if exists {
|
||||
checkpointIndex = i
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// No known latest checkpoint. This will only happen on blocks
|
||||
// before the first known checkpoint. So, set the next expected
|
||||
// checkpoint to the first checkpoint and return the fact there
|
||||
// is no latest known checkpoint block.
|
||||
if checkpointIndex == -1 {
|
||||
b.nextCheckpoint = &checkpoints[0]
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Cache the latest known checkpoint block for future lookups.
|
||||
checkpoint := checkpoints[checkpointIndex]
|
||||
block, err := b.db.FetchBlockBySha(checkpoint.Hash)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
b.checkpointBlock = block
|
||||
|
||||
// Set the next expected checkpoint block accordingly.
|
||||
b.nextCheckpoint = nil
|
||||
if checkpointIndex < numCheckpoints-1 {
|
||||
b.nextCheckpoint = &checkpoints[checkpointIndex+1]
|
||||
}
|
||||
|
||||
return block, nil
|
||||
}
|
||||
|
||||
// At this point we've already searched for the latest known checkpoint,
|
||||
// so when there is no next checkpoint, the current checkpoint lockin
|
||||
// will always be the latest known checkpoint.
|
||||
if b.nextCheckpoint == nil {
|
||||
return b.checkpointBlock, nil
|
||||
}
|
||||
|
||||
// When there is a next checkpoint and the height of the current best
|
||||
// chain does not exceed it, the current checkpoint lockin is still
|
||||
// the latest known checkpoint.
|
||||
if b.bestChain.height < b.nextCheckpoint.Height {
|
||||
return b.checkpointBlock, nil
|
||||
}
|
||||
|
||||
// We've reached or exceeded the next checkpoint height. Note that
|
||||
// once a checkpoint lockin has been reached, forks are prevented from
|
||||
// any blocks before the checkpoint, so we don't have to worry about the
|
||||
// checkpoint going away out from under us due to a chain reorganize.
|
||||
|
||||
// Cache the latest known checkpoint block for future lookups. Note
|
||||
// that if this lookup fails something is very wrong since the chain
|
||||
// has already passed the checkpoint which was verified as accurate
|
||||
// before inserting it.
|
||||
block, err := b.db.FetchBlockBySha(b.nextCheckpoint.Hash)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
b.checkpointBlock = block
|
||||
|
||||
// Set the next expected checkpoint.
|
||||
checkpointIndex := -1
|
||||
for i := numCheckpoints - 1; i >= 0; i-- {
|
||||
if checkpoints[i].Hash.IsEqual(b.nextCheckpoint.Hash) {
|
||||
checkpointIndex = i
|
||||
break
|
||||
}
|
||||
}
|
||||
b.nextCheckpoint = nil
|
||||
if checkpointIndex != -1 && checkpointIndex < numCheckpoints-1 {
|
||||
b.nextCheckpoint = &checkpoints[checkpointIndex+1]
|
||||
}
|
||||
|
||||
return b.checkpointBlock, nil
|
||||
}
|
||||
|
||||
// isNonstandardTransaction determines whether a transaction contains any
|
||||
// scripts which are not one of the standard types.
|
||||
func isNonstandardTransaction(tx *btcutil.Tx) bool {
|
||||
// TODO(davec): Should there be checks for the input signature scripts?
|
||||
|
||||
// Check all of the output public key scripts for non-standard scripts.
|
||||
for _, txOut := range tx.MsgTx().TxOut {
|
||||
scriptClass := txscript.GetScriptClass(txOut.PkScript)
|
||||
if scriptClass == txscript.NonStandardTy {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsCheckpointCandidate returns whether or not the passed block is a good
|
||||
// checkpoint candidate.
|
||||
//
|
||||
// The factors used to determine a good checkpoint are:
|
||||
// - The block must be in the main chain
|
||||
// - The block must be at least 'CheckpointConfirmations' blocks prior to the
|
||||
// current end of the main chain
|
||||
// - The timestamps for the blocks before and after the checkpoint must have
|
||||
// timestamps which are also before and after the checkpoint, respectively
|
||||
// (due to the median time allowance this is not always the case)
|
||||
// - The block must not contain any strange transaction such as those with
|
||||
// nonstandard scripts
|
||||
//
|
||||
// The intent is that candidates are reviewed by a developer to make the final
|
||||
// decision and then manually added to the list of checkpoints for a network.
|
||||
func (b *BlockChain) IsCheckpointCandidate(block *btcutil.Block) (bool, error) {
|
||||
// Checkpoints must be enabled.
|
||||
if b.noCheckpoints {
|
||||
return false, fmt.Errorf("checkpoints are disabled")
|
||||
}
|
||||
|
||||
blockHash, err := block.Sha()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// A checkpoint must be in the main chain.
|
||||
exists, err := b.db.ExistsSha(blockHash)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if !exists {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// A checkpoint must be at least CheckpointConfirmations blocks before
|
||||
// the end of the main chain.
|
||||
blockHeight := block.Height()
|
||||
_, mainChainHeight, err := b.db.NewestSha()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if blockHeight > (mainChainHeight - CheckpointConfirmations) {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// Get the previous block.
|
||||
prevHash := &block.MsgBlock().Header.PrevBlock
|
||||
prevBlock, err := b.db.FetchBlockBySha(prevHash)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Get the next block.
|
||||
nextHash, err := b.db.FetchBlockShaByHeight(blockHeight + 1)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
nextBlock, err := b.db.FetchBlockBySha(nextHash)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// A checkpoint must have timestamps for the block and the blocks on
|
||||
// either side of it in order (due to the median time allowance this is
|
||||
// not always the case).
|
||||
prevTime := prevBlock.MsgBlock().Header.Timestamp
|
||||
curTime := block.MsgBlock().Header.Timestamp
|
||||
nextTime := nextBlock.MsgBlock().Header.Timestamp
|
||||
if prevTime.After(curTime) || nextTime.Before(curTime) {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// A checkpoint must have transactions that only contain standard
|
||||
// scripts.
|
||||
for _, tx := range block.Transactions() {
|
||||
if isNonstandardTransaction(tx) {
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
226
blockchain/common_test.go
Normal file
|
@@ -0,0 +1,226 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"compress/bzip2"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
"github.com/btcsuite/btcd/database"
|
||||
_ "github.com/btcsuite/btcd/database/ldb"
|
||||
_ "github.com/btcsuite/btcd/database/memdb"
|
||||
"github.com/btcsuite/btcnet"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// testDbType is the database backend type to use for the tests.
|
||||
const testDbType = "memdb"
|
||||
|
||||
// testDbRoot is the root directory used to create all test databases.
|
||||
const testDbRoot = "testdbs"
|
||||
|
||||
// fileExists returns whether or not the named file or directory exists.
|
||||
func fileExists(name string) bool {
|
||||
if _, err := os.Stat(name); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// isSupportedDbType returns whether or not the passed database type is
|
||||
// currently supported.
|
||||
func isSupportedDbType(dbType string) bool {
|
||||
supportedDBs := database.SupportedDBs()
|
||||
for _, sDbType := range supportedDBs {
|
||||
if dbType == sDbType {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// chainSetup is used to create a new db and chain instance with the genesis
|
||||
// block already inserted. In addition to the new chain instance, it returns
|
||||
// a teardown function the caller should invoke when done testing to clean up.
|
||||
func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
|
||||
if !isSupportedDbType(testDbType) {
|
||||
return nil, nil, fmt.Errorf("unsupported db type %v", testDbType)
|
||||
}
|
||||
|
||||
// Handle memory database specially since it doesn't need the disk
|
||||
// specific handling.
|
||||
var db database.Db
|
||||
var teardown func()
|
||||
if testDbType == "memdb" {
|
||||
ndb, err := database.CreateDB(testDbType)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("error creating db: %v", err)
|
||||
}
|
||||
db = ndb
|
||||
|
||||
// Setup a teardown function for cleaning up. This function is
|
||||
// returned to the caller to be invoked when it is done testing.
|
||||
teardown = func() {
|
||||
db.Close()
|
||||
}
|
||||
} else {
|
||||
// Create the root directory for test databases.
|
||||
if !fileExists(testDbRoot) {
|
||||
if err := os.MkdirAll(testDbRoot, 0700); err != nil {
|
||||
err := fmt.Errorf("unable to create test db "+
|
||||
"root: %v", err)
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// Create a new database to store the accepted blocks into.
|
||||
dbPath := filepath.Join(testDbRoot, dbName)
|
||||
_ = os.RemoveAll(dbPath)
|
||||
ndb, err := database.CreateDB(testDbType, dbPath)
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("error creating db: %v", err)
|
||||
}
|
||||
db = ndb
|
||||
|
||||
// Setup a teardown function for cleaning up. This function is
|
||||
// returned to the caller to be invoked when it is done testing.
|
||||
teardown = func() {
|
||||
dbVersionPath := filepath.Join(testDbRoot, dbName+".ver")
|
||||
db.Sync()
|
||||
db.Close()
|
||||
os.RemoveAll(dbPath)
|
||||
os.Remove(dbVersionPath)
|
||||
os.RemoveAll(testDbRoot)
|
||||
}
|
||||
}
|
||||
|
||||
// Insert the main network genesis block. This is part of the initial
|
||||
// database setup.
|
||||
genesisBlock := btcutil.NewBlock(btcnet.MainNetParams.GenesisBlock)
|
||||
_, err := db.InsertBlock(genesisBlock)
|
||||
if err != nil {
|
||||
teardown()
|
||||
err := fmt.Errorf("failed to insert genesis block: %v", err)
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
chain := blockchain.New(db, &btcnet.MainNetParams, nil)
|
||||
return chain, teardown, nil
|
||||
}
|
||||
|
||||
// loadTxStore returns a transaction store loaded from a file.
|
||||
func loadTxStore(filename string) (blockchain.TxStore, error) {
|
||||
// The txstore file format is:
|
||||
// <num tx data entries> <tx length> <serialized tx> <blk height>
|
||||
// <num spent bits> <spent bits>
|
||||
//
|
||||
// All num and length fields are little-endian uint32s. The spent bits
|
||||
// field is padded to a byte boundary.
|
||||
|
||||
filename = filepath.Join("testdata/", filename)
|
||||
fi, err := os.Open(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Choose read based on whether the file is compressed or not.
|
||||
var r io.Reader
|
||||
if strings.HasSuffix(filename, ".bz2") {
|
||||
r = bzip2.NewReader(fi)
|
||||
} else {
|
||||
r = fi
|
||||
}
|
||||
defer fi.Close()
|
||||
|
||||
// Num of transaction store objects.
|
||||
var numItems uint32
|
||||
if err := binary.Read(r, binary.LittleEndian, &numItems); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
txStore := make(blockchain.TxStore)
|
||||
var uintBuf uint32
|
||||
for height := uint32(0); height < numItems; height++ {
|
||||
txD := blockchain.TxData{}
|
||||
|
||||
// Serialized transaction length.
|
||||
err = binary.Read(r, binary.LittleEndian, &uintBuf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
serializedTxLen := uintBuf
|
||||
if serializedTxLen > btcwire.MaxBlockPayload {
|
||||
return nil, fmt.Errorf("Read serialized transaction "+
|
||||
"length of %d is larger max allowed %d",
|
||||
serializedTxLen, btcwire.MaxBlockPayload)
|
||||
}
|
||||
|
||||
// Transaction.
|
||||
var msgTx btcwire.MsgTx
|
||||
err = msgTx.Deserialize(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
txD.Tx = btcutil.NewTx(&msgTx)
|
||||
|
||||
// Transaction hash.
|
||||
txHash, err := msgTx.TxSha()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
txD.Hash = &txHash
|
||||
|
||||
// Block height the transaction came from.
|
||||
err = binary.Read(r, binary.LittleEndian, &uintBuf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
txD.BlockHeight = int64(uintBuf)
|
||||
|
||||
// Num spent bits.
|
||||
err = binary.Read(r, binary.LittleEndian, &uintBuf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
numSpentBits := uintBuf
|
||||
numSpentBytes := numSpentBits / 8
|
||||
if numSpentBits%8 != 0 {
|
||||
numSpentBytes++
|
||||
}
|
||||
|
||||
// Packed spent bytes.
|
||||
spentBytes := make([]byte, numSpentBytes)
|
||||
_, err = io.ReadFull(r, spentBytes)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Populate spent data based on spent bits.
|
||||
txD.Spent = make([]bool, numSpentBits)
|
||||
for byteNum, spentByte := range spentBytes {
|
||||
for bit := 0; bit < 8; bit++ {
|
||||
if uint32((byteNum*8)+bit) < numSpentBits {
|
||||
if spentByte&(1<<uint(bit)) != 0 {
|
||||
txD.Spent[(byteNum*8)+bit] = true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
txStore[*txD.Hash] = &txD
|
||||
}
|
||||
|
||||
return txStore, nil
|
||||
}
|
362
blockchain/difficulty.go
Normal file
|
@@ -0,0 +1,362 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/big"
|
||||
"time"
|
||||
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
const (
|
||||
// targetTimespan is the desired amount of time that should elapse
|
||||
// before block difficulty requirement is examined to determine how
|
||||
// it should be changed in order to maintain the desired block
|
||||
// generation rate.
|
||||
targetTimespan = time.Hour * 24 * 14
|
||||
|
||||
// targetSpacing is the desired amount of time to generate each block.
|
||||
targetSpacing = time.Minute * 10
|
||||
|
||||
// BlocksPerRetarget is the number of blocks between each difficulty
|
||||
// retarget. It is calculated based on the desired block generation
|
||||
// rate.
|
||||
BlocksPerRetarget = int64(targetTimespan / targetSpacing)
|
||||
|
||||
// retargetAdjustmentFactor is the adjustment factor used to limit
|
||||
// the minimum and maximum amount of adjustment that can occur between
|
||||
// difficulty retargets.
|
||||
retargetAdjustmentFactor = 4
|
||||
|
||||
// minRetargetTimespan is the minimum amount of adjustment that can
|
||||
// occur between difficulty retargets. It equates to 25% of the
|
||||
// previous difficulty.
|
||||
minRetargetTimespan = int64(targetTimespan / retargetAdjustmentFactor)
|
||||
|
||||
// maxRetargetTimespan is the maximum amount of adjustment that can
|
||||
// occur between difficulty retargets. It equates to 400% of the
|
||||
// previous difficulty.
|
||||
maxRetargetTimespan = int64(targetTimespan * retargetAdjustmentFactor)
|
||||
)
|
||||
|
||||
var (
|
||||
// bigOne is 1 represented as a big.Int. It is defined here to avoid
|
||||
// the overhead of creating it multiple times.
|
||||
bigOne = big.NewInt(1)
|
||||
|
||||
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
|
||||
// the overhead of creating it multiple times.
|
||||
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
|
||||
)
|
||||
|
||||
// ShaHashToBig converts a btcwire.ShaHash into a big.Int that can be used to
|
||||
// perform math comparisons.
|
||||
func ShaHashToBig(hash *btcwire.ShaHash) *big.Int {
|
||||
// A ShaHash is in little-endian, but the big package wants the bytes
|
||||
// in big-endian. Reverse them. ShaHash.Bytes makes a copy, so it
|
||||
// is safe to modify the returned buffer.
|
||||
buf := hash.Bytes()
|
||||
blen := len(buf)
|
||||
for i := 0; i < blen/2; i++ {
|
||||
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
|
||||
}
|
||||
|
||||
return new(big.Int).SetBytes(buf)
|
||||
}
|
||||
|
||||
// CompactToBig converts a compact representation of a whole number N to an
|
||||
// unsigned 32-bit number. The representation is similar to IEEE754 floating
|
||||
// point numbers.
|
||||
//
|
||||
// Like IEEE754 floating point, there are three basic components: the sign,
|
||||
// the exponent, and the mantissa. They are broken out as follows:
|
||||
//
|
||||
// * the most significant 8 bits represent the unsigned base 256 exponent
|
||||
// * bit 23 (the 24th bit) represents the sign bit
|
||||
// * the least significant 23 bits represent the mantissa
|
||||
//
|
||||
// -------------------------------------------------
|
||||
// | Exponent | Sign | Mantissa |
|
||||
// -------------------------------------------------
|
||||
// | 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
|
||||
// -------------------------------------------------
|
||||
//
|
||||
// The formula to calculate N is:
|
||||
// N = (-1^sign) * mantissa * 256^(exponent-3)
|
||||
//
|
||||
// This compact form is only used in bitcoin to encode unsigned 256-bit numbers
|
||||
// which represent difficulty targets, thus there really is not a need for a
|
||||
// sign bit, but it is implemented here to stay consistent with bitcoind.
|
||||
func CompactToBig(compact uint32) *big.Int {
|
||||
// Extract the mantissa, sign bit, and exponent.
|
||||
mantissa := compact & 0x007fffff
|
||||
isNegative := compact&0x00800000 != 0
|
||||
exponent := uint(compact >> 24)
|
||||
|
||||
// Since the base for the exponent is 256, the exponent can be treated
|
||||
// as the number of bytes to represent the full 256-bit number. So,
|
||||
// treat the exponent as the number of bytes and shift the mantissa
|
||||
// right or left accordingly. This is equivalent to:
|
||||
// N = mantissa * 256^(exponent-3)
|
||||
var bn *big.Int
|
||||
if exponent <= 3 {
|
||||
mantissa >>= 8 * (3 - exponent)
|
||||
bn = big.NewInt(int64(mantissa))
|
||||
} else {
|
||||
bn = big.NewInt(int64(mantissa))
|
||||
bn.Lsh(bn, 8*(exponent-3))
|
||||
}
|
||||
|
||||
// Make it negative if the sign bit is set.
|
||||
if isNegative {
|
||||
bn = bn.Neg(bn)
|
||||
}
|
||||
|
||||
return bn
|
||||
}
|
||||
|
||||
// BigToCompact converts a whole number N to a compact representation using
|
||||
// an unsigned 32-bit number. The compact representation only provides 23 bits
|
||||
// of precision, so values larger than (2^23 - 1) only encode the most
|
||||
// significant digits of the number. See CompactToBig for details.
|
||||
func BigToCompact(n *big.Int) uint32 {
|
||||
// No need to do any work if it's zero.
|
||||
if n.Sign() == 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
// Since the base for the exponent is 256, the exponent can be treated
|
||||
// as the number of bytes. So, shift the number right or left
|
||||
// accordingly. This is equivalent to:
|
||||
// mantissa = mantissa / 256^(exponent-3)
|
||||
var mantissa uint32
|
||||
exponent := uint(len(n.Bytes()))
|
||||
if exponent <= 3 {
|
||||
mantissa = uint32(n.Bits()[0])
|
||||
mantissa <<= 8 * (3 - exponent)
|
||||
} else {
|
||||
// Use a copy to avoid modifying the caller's original number.
|
||||
tn := new(big.Int).Set(n)
|
||||
mantissa = uint32(tn.Rsh(tn, 8*(exponent-3)).Bits()[0])
|
||||
}
|
||||
|
||||
// When the mantissa already has the sign bit set, the number is too
|
||||
// large to fit into the available 23-bits, so divide the number by 256
|
||||
// and increment the exponent accordingly.
|
||||
if mantissa&0x00800000 != 0 {
|
||||
mantissa >>= 8
|
||||
exponent++
|
||||
}
|
||||
|
||||
// Pack the exponent, sign bit, and mantissa into an unsigned 32-bit
|
||||
// int and return it.
|
||||
compact := uint32(exponent<<24) | mantissa
|
||||
if n.Sign() < 0 {
|
||||
compact |= 0x00800000
|
||||
}
|
||||
return compact
|
||||
}
|
||||
|
||||
// CalcWork calculates a work value from difficulty bits. Bitcoin increases
|
||||
// the difficulty for generating a block by decreasing the value which the
|
||||
// generated hash must be less than. This difficulty target is stored in each
|
||||
// block header using a compact representation as described in the documentation
|
||||
// for CompactToBig. The main chain is selected by choosing the chain that has
|
||||
// the most proof of work (highest difficulty). Since a lower target difficulty
|
||||
// value equates to higher actual difficulty, the work value which will be
|
||||
// accumulated must be the inverse of the difficulty. Also, in order to avoid
|
||||
// potential division by zero and really small floating point numbers, the
|
||||
// result adds 1 to the denominator and multiplies the numerator by 2^256.
|
||||
func CalcWork(bits uint32) *big.Int {
|
||||
// Return a work value of zero if the passed difficulty bits represent
|
||||
// a negative number. Note this should not happen in practice with valid
|
||||
// blocks, but an invalid block could trigger it.
|
||||
difficultyNum := CompactToBig(bits)
|
||||
if difficultyNum.Sign() <= 0 {
|
||||
return big.NewInt(0)
|
||||
}
|
||||
|
||||
// (1 << 256) / (difficultyNum + 1)
|
||||
denominator := new(big.Int).Add(difficultyNum, bigOne)
|
||||
return new(big.Int).Div(oneLsh256, denominator)
|
||||
}
|
||||
|
||||
// calcEasiestDifficulty calculates the easiest possible difficulty that a block
|
||||
// can have given starting difficulty bits and a duration. It is mainly used to
|
||||
// verify that claimed proof of work by a block is sane as compared to a
|
||||
// known good checkpoint.
|
||||
func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration) uint32 {
|
||||
// Convert types used in the calculations below.
|
||||
durationVal := int64(duration)
|
||||
adjustmentFactor := big.NewInt(retargetAdjustmentFactor)
|
||||
|
||||
// The test network rules allow minimum difficulty blocks after more
|
||||
// than twice the desired amount of time needed to generate a block has
|
||||
// elapsed.
|
||||
if b.netParams.ResetMinDifficulty {
|
||||
if durationVal > int64(targetSpacing)*2 {
|
||||
return b.netParams.PowLimitBits
|
||||
}
|
||||
}
|
||||
|
||||
// Since easier difficulty equates to higher numbers, the easiest
|
||||
// difficulty for a given duration is the largest value possible given
|
||||
// the number of retargets for the duration and starting difficulty
|
||||
// multiplied by the max adjustment factor.
|
||||
newTarget := CompactToBig(bits)
|
||||
for durationVal > 0 && newTarget.Cmp(b.netParams.PowLimit) < 0 {
|
||||
newTarget.Mul(newTarget, adjustmentFactor)
|
||||
durationVal -= maxRetargetTimespan
|
||||
}
|
||||
|
||||
// Limit new value to the proof of work limit.
|
||||
if newTarget.Cmp(b.netParams.PowLimit) > 0 {
|
||||
newTarget.Set(b.netParams.PowLimit)
|
||||
}
|
||||
|
||||
return BigToCompact(newTarget)
|
||||
}
|
||||
|
||||
// findPrevTestNetDifficulty returns the difficulty of the previous block which
|
||||
// did not have the special testnet minimum difficulty rule applied.
|
||||
func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32, error) {
|
||||
// Search backwards through the chain for the last block without
|
||||
// the special rule applied.
|
||||
iterNode := startNode
|
||||
for iterNode != nil && iterNode.height%BlocksPerRetarget != 0 &&
|
||||
iterNode.bits == b.netParams.PowLimitBits {
|
||||
|
||||
// Get the previous block node. This function is used over
|
||||
// simply accessing iterNode.parent directly as it will
|
||||
// dynamically create previous block nodes as needed. This
|
||||
// helps allow only the pieces of the chain that are needed
|
||||
// to remain in memory.
|
||||
var err error
|
||||
iterNode, err = b.getPrevNodeFromNode(iterNode)
|
||||
if err != nil {
|
||||
log.Errorf("getPrevNodeFromNode: %v", err)
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
// Return the found difficulty or the minimum difficulty if no
|
||||
// appropriate block was found.
|
||||
lastBits := b.netParams.PowLimitBits
|
||||
if iterNode != nil {
|
||||
lastBits = iterNode.bits
|
||||
}
|
||||
return lastBits, nil
|
||||
}
|
||||
|
||||
// calcNextRequiredDifficulty calculates the required difficulty for the block
|
||||
// after the passed previous block node based on the difficulty retarget rules.
|
||||
// This function differs from the exported CalcNextRequiredDifficulty in that
|
||||
// the exported version uses the current best chain as the previous block node
|
||||
// while this function accepts any block node.
|
||||
func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTime time.Time) (uint32, error) {
|
||||
// Genesis block.
|
||||
if lastNode == nil {
|
||||
return b.netParams.PowLimitBits, nil
|
||||
}
|
||||
|
||||
// Return the previous block's difficulty requirements if this block
|
||||
// is not at a difficulty retarget interval.
|
||||
if (lastNode.height+1)%BlocksPerRetarget != 0 {
|
||||
// The test network rules allow minimum difficulty blocks after
|
||||
// more than twice the desired amount of time needed to generate
|
||||
// a block has elapsed.
|
||||
if b.netParams.ResetMinDifficulty {
|
||||
// Return minimum difficulty when more than twice the
|
||||
// desired amount of time needed to generate a block has
|
||||
// elapsed.
|
||||
allowMinTime := lastNode.timestamp.Add(targetSpacing * 2)
|
||||
if newBlockTime.After(allowMinTime) {
|
||||
return b.netParams.PowLimitBits, nil
|
||||
}
|
||||
|
||||
// The block was mined within the desired timeframe, so
|
||||
// return the difficulty for the last block which did
|
||||
// not have the special minimum difficulty rule applied.
|
||||
prevBits, err := b.findPrevTestNetDifficulty(lastNode)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return prevBits, nil
|
||||
}
|
||||
|
||||
// For the main network (or any unrecognized networks), simply
|
||||
// return the previous block's difficulty requirements.
|
||||
return lastNode.bits, nil
|
||||
}
|
||||
|
||||
// Get the block node at the previous retarget (targetTimespan days
|
||||
// worth of blocks).
|
||||
firstNode := lastNode
|
||||
for i := int64(0); i < BlocksPerRetarget-1 && firstNode != nil; i++ {
|
||||
// Get the previous block node. This function is used over
|
||||
// simply accessing firstNode.parent directly as it will
|
||||
// dynamically create previous block nodes as needed. This
|
||||
// helps allow only the pieces of the chain that are needed
|
||||
// to remain in memory.
|
||||
var err error
|
||||
firstNode, err = b.getPrevNodeFromNode(firstNode)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
if firstNode == nil {
|
||||
return 0, fmt.Errorf("unable to obtain previous retarget block")
|
||||
}
|
||||
|
||||
// Limit the amount of adjustment that can occur to the previous
|
||||
// difficulty.
|
||||
actualTimespan := lastNode.timestamp.UnixNano() - firstNode.timestamp.UnixNano()
|
||||
adjustedTimespan := actualTimespan
|
||||
if actualTimespan < minRetargetTimespan {
|
||||
adjustedTimespan = minRetargetTimespan
|
||||
} else if actualTimespan > maxRetargetTimespan {
|
||||
adjustedTimespan = maxRetargetTimespan
|
||||
}
|
||||
|
||||
// Calculate new target difficulty as:
|
||||
// currentDifficulty * (adjustedTimespan / targetTimespan)
|
||||
// The result uses integer division which means it will be slightly
|
||||
// rounded down. Bitcoind also uses integer division to calculate this
|
||||
// result.
|
||||
oldTarget := CompactToBig(lastNode.bits)
|
||||
newTarget := new(big.Int).Mul(oldTarget, big.NewInt(adjustedTimespan))
|
||||
newTarget.Div(newTarget, big.NewInt(int64(targetTimespan)))
|
||||
|
||||
// Limit new value to the proof of work limit.
|
||||
if newTarget.Cmp(b.netParams.PowLimit) > 0 {
|
||||
newTarget.Set(b.netParams.PowLimit)
|
||||
}
|
||||
|
||||
// Log new target difficulty and return it. The new target logging is
|
||||
// intentionally converting the bits back to a number instead of using
|
||||
// newTarget since conversion to the compact representation loses
|
||||
// precision.
|
||||
newTargetBits := BigToCompact(newTarget)
|
||||
log.Debugf("Difficulty retarget at block height %d", lastNode.height+1)
|
||||
log.Debugf("Old target %08x (%064x)", lastNode.bits, oldTarget)
|
||||
log.Debugf("New target %08x (%064x)", newTargetBits, CompactToBig(newTargetBits))
|
||||
log.Debugf("Actual timespan %v, adjusted timespan %v, target timespan %v",
|
||||
time.Duration(actualTimespan), time.Duration(adjustedTimespan),
|
||||
targetTimespan)
|
||||
|
||||
return newTargetBits, nil
|
||||
}
|
||||
|
||||
// CalcNextRequiredDifficulty calculates the required difficulty for the block
|
||||
// after the end of the current best chain based on the difficulty retarget
|
||||
// rules.
|
||||
//
|
||||
// This function is NOT safe for concurrent access.
|
||||
func (b *BlockChain) CalcNextRequiredDifficulty(timestamp time.Time) (uint32, error) {
|
||||
return b.calcNextRequiredDifficulty(b.bestChain, timestamp)
|
||||
}
|
71
blockchain/difficulty_test.go
Normal file
|
@@ -0,0 +1,71 @@
|
|||
// Copyright (c) 2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"math/big"
|
||||
"testing"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
)
|
||||
|
||||
func TestBigToCompact(t *testing.T) {
|
||||
tests := []struct {
|
||||
in int64
|
||||
out uint32
|
||||
}{
|
||||
{0, 0},
|
||||
{-1, 25231360},
|
||||
}
|
||||
|
||||
for x, test := range tests {
|
||||
n := big.NewInt(test.in)
|
||||
r := blockchain.BigToCompact(n)
|
||||
if r != test.out {
|
||||
t.Errorf("TestBigToCompact test #%d failed: got %d want %d\n",
|
||||
x, r, test.out)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCompactToBig(t *testing.T) {
|
||||
tests := []struct {
|
||||
in uint32
|
||||
out int64
|
||||
}{
|
||||
{10000000, 0},
|
||||
}
|
||||
|
||||
for x, test := range tests {
|
||||
n := blockchain.CompactToBig(test.in)
|
||||
want := big.NewInt(test.out)
|
||||
if n.Cmp(want) != 0 {
|
||||
t.Errorf("TestCompactToBig test #%d failed: got %d want %d\n",
|
||||
x, n.Int64(), want.Int64())
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCalcWork(t *testing.T) {
|
||||
tests := []struct {
|
||||
in uint32
|
||||
out int64
|
||||
}{
|
||||
{10000000, 0},
|
||||
}
|
||||
|
||||
for x, test := range tests {
|
||||
bits := uint32(test.in)
|
||||
|
||||
r := blockchain.CalcWork(bits)
|
||||
if r.Int64() != test.out {
|
||||
t.Errorf("TestCalcWork test #%d failed: got %v want %d\n",
|
||||
x, r.Int64(), test.out)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
81
blockchain/doc.go
Normal file
|
@@ -0,0 +1,81 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
/*
|
||||
Package blockchain implements bitcoin block handling and chain selection rules.
|
||||
|
||||
The bitcoin block handling and chain selection rules are an integral, and quite
|
||||
likely the most important, part of bitcoin. Unfortunately, at the time of
|
||||
this writing, these rules are also largely undocumented and had to be
|
||||
ascertained from the bitcoind source code. At its core, bitcoin is a
|
||||
distributed consensus of which blocks are valid and which ones will comprise the
|
||||
main block chain (public ledger) that ultimately determines accepted
|
||||
transactions, so it is extremely important that fully validating nodes agree on
|
||||
all rules.
|
||||
|
||||
At a high level, this package provides support for inserting new blocks into
|
||||
the block chain according to the aforementioned rules. It includes
|
||||
functionality such as rejecting duplicate blocks, ensuring blocks and
|
||||
transactions follow all rules, orphan handling, and best chain selection along
|
||||
with reorganization.
|
||||
|
||||
Since this package does not deal with other bitcoin specifics such as network
|
||||
communication or wallets, it provides a notification system which gives the
|
||||
caller a high level of flexibility in how they want to react to certain events
|
||||
such as orphan blocks which need their parents requested and newly connected
|
||||
main chain blocks which might result in wallet updates.
|
||||
|
||||
Bitcoin Chain Processing Overview
|
||||
|
||||
Before a block is allowed into the block chain, it must go through an intensive
|
||||
series of validation rules. The following list serves as a general outline of
|
||||
those rules to provide some intuition into what is going on under the hood, but
|
||||
is by no means exhaustive:
|
||||
|
||||
- Reject duplicate blocks
|
||||
- Perform a series of sanity checks on the block and its transactions such as
|
||||
verifying proof of work, timestamps, number and character of transactions,
|
||||
transaction amounts, script complexity, and merkle root calculations
|
||||
- Compare the block against predetermined checkpoints for expected timestamps
|
||||
and difficulty based on elapsed time since the checkpoint
|
||||
- Save the most recent orphan blocks for a limited time in case their parent
|
||||
blocks become available
|
||||
- Stop processing if the block is an orphan as the rest of the processing
|
||||
depends on the block's position within the block chain
|
||||
- Perform a series of more thorough checks that depend on the block's position
|
||||
within the block chain such as verifying block difficulties adhere to
|
||||
difficulty retarget rules, timestamps are after the median of the last
|
||||
several blocks, all transactions are finalized, checkpoint blocks match, and
|
||||
block versions are in line with the previous blocks
|
||||
- Determine how the block fits into the chain and perform different actions
|
||||
accordingly in order to ensure any side chains which have higher difficulty
|
||||
than the main chain become the new main chain
|
||||
- When a block is being connected to the main chain (either through
|
||||
reorganization of a side chain to the main chain or just extending the
|
||||
main chain), perform further checks on the block's transactions such as
|
||||
verifying transaction duplicates, script complexity for the combination of
|
||||
connected scripts, coinbase maturity, double spends, and connected
|
||||
transaction values
|
||||
- Run the transaction scripts to verify the spender is allowed to spend the
|
||||
coins
|
||||
- Insert the block into the block database
|
||||
|
||||
Errors
|
||||
|
||||
Errors returned by this package are either the raw errors provided by underlying
|
||||
calls or of type blockchain.RuleError. This allows the caller to differentiate
|
||||
between unexpected errors, such as database errors, versus errors due to rule
|
||||
violations through type assertions. In addition, callers can programmatically
|
||||
determine the specific rule violation by examining the ErrorCode field of the
|
||||
type asserted blockchain.RuleError.
|
||||
|
||||
Bitcoin Improvement Proposals
|
||||
|
||||
This package includes spec changes outlined by the following BIPs:
|
||||
|
||||
BIP0016 (https://en.bitcoin.it/wiki/BIP_0016)
|
||||
BIP0030 (https://en.bitcoin.it/wiki/BIP_0030)
|
||||
BIP0034 (https://en.bitcoin.it/wiki/BIP_0034)
|
||||
*/
|
||||
package blockchain
|
251
blockchain/error.go
Normal file
|
@@ -0,0 +1,251 @@
|
|||
// Copyright (c) 2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// ErrorCode identifies a kind of error.
|
||||
type ErrorCode int
|
||||
|
||||
// These constants are used to identify a specific RuleError.
|
||||
const (
|
||||
// ErrDuplicateBlock indicates a block with the same hash already
|
||||
// exists.
|
||||
ErrDuplicateBlock ErrorCode = iota
|
||||
|
||||
// ErrBlockTooBig indicates the serialized block size exceeds the
|
||||
// maximum allowed size.
|
||||
ErrBlockTooBig
|
||||
|
||||
// ErrBlockVersionTooOld indicates the block version is too old and is
|
||||
// no longer accepted since the majority of the network has upgraded
|
||||
// to a newer version.
|
||||
ErrBlockVersionTooOld
|
||||
|
||||
// ErrInvalidTime indicates the time in the passed block has a precision
|
||||
// that is more than one second. The chain consensus rules require
|
||||
// timestamps to have a maximum precision of one second.
|
||||
ErrInvalidTime
|
||||
|
||||
// ErrTimeTooOld indicates the time is either before the median time of
|
||||
// the last several blocks per the chain consensus rules or prior to the
|
||||
// most recent checkpoint.
|
||||
ErrTimeTooOld
|
||||
|
||||
// ErrTimeTooNew indicates the time is too far in the future as compared
|
||||
// to the current time.
|
||||
ErrTimeTooNew
|
||||
|
||||
// ErrDifficultyTooLow indicates the difficulty for the block is lower
|
||||
// than the difficulty required by the most recent checkpoint.
|
||||
ErrDifficultyTooLow
|
||||
|
||||
// ErrUnexpectedDifficulty indicates specified bits do not align with
|
||||
// the expected value either because it doesn't match the calculated
|
||||
// value based on difficulty retarget rules or it is out of the valid
|
||||
// range.
|
||||
ErrUnexpectedDifficulty
|
||||
|
||||
// ErrHighHash indicates the block does not hash to a value which is
|
||||
// lower than the required target difficulty.
|
||||
ErrHighHash
|
||||
|
||||
// ErrBadMerkleRoot indicates the calculated merkle root does not match
|
||||
// the expected value.
|
||||
ErrBadMerkleRoot
|
||||
|
||||
// ErrBadCheckpoint indicates a block that is expected to be at a
|
||||
// checkpoint height does not match the expected one.
|
||||
ErrBadCheckpoint
|
||||
|
||||
// ErrForkTooOld indicates a block is attempting to fork the block chain
|
||||
// before the most recent checkpoint.
|
||||
ErrForkTooOld
|
||||
|
||||
// ErrCheckpointTimeTooOld indicates a block has a timestamp before the
|
||||
// most recent checkpoint.
|
||||
ErrCheckpointTimeTooOld
|
||||
|
||||
// ErrNoTransactions indicates the block does not have at least one
|
||||
// transaction. A valid block must have at least the coinbase
|
||||
// transaction.
|
||||
ErrNoTransactions
|
||||
|
||||
// ErrTooManyTransactions indicates the block has more transactions than
|
||||
// are allowed.
|
||||
ErrTooManyTransactions
|
||||
|
||||
// ErrNoTxInputs indicates a transaction does not have any inputs. A
|
||||
// valid transaction must have at least one input.
|
||||
ErrNoTxInputs
|
||||
|
||||
// ErrNoTxOutputs indicates a transaction does not have any outputs. A
|
||||
// valid transaction must have at least one output.
|
||||
ErrNoTxOutputs
|
||||
|
||||
// ErrTxTooBig indicates a transaction exceeds the maximum allowed size
|
||||
// when serialized.
|
||||
ErrTxTooBig
|
||||
|
||||
// ErrBadTxOutValue indicates an output value for a transaction is
|
||||
// invalid in some way such as being out of range.
|
||||
ErrBadTxOutValue
|
||||
|
||||
// ErrDuplicateTxInputs indicates a transaction references the same
|
||||
// input more than once.
|
||||
ErrDuplicateTxInputs
|
||||
|
||||
// ErrBadTxInput indicates a transaction input is invalid in some way
|
||||
// such as referencing a previous transaction outpoint which is out of
|
||||
// range or not referencing one at all.
|
||||
ErrBadTxInput
|
||||
|
||||
// ErrMissingTx indicates a transaction referenced by an input is
|
||||
// missing.
|
||||
ErrMissingTx
|
||||
|
||||
// ErrUnfinalizedTx indicates a transaction has not been finalized.
|
||||
// A valid block may only contain finalized transactions.
|
||||
ErrUnfinalizedTx
|
||||
|
||||
// ErrDuplicateTx indicates a block contains an identical transaction
|
||||
// (or at least two transactions which hash to the same value). A
|
||||
// valid block may only contain unique transactions.
|
||||
ErrDuplicateTx
|
||||
|
||||
// ErrOverwriteTx indicates a block contains a transaction that has
|
||||
// the same hash as a previous transaction which has not been fully
|
||||
// spent.
|
||||
ErrOverwriteTx
|
||||
|
||||
// ErrImmatureSpend indicates a transaction is attempting to spend a
|
||||
// coinbase that has not yet reached the required maturity.
|
||||
ErrImmatureSpend
|
||||
|
||||
// ErrDoubleSpend indicates a transaction is attempting to spend coins
|
||||
// that have already been spent.
|
||||
ErrDoubleSpend
|
||||
|
||||
// ErrSpendTooHigh indicates a transaction is attempting to spend more
|
||||
// value than the sum of all of its inputs.
|
||||
ErrSpendTooHigh
|
||||
|
||||
// ErrBadFees indicates the total fees for a block are invalid due to
|
||||
// exceeding the maximum possible value.
|
||||
ErrBadFees
|
||||
|
||||
// ErrTooManySigOps indicates the total number of signature operations
|
||||
// for a transaction or block exceed the maximum allowed limits.
|
||||
ErrTooManySigOps
|
||||
|
||||
// ErrFirstTxNotCoinbase indicates the first transaction in a block
|
||||
// is not a coinbase transaction.
|
||||
ErrFirstTxNotCoinbase
|
||||
|
||||
// ErrMultipleCoinbases indicates a block contains more than one
|
||||
// coinbase transaction.
|
||||
ErrMultipleCoinbases
|
||||
|
||||
// ErrBadCoinbaseScriptLen indicates the length of the signature script
|
||||
// for a coinbase transaction is not within the valid range.
|
||||
ErrBadCoinbaseScriptLen
|
||||
|
||||
// ErrBadCoinbaseValue indicates the amount of a coinbase value does
|
||||
// not match the expected value of the subsidy plus the sum of all fees.
|
||||
ErrBadCoinbaseValue
|
||||
|
||||
// ErrMissingCoinbaseHeight indicates the coinbase transaction for a
|
||||
// block does not start with the serialized block height as
|
||||
// required for version 2 and higher blocks.
|
||||
ErrMissingCoinbaseHeight
|
||||
|
||||
// ErrBadCoinbaseHeight indicates the serialized block height in the
|
||||
// coinbase transaction for version 2 and higher blocks does not match
|
||||
// the expected value.
|
||||
ErrBadCoinbaseHeight
|
||||
|
||||
// ErrScriptMalformed indicates a transaction script is malformed in
|
||||
// some way. For example, it might be longer than the maximum allowed
|
||||
// length or fail to parse.
|
||||
ErrScriptMalformed
|
||||
|
||||
// ErrScriptValidation indicates the result of executing a transaction
|
||||
// script failed. The error covers any failure when executing scripts
|
||||
// such as signature verification failures and execution past the end of
|
||||
// the stack.
|
||||
ErrScriptValidation
|
||||
)
|
||||
|
||||
// Map of ErrorCode values back to their constant names for pretty printing.
|
||||
var errorCodeStrings = map[ErrorCode]string{
|
||||
ErrDuplicateBlock: "ErrDuplicateBlock",
|
||||
ErrBlockTooBig: "ErrBlockTooBig",
|
||||
ErrBlockVersionTooOld: "ErrBlockVersionTooOld",
|
||||
ErrInvalidTime: "ErrInvalidTime",
|
||||
ErrTimeTooOld: "ErrTimeTooOld",
|
||||
ErrTimeTooNew: "ErrTimeTooNew",
|
||||
ErrDifficultyTooLow: "ErrDifficultyTooLow",
|
||||
ErrUnexpectedDifficulty: "ErrUnexpectedDifficulty",
|
||||
ErrHighHash: "ErrHighHash",
|
||||
ErrBadMerkleRoot: "ErrBadMerkleRoot",
|
||||
ErrBadCheckpoint: "ErrBadCheckpoint",
|
||||
ErrForkTooOld: "ErrForkTooOld",
|
||||
ErrCheckpointTimeTooOld: "ErrCheckpointTimeTooOld",
|
||||
ErrNoTransactions: "ErrNoTransactions",
|
||||
ErrTooManyTransactions: "ErrTooManyTransactions",
|
||||
ErrNoTxInputs: "ErrNoTxInputs",
|
||||
ErrNoTxOutputs: "ErrNoTxOutputs",
|
||||
ErrTxTooBig: "ErrTxTooBig",
|
||||
ErrBadTxOutValue: "ErrBadTxOutValue",
|
||||
ErrDuplicateTxInputs: "ErrDuplicateTxInputs",
|
||||
ErrBadTxInput: "ErrBadTxInput",
|
||||
ErrMissingTx: "ErrMissingTx",
|
||||
ErrUnfinalizedTx: "ErrUnfinalizedTx",
|
||||
ErrDuplicateTx: "ErrDuplicateTx",
|
||||
ErrOverwriteTx: "ErrOverwriteTx",
|
||||
ErrImmatureSpend: "ErrImmatureSpend",
|
||||
ErrDoubleSpend: "ErrDoubleSpend",
|
||||
ErrSpendTooHigh: "ErrSpendTooHigh",
|
||||
ErrBadFees: "ErrBadFees",
|
||||
ErrTooManySigOps: "ErrTooManySigOps",
|
||||
ErrFirstTxNotCoinbase: "ErrFirstTxNotCoinbase",
|
||||
ErrMultipleCoinbases: "ErrMultipleCoinbases",
|
||||
ErrBadCoinbaseScriptLen: "ErrBadCoinbaseScriptLen",
|
||||
ErrBadCoinbaseValue: "ErrBadCoinbaseValue",
|
||||
ErrMissingCoinbaseHeight: "ErrMissingCoinbaseHeight",
|
||||
ErrBadCoinbaseHeight: "ErrBadCoinbaseHeight",
|
||||
ErrScriptMalformed: "ErrScriptMalformed",
|
||||
ErrScriptValidation: "ErrScriptValidation",
|
||||
}
|
||||
|
||||
// String returns the ErrorCode as a human-readable name.
|
||||
func (e ErrorCode) String() string {
|
||||
if s := errorCodeStrings[e]; s != "" {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("Unknown ErrorCode (%d)", int(e))
|
||||
}
|
||||
|
||||
// RuleError identifies a rule violation. It is used to indicate that
|
||||
// processing of a block or transaction failed due to one of the many validation
|
||||
// rules. The caller can use type assertions to determine if a failure was
|
||||
// specifically due to a rule violation and access the ErrorCode field to
|
||||
// ascertain the specific reason for the rule violation.
|
||||
type RuleError struct {
|
||||
ErrorCode ErrorCode // Describes the kind of error
|
||||
Description string // Human readable description of the issue
|
||||
}
|
||||
|
||||
// Error satisfies the error interface and prints human-readable errors.
|
||||
func (e RuleError) Error() string {
|
||||
return e.Description
|
||||
}
|
||||
|
||||
// ruleError creates a RuleError given a set of arguments.
|
||||
func ruleError(c ErrorCode, desc string) RuleError {
|
||||
return RuleError{ErrorCode: c, Description: desc}
|
||||
}
|
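The error_test.go file below exercises the stringer and Error output; as a complement, here is a hedged sketch (the helper name and log handling are hypothetical) of the caller-side pattern the package documentation describes: type-assert to RuleError to separate consensus rule violations from unexpected errors, then switch on ErrorCode.

```go
package main

import (
	"log"

	"github.com/btcsuite/btcd/blockchain"
)

// describeProcessErr is a hypothetical helper showing how a caller can tell
// rule violations apart from unexpected failures returned by ProcessBlock.
func describeProcessErr(err error) {
	if err == nil {
		return
	}
	if rErr, ok := err.(blockchain.RuleError); ok {
		// A consensus rule was violated; the code says which one.
		switch rErr.ErrorCode {
		case blockchain.ErrDuplicateBlock:
			log.Printf("ignoring duplicate block: %v", rErr)
		case blockchain.ErrDoubleSpend:
			log.Printf("rejecting double spend: %v", rErr)
		default:
			log.Printf("block rejected (%v): %v", rErr.ErrorCode, rErr)
		}
		return
	}
	// Anything else (e.g. a database failure) is not a rule violation.
	log.Printf("unexpected error while processing block: %v", err)
}

func main() {
	describeProcessErr(blockchain.RuleError{
		ErrorCode:   blockchain.ErrDoubleSpend,
		Description: "transaction spends an already spent output",
	})
}
```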
97
blockchain/error_test.go
Normal file
|
@@ -0,0 +1,97 @@
|
|||
// Copyright (c) 2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
)
|
||||
|
||||
// TestErrorCodeStringer tests the stringized output for the ErrorCode type.
|
||||
func TestErrorCodeStringer(t *testing.T) {
|
||||
tests := []struct {
|
||||
in blockchain.ErrorCode
|
||||
want string
|
||||
}{
|
||||
{blockchain.ErrDuplicateBlock, "ErrDuplicateBlock"},
|
||||
{blockchain.ErrBlockTooBig, "ErrBlockTooBig"},
|
||||
{blockchain.ErrBlockVersionTooOld, "ErrBlockVersionTooOld"},
|
||||
{blockchain.ErrInvalidTime, "ErrInvalidTime"},
|
||||
{blockchain.ErrTimeTooOld, "ErrTimeTooOld"},
|
||||
{blockchain.ErrTimeTooNew, "ErrTimeTooNew"},
|
||||
{blockchain.ErrDifficultyTooLow, "ErrDifficultyTooLow"},
|
||||
{blockchain.ErrUnexpectedDifficulty, "ErrUnexpectedDifficulty"},
|
||||
{blockchain.ErrHighHash, "ErrHighHash"},
|
||||
{blockchain.ErrBadMerkleRoot, "ErrBadMerkleRoot"},
|
||||
{blockchain.ErrBadCheckpoint, "ErrBadCheckpoint"},
|
||||
{blockchain.ErrForkTooOld, "ErrForkTooOld"},
|
||||
{blockchain.ErrCheckpointTimeTooOld, "ErrCheckpointTimeTooOld"},
|
||||
{blockchain.ErrNoTransactions, "ErrNoTransactions"},
|
||||
{blockchain.ErrTooManyTransactions, "ErrTooManyTransactions"},
|
||||
{blockchain.ErrNoTxInputs, "ErrNoTxInputs"},
|
||||
{blockchain.ErrNoTxOutputs, "ErrNoTxOutputs"},
|
||||
{blockchain.ErrTxTooBig, "ErrTxTooBig"},
|
||||
{blockchain.ErrBadTxOutValue, "ErrBadTxOutValue"},
|
||||
{blockchain.ErrDuplicateTxInputs, "ErrDuplicateTxInputs"},
|
||||
{blockchain.ErrBadTxInput, "ErrBadTxInput"},
|
||||
{blockchain.ErrBadCheckpoint, "ErrBadCheckpoint"},
|
||||
{blockchain.ErrMissingTx, "ErrMissingTx"},
|
||||
{blockchain.ErrUnfinalizedTx, "ErrUnfinalizedTx"},
|
||||
{blockchain.ErrDuplicateTx, "ErrDuplicateTx"},
|
||||
{blockchain.ErrOverwriteTx, "ErrOverwriteTx"},
|
||||
{blockchain.ErrImmatureSpend, "ErrImmatureSpend"},
|
||||
{blockchain.ErrDoubleSpend, "ErrDoubleSpend"},
|
||||
{blockchain.ErrSpendTooHigh, "ErrSpendTooHigh"},
|
||||
{blockchain.ErrBadFees, "ErrBadFees"},
|
||||
{blockchain.ErrTooManySigOps, "ErrTooManySigOps"},
|
||||
{blockchain.ErrFirstTxNotCoinbase, "ErrFirstTxNotCoinbase"},
|
||||
{blockchain.ErrMultipleCoinbases, "ErrMultipleCoinbases"},
|
||||
{blockchain.ErrBadCoinbaseScriptLen, "ErrBadCoinbaseScriptLen"},
|
||||
{blockchain.ErrBadCoinbaseValue, "ErrBadCoinbaseValue"},
|
||||
{blockchain.ErrMissingCoinbaseHeight, "ErrMissingCoinbaseHeight"},
|
||||
{blockchain.ErrBadCoinbaseHeight, "ErrBadCoinbaseHeight"},
|
||||
{blockchain.ErrScriptMalformed, "ErrScriptMalformed"},
|
||||
{blockchain.ErrScriptValidation, "ErrScriptValidation"},
|
||||
{0xffff, "Unknown ErrorCode (65535)"},
|
||||
}
|
||||
|
||||
t.Logf("Running %d tests", len(tests))
|
||||
for i, test := range tests {
|
||||
result := test.in.String()
|
||||
if result != test.want {
|
||||
t.Errorf("String #%d\n got: %s want: %s", i, result,
|
||||
test.want)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestRuleError tests the error output for the RuleError type.
|
||||
func TestRuleError(t *testing.T) {
|
||||
tests := []struct {
|
||||
in blockchain.RuleError
|
||||
want string
|
||||
}{
|
||||
{
|
||||
blockchain.RuleError{Description: "duplicate block"},
|
||||
"duplicate block",
|
||||
},
|
||||
{
|
||||
blockchain.RuleError{Description: "human-readable error"},
|
||||
"human-readable error",
|
||||
},
|
||||
}
|
||||
|
||||
t.Logf("Running %d tests", len(tests))
|
||||
for i, test := range tests {
|
||||
result := test.in.Error()
|
||||
if result != test.want {
|
||||
t.Errorf("Error #%d\n got: %s want: %s", i, result,
|
||||
test.want)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
101
blockchain/example_test.go
Normal file
|
@@ -0,0 +1,101 @@
|
|||
// Copyright (c) 2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/big"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
"github.com/btcsuite/btcd/database"
|
||||
_ "github.com/btcsuite/btcd/database/memdb"
|
||||
"github.com/btcsuite/btcnet"
|
||||
"github.com/btcsuite/btcutil"
|
||||
)
|
||||
|
||||
// This example demonstrates how to create a new chain instance and use
|
||||
// ProcessBlock to attempt to add a block to the chain. As the package
|
||||
// overview documentation describes, this includes all of the Bitcoin consensus
|
||||
// rules. This example intentionally attempts to insert a duplicate genesis
|
||||
// block to illustrate how an invalid block is handled.
|
||||
func ExampleBlockChain_ProcessBlock() {
|
||||
// Create a new database to store the accepted blocks into. Typically
|
||||
// this would be opening an existing database and would not use memdb
|
||||
// which is a memory-only database backend, but we create a new db
|
||||
// here so this is a complete working example.
|
||||
db, err := database.CreateDB("memdb")
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to create database: %v\n", err)
|
||||
return
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
// Insert the main network genesis block. This is part of the initial
|
||||
// database setup. Like above, this typically would not be needed when
|
||||
// opening an existing database.
|
||||
genesisBlock := btcutil.NewBlock(btcnet.MainNetParams.GenesisBlock)
|
||||
_, err = db.InsertBlock(genesisBlock)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to insert genesis block: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// Create a new BlockChain instance using the underlying database for
|
||||
// the main bitcoin network and ignore notifications.
|
||||
chain := blockchain.New(db, &btcnet.MainNetParams, nil)
|
||||
|
||||
// Create a new median time source that is required by the upcoming
|
||||
// call to ProcessBlock. Ordinarily this would also add time values
|
||||
// obtained from other peers on the network so the local time is
|
||||
// adjusted to be in agreement with other peers.
|
||||
timeSource := blockchain.NewMedianTime()
|
||||
|
||||
// Process a block. For this example, we are going to intentionally
|
||||
// cause an error by trying to process the genesis block which already
|
||||
// exists.
|
||||
isOrphan, err := chain.ProcessBlock(genesisBlock, timeSource, blockchain.BFNone)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to process block: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Printf("Block accepted. Is it an orphan?: %v", isOrphan)
|
||||
|
||||
// Output:
|
||||
// Failed to process block: already have block 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
|
||||
}
|
||||
|
||||
// This example demonstrates how to convert the compact "bits" in a block header
|
||||
// which represent the target difficulty to a big integer and display it using
|
||||
// the typical hex notation.
|
||||
func ExampleCompactToBig() {
|
||||
// Convert the bits from block 300000 in the main block chain.
|
||||
bits := uint32(419465580)
|
||||
targetDifficulty := blockchain.CompactToBig(bits)
|
||||
|
||||
// Display it in hex.
|
||||
fmt.Printf("%064x\n", targetDifficulty.Bytes())
|
||||
|
||||
// Output:
|
||||
// 0000000000000000896c00000000000000000000000000000000000000000000
|
||||
}
|
||||
|
||||
// This example demonstrates how to convert a target difficulty into the compact
|
||||
// "bits" in a block header which represent that target difficulty .
|
||||
func ExampleBigToCompact() {
|
||||
// Convert the target difficulty from block 300000 in the main block
|
||||
// chain to compact form.
|
||||
t := "0000000000000000896c00000000000000000000000000000000000000000000"
|
||||
targetDifficulty, success := new(big.Int).SetString(t, 16)
|
||||
if !success {
|
||||
fmt.Println("invalid target difficulty")
|
||||
return
|
||||
}
|
||||
bits := blockchain.BigToCompact(targetDifficulty)
|
||||
|
||||
fmt.Println(bits)
|
||||
|
||||
// Output:
|
||||
// 419465580
|
||||
}
|
44
blockchain/internal_test.go
Normal file
|
@@ -0,0 +1,44 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
/*
|
||||
This test file is part of the blockchain package rather than the
|
||||
blockchain_test package so it can bridge access to the internals to properly
|
||||
test cases which are either not possible or can't reliably be tested via the
|
||||
public interface. The functions are only exported while the tests are being
|
||||
run.
|
||||
*/
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"sort"
|
||||
"time"
|
||||
)
|
||||
|
||||
// TstSetCoinbaseMaturity makes the ability to set the coinbase maturity
|
||||
// available to the test package.
|
||||
func TstSetCoinbaseMaturity(maturity int64) {
|
||||
coinbaseMaturity = maturity
|
||||
}
|
||||
|
||||
// TstTimeSorter makes the internal timeSorter type available to the test
|
||||
// package.
|
||||
func TstTimeSorter(times []time.Time) sort.Interface {
|
||||
return timeSorter(times)
|
||||
}
|
||||
|
||||
// TstCheckSerializedHeight makes the internal checkSerializedHeight function
|
||||
// available to the test package.
|
||||
var TstCheckSerializedHeight = checkSerializedHeight
|
||||
|
||||
// TstSetMaxMedianTimeEntries makes the ability to set the maximum number of
|
||||
// median time entries available to the test package.
|
||||
func TstSetMaxMedianTimeEntries(val int) {
|
||||
maxMedianTimeEntries = val
|
||||
}
|
||||
|
||||
// TstCheckBlockScripts makes the internal checkBlockScripts function available
|
||||
// to the test package.
|
||||
var TstCheckBlockScripts = checkBlockScripts
|
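A hedged sketch (the test name and maturity values are hypothetical) of how these Tst* bridges are intended to be used from the external blockchain_test package:

```go
package blockchain_test

import (
	"testing"

	"github.com/btcsuite/btcd/blockchain"
)

// TestSpendYoungCoinbase is a hypothetical test illustrating the bridge
// functions: lower the coinbase maturity so the test does not need to build
// 100 confirmation blocks, and restore the default afterwards.
func TestSpendYoungCoinbase(t *testing.T) {
	blockchain.TstSetCoinbaseMaturity(1)
	defer blockchain.TstSetCoinbaseMaturity(100)

	// ... construct and process blocks that spend a one-confirmation
	// coinbase here ...
}
```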
71
blockchain/log.go
Normal file
|
@@ -0,0 +1,71 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
|
||||
"github.com/btcsuite/btclog"
|
||||
)
|
||||
|
||||
// log is a logger that is initialized with no output filters. This
|
||||
// means the package will not perform any logging by default until the caller
|
||||
// requests it.
|
||||
var log btclog.Logger
|
||||
|
||||
// The default amount of logging is none.
|
||||
func init() {
|
||||
DisableLog()
|
||||
}
|
||||
|
||||
// DisableLog disables all library log output. Logging output is disabled
|
||||
// by default until either UseLogger or SetLogWriter are called.
|
||||
func DisableLog() {
|
||||
log = btclog.Disabled
|
||||
}
|
||||
|
||||
// UseLogger uses a specified Logger to output package logging info.
|
||||
// This should be used in preference to SetLogWriter if the caller is also
|
||||
// using btclog.
|
||||
func UseLogger(logger btclog.Logger) {
|
||||
log = logger
|
||||
}
|
||||
|
||||
// SetLogWriter uses a specified io.Writer to output package logging info.
|
||||
// This allows a caller to direct package logging output without needing a
|
||||
// dependency on seelog. If the caller is also using btclog, UseLogger should
|
||||
// be used instead.
|
||||
func SetLogWriter(w io.Writer, level string) error {
|
||||
if w == nil {
|
||||
return errors.New("nil writer")
|
||||
}
|
||||
|
||||
lvl, ok := btclog.LogLevelFromString(level)
|
||||
if !ok {
|
||||
return errors.New("invalid log level")
|
||||
}
|
||||
|
||||
l, err := btclog.NewLoggerFromWriter(w, lvl)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
UseLogger(l)
|
||||
return nil
|
||||
}
|
||||
|
||||
// logClosure is a closure that can be printed with %v to be used to
|
||||
// generate expensive-to-create data for a detailed log level and avoid doing
|
||||
// the work if the data isn't printed.
|
||||
type logClosure func() string
|
||||
|
||||
func (c logClosure) String() string {
|
||||
return c()
|
||||
}
|
||||
|
||||
func newLogClosure(c func() string) logClosure {
|
||||
return logClosure(c)
|
||||
}
|
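A minimal sketch of turning on the package's otherwise-silent logging with SetLogWriter; the level string is assumed to follow btclog's level names (for example "debug"):

```go
package main

import (
	"os"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	// Send the package's log output (difficulty retargets, time offsets,
	// orphan handling, ...) to stdout at the debug level.
	if err := blockchain.SetLogWriter(os.Stdout, "debug"); err != nil {
		panic(err)
	}
	defer blockchain.DisableLog()

	// ... create a BlockChain and process blocks as usual ...
}
```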
218
blockchain/mediantime.go
Normal file
|
@@ -0,0 +1,218 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"math"
|
||||
"sort"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
// maxAllowedOffsetSeconds is the maximum number of seconds in either
|
||||
// direction that local clock will be adjusted. When the median time
|
||||
// of the network is outside of this range, no offset will be applied.
|
||||
maxAllowedOffsetSecs = 70 * 60 // 1 hour 10 minutes
|
||||
|
||||
// similarTimeSecs is the number of seconds in either direction from the
|
||||
// local clock that is used to determine that it is likely wrong and
|
||||
// hence to show a warning.
|
||||
similarTimeSecs = 5 * 60 // 5 minutes
|
||||
)
|
||||
|
||||
var (
|
||||
// maxMedianTimeEntries is the maximum number of entries allowed in the
|
||||
// median time data. This is a variable as opposed to a constant so the
|
||||
// test code can modify it.
|
||||
maxMedianTimeEntries = 200
|
||||
)
|
||||
|
||||
// MedianTimeSource provides a mechanism to add several time samples which are
|
||||
// used to determine a median time which is then used as an offset to the local
|
||||
// clock.
|
||||
type MedianTimeSource interface {
|
||||
// AdjustedTime returns the current time adjusted by the median time
|
||||
// offset as calculated from the time samples added by AddTimeSample.
|
||||
AdjustedTime() time.Time
|
||||
|
||||
// AddTimeSample adds a time sample that is used when determining the
|
||||
// median time of the added samples.
|
||||
AddTimeSample(id string, timeVal time.Time)
|
||||
|
||||
// Offset returns the number of seconds to adjust the local clock based
|
||||
// upon the median of the time samples added by AddTimeSample.
|
||||
Offset() time.Duration
|
||||
}
|
||||
|
||||
// int64Sorter implements sort.Interface to allow a slice of 64-bit integers to
|
||||
// be sorted.
|
||||
type int64Sorter []int64
|
||||
|
||||
// Len returns the number of 64-bit integers in the slice. It is part of the
|
||||
// sort.Interface implementation.
|
||||
func (s int64Sorter) Len() int {
|
||||
return len(s)
|
||||
}
|
||||
|
||||
// Swap swaps the 64-bit integers at the passed indices. It is part of the
|
||||
// sort.Interface implementation.
|
||||
func (s int64Sorter) Swap(i, j int) {
|
||||
s[i], s[j] = s[j], s[i]
|
||||
}
|
||||
|
||||
// Less returns whether the 64-bit integer with index i should sort before the
|
||||
// 64-bit integer with index j. It is part of the sort.Interface
|
||||
// implementation.
|
||||
func (s int64Sorter) Less(i, j int) bool {
|
||||
return s[i] < s[j]
|
||||
}
|
||||
|
||||
// medianTime provides an implementation of the MedianTimeSource interface.
|
||||
// It is limited to maxMedianTimeEntries and includes the same buggy behavior as
|
||||
// the time offset mechanism in Bitcoin Core. This is necessary because it is
|
||||
// used in the consensus code.
|
||||
type medianTime struct {
|
||||
mtx sync.Mutex
|
||||
knownIDs map[string]struct{}
|
||||
offsets []int64
|
||||
offsetSecs int64
|
||||
invalidTimeChecked bool
|
||||
}
|
||||
|
||||
// Ensure the medianTime type implements the MedianTimeSource interface.
|
||||
var _ MedianTimeSource = (*medianTime)(nil)
|
||||
|
||||
// AdjustedTime returns the current time adjusted by the median time offset as
|
||||
// calculated from the time samples added by AddTimeSample.
|
||||
//
|
||||
// This function is safe for concurrent access and is part of the
|
||||
// MedianTimeSource interface implementation.
|
||||
func (m *medianTime) AdjustedTime() time.Time {
|
||||
m.mtx.Lock()
|
||||
defer m.mtx.Unlock()
|
||||
|
||||
// Limit the adjusted time to 1 second precision.
|
||||
now := time.Unix(time.Now().Unix(), 0)
|
||||
return now.Add(time.Duration(m.offsetSecs) * time.Second)
|
||||
}
|
||||
|
||||
// AddTimeSample adds a time sample that is used when determining the median
|
||||
// time of the added samples.
|
||||
//
|
||||
// This function is safe for concurrent access and is part of the
|
||||
// MedianTimeSource interface implementation.
|
||||
func (m *medianTime) AddTimeSample(sourceID string, timeVal time.Time) {
|
||||
m.mtx.Lock()
|
||||
defer m.mtx.Unlock()
|
||||
|
||||
// Don't add time data from the same source.
|
||||
if _, exists := m.knownIDs[sourceID]; exists {
|
||||
return
|
||||
}
|
||||
m.knownIDs[sourceID] = struct{}{}
|
||||
|
||||
// Truncate the provided offset to seconds and append it to the slice
|
||||
// of offsets while respecting the maximum number of allowed entries by
|
||||
// replacing the oldest entry with the new entry once the maximum number
|
||||
// of entries is reached.
|
||||
now := time.Unix(time.Now().Unix(), 0)
|
||||
offsetSecs := int64(timeVal.Sub(now).Seconds())
|
||||
numOffsets := len(m.offsets)
|
||||
if numOffsets == maxMedianTimeEntries && maxMedianTimeEntries > 0 {
|
||||
m.offsets = m.offsets[1:]
|
||||
numOffsets--
|
||||
}
|
||||
m.offsets = append(m.offsets, offsetSecs)
|
||||
numOffsets++
|
||||
|
||||
// Sort the offsets so the median can be obtained as needed later.
|
||||
sortedOffsets := make([]int64, numOffsets)
|
||||
copy(sortedOffsets, m.offsets)
|
||||
sort.Sort(int64Sorter(sortedOffsets))
|
||||
|
||||
offsetDuration := time.Duration(offsetSecs) * time.Second
|
||||
log.Debugf("Added time sample of %v (total: %v)", offsetDuration,
|
||||
numOffsets)
|
||||
|
||||
// NOTE: The following code intentionally has a bug to mirror the
|
||||
// buggy behavior in Bitcoin Core since the median time is used in the
|
||||
// consensus rules.
|
||||
//
|
||||
// In particular, the offset is only updated when the number of entries
|
||||
// is odd, but the max number of entries is 200, an even number. Thus,
|
||||
// the offset will never be updated again once the max number of entries
|
||||
// is reached.
|
||||
|
||||
// The median offset is only updated when there are enough offsets and
|
||||
// the number of offsets is odd so the middle value is the true median.
|
||||
// Thus, there is nothing to do when those conditions are not met.
|
||||
if numOffsets < 5 || numOffsets&0x01 != 1 {
|
||||
return
|
||||
}
|
||||
|
||||
// At this point the number of offsets in the list is odd, so the
|
||||
// middle value of the sorted offsets is the median.
|
||||
median := sortedOffsets[numOffsets/2]
|
||||
|
||||
// Set the new offset when the median offset is within the allowed
|
||||
// offset range.
|
||||
if math.Abs(float64(median)) < maxAllowedOffsetSecs {
|
||||
m.offsetSecs = median
|
||||
} else {
|
||||
// The median offset of all added time data is larger than the
|
||||
// maximum allowed offset, so don't use an offset. This
|
||||
// effectively limits how far the local clock can be skewed.
|
||||
m.offsetSecs = 0
|
||||
|
||||
if !m.invalidTimeChecked {
|
||||
m.invalidTimeChecked = true
|
||||
|
||||
// Find if any time samples have a time that is close
|
||||
// to the local time.
|
||||
var remoteHasCloseTime bool
|
||||
for _, offset := range sortedOffsets {
|
||||
if math.Abs(float64(offset)) < similarTimeSecs {
|
||||
remoteHasCloseTime = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// Warn if none of the time samples are close.
|
||||
if !remoteHasCloseTime {
|
||||
log.Warnf("Please check your date and time " +
|
||||
"are correct! btcd will not work " +
|
||||
"properly with an invalid time")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
medianDuration := time.Duration(m.offsetSecs) * time.Second
|
||||
log.Debugf("New time offset: %v", medianDuration)
|
||||
}
|
||||
|
||||
// Offset returns the number of seconds to adjust the local clock based upon the
|
||||
// median of the time samples added by AddTimeSample.
|
||||
//
|
||||
// This function is safe for concurrent access and is part of the
|
||||
// MedianTimeSource interface implementation.
|
||||
func (m *medianTime) Offset() time.Duration {
|
||||
m.mtx.Lock()
|
||||
defer m.mtx.Unlock()
|
||||
|
||||
return time.Duration(m.offsetSecs) * time.Second
|
||||
}
|
||||
|
||||
// NewMedianTime returns a new instance of a concurrency-safe implementation of
|
||||
// the MedianTimeSource interface. The returned implementation contains the
|
||||
// rules necessary for proper time handling in the chain consensus rules and
|
||||
// expects the time samples to be added from the timestamp field of the version
|
||||
// message received from remote peers that successfully connect and negotiate.
|
||||
func NewMedianTime() MedianTimeSource {
|
||||
return &medianTime{
|
||||
knownIDs: make(map[string]struct{}),
|
||||
offsets: make([]int64, 0, maxMedianTimeEntries),
|
||||
}
|
||||
}
|
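The tests below exercise the details; for orientation, here is a small sketch (peer IDs and offsets are made up) of feeding peer timestamps into a MedianTimeSource and reading back the adjusted clock:

```go
package main

import (
	"fmt"
	"time"

	"github.com/btcsuite/btcd/blockchain"
)

func main() {
	timeSource := blockchain.NewMedianTime()

	// Each sample would normally come from the timestamp field of a
	// peer's version message; the IDs only need to be unique per source.
	timeSource.AddTimeSample("peer1:8333", time.Now().Add(3*time.Second))
	timeSource.AddTimeSample("peer2:8333", time.Now().Add(-2*time.Second))
	timeSource.AddTimeSample("peer3:8333", time.Now().Add(5*time.Second))

	// With fewer than five samples the offset stays at zero, so the
	// adjusted time still tracks the local clock here.
	fmt.Println("offset:", timeSource.Offset())
	fmt.Println("adjusted:", timeSource.AdjustedTime())
}
```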
106
blockchain/mediantime_test.go
Normal file
|
@@ -0,0 +1,106 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
)
|
||||
|
||||
// TestMedianTime tests the medianTime implementation.
|
||||
func TestMedianTime(t *testing.T) {
|
||||
tests := []struct {
|
||||
in []int64
|
||||
wantOffset int64
|
||||
useDupID bool
|
||||
}{
|
||||
// Not enough samples must result in an offset of 0.
|
||||
{in: []int64{1}, wantOffset: 0},
|
||||
{in: []int64{1, 2}, wantOffset: 0},
|
||||
{in: []int64{1, 2, 3}, wantOffset: 0},
|
||||
{in: []int64{1, 2, 3, 4}, wantOffset: 0},
|
||||
|
||||
// Various numbers of entries. The expected offset is only
|
||||
// updated on an odd number of elements.
|
||||
{in: []int64{-13, 57, -4, -23, -12}, wantOffset: -12},
|
||||
{in: []int64{55, -13, 61, -52, 39, 55}, wantOffset: 39},
|
||||
{in: []int64{-62, -58, -30, -62, 51, -30, 15}, wantOffset: -30},
|
||||
{in: []int64{29, -47, 39, 54, 42, 41, 8, -33}, wantOffset: 39},
|
||||
{in: []int64{37, 54, 9, -21, -56, -36, 5, -11, -39}, wantOffset: -11},
|
||||
{in: []int64{57, -28, 25, -39, 9, 63, -16, 19, -60, 25}, wantOffset: 9},
|
||||
{in: []int64{-5, -4, -3, -2, -1}, wantOffset: -3, useDupID: true},
|
||||
|
||||
// The offset stops being updated once the max number of entries
|
||||
// has been reached. This is actually a bug from Bitcoin Core,
|
||||
// but since the time is ultimately used as a part of the
|
||||
// consensus rules, it must be mirrored.
|
||||
{in: []int64{-67, 67, -50, 24, 63, 17, 58, -14, 5, -32, -52}, wantOffset: 17},
|
||||
{in: []int64{-67, 67, -50, 24, 63, 17, 58, -14, 5, -32, -52, 45}, wantOffset: 17},
|
||||
{in: []int64{-67, 67, -50, 24, 63, 17, 58, -14, 5, -32, -52, 45, 4}, wantOffset: 17},
|
||||
|
||||
// Offsets that are too far away from the local time should
|
||||
// be ignored.
|
||||
{in: []int64{-4201, 4202, -4203, 4204, -4205}, wantOffset: 0},
|
||||
|
||||
// Exercise the condition where the median offset is greater
|
||||
// than the max allowed adjustment, but there is at least one
|
||||
// sample that is close enough to the current time to avoid
|
||||
// triggering a warning about an invalid local clock.
|
||||
{in: []int64{4201, 4202, 4203, 4204, -299}, wantOffset: 0},
|
||||
}
|
||||
|
||||
// Modify the max number of allowed median time entries for these tests.
|
||||
blockchain.TstSetMaxMedianTimeEntries(10)
|
||||
defer blockchain.TstSetMaxMedianTimeEntries(200)
|
||||
|
||||
for i, test := range tests {
|
||||
filter := blockchain.NewMedianTime()
|
||||
for j, offset := range test.in {
|
||||
id := strconv.Itoa(j)
|
||||
now := time.Unix(time.Now().Unix(), 0)
|
||||
tOffset := now.Add(time.Duration(offset) * time.Second)
|
||||
filter.AddTimeSample(id, tOffset)
|
||||
|
||||
// Ensure the duplicate IDs are ignored.
|
||||
if test.useDupID {
|
||||
// Modify the offsets to ensure the final median
|
||||
// would be different if the duplicate is added.
|
||||
tOffset = tOffset.Add(time.Duration(offset) *
|
||||
time.Second)
|
||||
filter.AddTimeSample(id, tOffset)
|
||||
}
|
||||
}
|
||||
|
||||
// Since it is possible that the time.Now call in AddTimeSample
|
||||
// and the time.Now calls here in the tests will be off by one
|
||||
// second, allow a fudge factor to compensate.
|
||||
gotOffset := filter.Offset()
|
||||
wantOffset := time.Duration(test.wantOffset) * time.Second
|
||||
wantOffset2 := time.Duration(test.wantOffset-1) * time.Second
|
||||
if gotOffset != wantOffset && gotOffset != wantOffset2 {
|
||||
t.Errorf("Offset #%d: unexpected offset -- got %v, "+
|
||||
"want %v or %v", i, gotOffset, wantOffset,
|
||||
wantOffset2)
|
||||
continue
|
||||
}
|
||||
|
||||
// Since it is possible that the time.Now call in AdjustedTime
|
||||
// and the time.Now call here in the tests will be off by one
|
||||
// second, allow a fudge factor to compensate.
|
||||
adjustedTime := filter.AdjustedTime()
|
||||
now := time.Unix(time.Now().Unix(), 0)
|
||||
wantTime := now.Add(filter.Offset())
|
||||
wantTime2 := now.Add(filter.Offset() - time.Second)
|
||||
if !adjustedTime.Equal(wantTime) && !adjustedTime.Equal(wantTime2) {
|
||||
t.Errorf("AdjustedTime #%d: unexpected result -- got %v, "+
|
||||
"want %v or %v", i, adjustedTime, wantTime,
|
||||
wantTime2)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
109
blockchain/merkle.go
Normal file
|
@@ -0,0 +1,109 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"math"
|
||||
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// nextPowerOfTwo returns the next highest power of two from a given number if
|
||||
// it is not already a power of two. This is a helper function used during the
|
||||
// calculation of a merkle tree.
|
||||
func nextPowerOfTwo(n int) int {
|
||||
// Return the number if it's already a power of 2.
|
||||
if n&(n-1) == 0 {
|
||||
return n
|
||||
}
|
||||
|
||||
// Figure out and return the next power of two.
|
||||
exponent := uint(math.Log2(float64(n))) + 1
|
||||
return 1 << exponent // 2^exponent
|
||||
}
|
||||
|
||||
// HashMerkleBranches takes two hashes, treated as the left and right tree
|
||||
// nodes, and returns the hash of their concatenation. This is a helper
|
||||
// function used to aid in the generation of a merkle tree.
|
||||
func HashMerkleBranches(left *btcwire.ShaHash, right *btcwire.ShaHash) *btcwire.ShaHash {
|
||||
// Concatenate the left and right nodes.
|
||||
var sha [btcwire.HashSize * 2]byte
|
||||
copy(sha[:btcwire.HashSize], left.Bytes())
|
||||
copy(sha[btcwire.HashSize:], right.Bytes())
|
||||
|
||||
// Create a new sha hash from the double sha 256. Ignore the error
|
||||
// here since SetBytes can't fail here due to the fact DoubleSha256
|
||||
// always returns a []byte of the right size regardless of input.
|
||||
newSha, _ := btcwire.NewShaHash(btcwire.DoubleSha256(sha[:]))
|
||||
return newSha
|
||||
}
|
||||
|
||||
// BuildMerkleTreeStore creates a merkle tree from a slice of transactions,
|
||||
// stores it using a linear array, and returns a slice of the backing array. A
|
||||
// linear array was chosen as opposed to an actual tree structure since it uses
|
||||
// about half as much memory. The following describes a merkle tree and how it
|
||||
// is stored in a linear array.
|
||||
//
|
||||
// A merkle tree is a tree in which every non-leaf node is the hash of its
|
||||
// children nodes. A diagram depicting how this works for bitcoin transactions
|
||||
// where h(x) is a double sha256 follows:
|
||||
//
|
||||
//          root = h1234 = h(h12 + h34)
|
||||
//         /                           \
|
||||
//   h12 = h(h1 + h2)            h34 = h(h3 + h4)
|
||||
//    /            \              /            \
|
||||
// h1 = h(tx1)  h2 = h(tx2)  h3 = h(tx3)  h4 = h(tx4)
|
||||
//
|
||||
// The above stored as a linear array is as follows:
|
||||
//
|
||||
// [h1 h2 h3 h4 h12 h34 root]
|
||||
//
|
||||
// As the above shows, the merkle root is always the last element in the array.
|
||||
//
|
||||
// The number of inputs is not always a power of two, in which case the tree
|
||||
// will not be perfectly balanced as above. When that happens, parent nodes with no
|
||||
// children are also zero and parent nodes with only a single left node
|
||||
// are calculated by concatenating the left node with itself before hashing.
|
||||
// Since this function uses nodes that are pointers to the hashes, empty nodes
|
||||
// will be nil.
|
||||
func BuildMerkleTreeStore(transactions []*btcutil.Tx) []*btcwire.ShaHash {
|
||||
// Calculate how many entries are required to hold the binary merkle
|
||||
// tree as a linear array and create an array of that size.
|
||||
nextPoT := nextPowerOfTwo(len(transactions))
|
||||
arraySize := nextPoT*2 - 1
|
||||
merkles := make([]*btcwire.ShaHash, arraySize)
|
||||
|
||||
// Create the base transaction shas and populate the array with them.
|
||||
for i, tx := range transactions {
|
||||
merkles[i] = tx.Sha()
|
||||
}
|
||||
|
||||
// Start the array offset after the last transaction and adjusted to the
|
||||
// next power of two.
|
||||
offset := nextPoT
|
||||
for i := 0; i < arraySize-1; i += 2 {
|
||||
switch {
|
||||
// When there is no left child node, the parent is nil too.
|
||||
case merkles[i] == nil:
|
||||
merkles[offset] = nil
|
||||
|
||||
// When there is no right child, the parent is generated by
|
||||
// hashing the concatenation of the left child with itself.
|
||||
case merkles[i+1] == nil:
|
||||
newSha := HashMerkleBranches(merkles[i], merkles[i])
|
||||
merkles[offset] = newSha
|
||||
|
||||
// The normal case sets the parent node to the double sha256
|
||||
// of the concatenation of the left and right children.
|
||||
default:
|
||||
newSha := HashMerkleBranches(merkles[i], merkles[i+1])
|
||||
merkles[offset] = newSha
|
||||
}
|
||||
offset++
|
||||
}
|
||||
|
||||
return merkles
|
||||
}
|
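To illustrate the padded linear-array layout documented above, here is a hedged sketch (the leaf hashes are arbitrary placeholders, not real transaction hashes) that reproduces the three-leaf case by hand with HashMerkleBranches: the array is padded to the next power of two, the node without a right sibling is hashed with itself, and the root lands in the final slot.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcwire"
)

func main() {
	// Three placeholder leaf hashes standing in for transaction hashes.
	h1, _ := btcwire.NewShaHashFromStr("0000000000000000000000000000000000000000000000000000000000000001")
	h2, _ := btcwire.NewShaHashFromStr("0000000000000000000000000000000000000000000000000000000000000002")
	h3, _ := btcwire.NewShaHashFromStr("0000000000000000000000000000000000000000000000000000000000000003")

	// With 3 leaves the array is padded to 4, giving 2*4-1 = 7 slots:
	//   [h1 h2 h3 nil h12 h33 root]
	h12 := blockchain.HashMerkleBranches(h1, h2)
	h33 := blockchain.HashMerkleBranches(h3, h3) // lone left child hashed with itself
	root := blockchain.HashMerkleBranches(h12, h33)

	fmt.Println("merkle root:", root)
}
```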
24
blockchain/merkle_test.go
Normal file
|
@@ -0,0 +1,24 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
"github.com/btcsuite/btcutil"
|
||||
)
|
||||
|
||||
// TestMerkle tests the BuildMerkleTreeStore API.
|
||||
func TestMerkle(t *testing.T) {
|
||||
block := btcutil.NewBlock(&Block100000)
|
||||
merkles := blockchain.BuildMerkleTreeStore(block.Transactions())
|
||||
calculatedMerkleRoot := merkles[len(merkles)-1]
|
||||
wantMerkle := &Block100000.Header.MerkleRoot
|
||||
if !wantMerkle.IsEqual(calculatedMerkleRoot) {
|
||||
t.Errorf("BuildMerkleTreeStore: merkle root mismatch - "+
|
||||
"got %v, want %v", calculatedMerkleRoot, wantMerkle)
|
||||
}
|
||||
}
|
73
blockchain/notifications.go
Normal file
|
@@ -0,0 +1,73 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// NotificationType represents the type of a notification message.
|
||||
type NotificationType int
|
||||
|
||||
// NotificationCallback is used for a caller to provide a callback for
|
||||
// notifications about various chain events.
|
||||
type NotificationCallback func(*Notification)
|
||||
|
||||
// Constants for the type of a notification message.
|
||||
const (
|
||||
// NTBlockAccepted indicates the associated block was accepted into
|
||||
// the block chain. Note that this does not necessarily mean it was
|
||||
// added to the main chain. For that, use NTBlockConnected.
|
||||
NTBlockAccepted NotificationType = iota
|
||||
|
||||
// NTBlockConnected indicates the associated block was connected to the
|
||||
// main chain.
|
||||
NTBlockConnected
|
||||
|
||||
// NTBlockDisconnected indicates the associated block was disconnected
|
||||
// from the main chain.
|
||||
NTBlockDisconnected
|
||||
)
|
||||
|
||||
// notificationTypeStrings is a map of notification types back to their constant
|
||||
// names for pretty printing.
|
||||
var notificationTypeStrings = map[NotificationType]string{
|
||||
NTBlockAccepted: "NTBlockAccepted",
|
||||
NTBlockConnected: "NTBlockConnected",
|
||||
NTBlockDisconnected: "NTBlockDisconnected",
|
||||
}
|
||||
|
||||
// String returns the NotificationType in human-readable form.
|
||||
func (n NotificationType) String() string {
|
||||
if s, ok := notificationTypeStrings[n]; ok {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("Unknown Notification Type (%d)", int(n))
|
||||
}
|
||||
|
||||
// Notification defines a notification that is sent to the caller via the callback
|
||||
// function provided during the call to New and consists of a notification type
|
||||
// as well as associated data that depends on the type as follows:
|
||||
// - NTBlockAccepted: *btcutil.Block
|
||||
// - NTBlockConnected: *btcutil.Block
|
||||
// - NTBlockDisconnected: *btcutil.Block
|
||||
type Notification struct {
|
||||
Type NotificationType
|
||||
Data interface{}
|
||||
}
|
||||
|
||||
// sendNotification sends a notification with the passed type and data if the
|
||||
// caller requested notifications by providing a callback function in the call
|
||||
// to New.
|
||||
func (b *BlockChain) sendNotification(typ NotificationType, data interface{}) {
|
||||
// Ignore it if the caller didn't request notifications.
|
||||
if b.notifications == nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Generate and send the notification.
|
||||
n := Notification{Type: typ, Data: data}
|
||||
b.notifications(&n)
|
||||
}
|
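Tying this into chain creation, here is a hedged sketch (the wrapper function and what the handler prints are illustrative only) of registering a callback through the same New call used in the earlier example; a real caller might trigger wallet updates instead of printing:

```go
package example

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/database"
	"github.com/btcsuite/btcnet"
	"github.com/btcsuite/btcutil"
)

// newNotifiedChain creates a BlockChain whose callback prints connect and
// disconnect events for blocks on the main chain.
func newNotifiedChain(db database.Db) *blockchain.BlockChain {
	callback := func(n *blockchain.Notification) {
		switch n.Type {
		case blockchain.NTBlockConnected, blockchain.NTBlockDisconnected:
			if block, ok := n.Data.(*btcutil.Block); ok {
				sha, _ := block.Sha()
				fmt.Printf("%v: block %v\n", n.Type, sha)
			}
		}
	}
	return blockchain.New(db, &btcnet.MainNetParams, callback)
}
```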
229
blockchain/process.go
Normal file
|
@@ -0,0 +1,229 @@
|
|||
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// BehaviorFlags is a bitmask defining tweaks to the normal behavior when
|
||||
// performing chain processing and consensus rules checks.
|
||||
type BehaviorFlags uint32
|
||||
|
||||
const (
|
||||
// BFFastAdd may be set to indicate that several checks can be avoided
|
||||
// for the block since it is already known to fit into the chain due to
|
||||
// already proving it correctly links into the chain up to a known
|
||||
// checkpoint. This is primarily used for headers-first mode.
|
||||
BFFastAdd BehaviorFlags = 1 << iota
|
||||
|
||||
// BFNoPoWCheck may be set to indicate the proof of work check which
|
||||
// ensures a block hashes to a value less than the required target will
|
||||
// not be performed.
|
||||
BFNoPoWCheck
|
||||
|
||||
// BFDryRun may be set to indicate the block should not modify the chain
|
||||
// or memory chain index. This is useful to test that a block is valid
|
||||
// without modifying the current state.
|
||||
BFDryRun
|
||||
|
||||
// BFNone is a convenience value to specifically indicate no flags.
|
||||
BFNone BehaviorFlags = 0
|
||||
)
|
||||
|
||||
// blockExists determines whether a block with the given hash exists either in
|
||||
// the main chain or any side chains.
|
||||
func (b *BlockChain) blockExists(hash *btcwire.ShaHash) (bool, error) {
|
||||
// Check memory chain first (could be main chain or side chain blocks).
|
||||
if _, ok := b.index[*hash]; ok {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// Check in database (rest of main chain not in memory).
|
||||
return b.db.ExistsSha(hash)
|
||||
}
|
||||
|
||||
// processOrphans determines if there are any orphans which depend on the passed
|
||||
// block hash (they are no longer orphans if true) and potentially accepts them.
|
||||
// It repeats the process for the newly accepted blocks (to detect further
|
||||
// orphans which may no longer be orphans) until there are no more.
|
||||
//
|
||||
// The flags do not modify the behavior of this function directly, however they
|
||||
// are needed to pass along to maybeAcceptBlock.
|
||||
func (b *BlockChain) processOrphans(hash *btcwire.ShaHash, flags BehaviorFlags) error {
|
||||
// Start with processing at least the passed hash. Leave a little room
|
||||
// for additional orphan blocks that need to be processed without
|
||||
// needing to grow the array in the common case.
|
||||
processHashes := make([]*btcwire.ShaHash, 0, 10)
|
||||
processHashes = append(processHashes, hash)
|
||||
for len(processHashes) > 0 {
|
||||
// Pop the first hash to process from the slice.
|
||||
processHash := processHashes[0]
|
||||
processHashes[0] = nil // Prevent GC leak.
|
||||
processHashes = processHashes[1:]
|
||||
|
||||
// Look up all orphans that are parented by the block we just
|
||||
// accepted. This will typically only be one, but it could
|
||||
// be multiple if multiple blocks are mined and broadcast
|
||||
// around the same time. The one with the most proof of work
|
||||
// will eventually win out. An indexing for loop is
|
||||
// intentionally used over a range here as range does not
|
||||
// reevaluate the slice on each iteration nor does it adjust the
|
||||
// index for the modified slice.
|
||||
for i := 0; i < len(b.prevOrphans[*processHash]); i++ {
|
||||
orphan := b.prevOrphans[*processHash][i]
|
||||
if orphan == nil {
|
||||
log.Warnf("Found a nil entry at index %d in the "+
|
||||
"orphan dependency list for block %v", i,
|
||||
processHash)
|
||||
continue
|
||||
}
|
||||
|
||||
// Remove the orphan from the orphan pool.
|
||||
// It's safe to ignore the error on Sha since the hash
|
||||
// is already cached.
|
||||
orphanHash, _ := orphan.block.Sha()
|
||||
b.removeOrphanBlock(orphan)
|
||||
i--
|
||||
|
||||
// Potentially accept the block into the block chain.
|
||||
err := b.maybeAcceptBlock(orphan.block, flags)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Add this block to the list of blocks to process so
|
||||
// any orphan blocks that depend on this block are
|
||||
// handled too.
|
||||
processHashes = append(processHashes, orphanHash)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ProcessBlock is the main workhorse for handling insertion of new blocks into
|
||||
// the block chain. It includes functionality such as rejecting duplicate
|
||||
// blocks, ensuring blocks follow all rules, orphan handling, and insertion into
|
||||
// the block chain along with best chain selection and reorganization.
|
||||
//
|
||||
// It returns a bool which indicates whether or not the block is an orphan and
|
||||
// any errors that occurred during processing. The returned bool is only valid
|
||||
// when the error is nil.
|
||||
func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSource, flags BehaviorFlags) (bool, error) {
|
||||
fastAdd := flags&BFFastAdd == BFFastAdd
|
||||
dryRun := flags&BFDryRun == BFDryRun
|
||||
|
||||
blockHash, err := block.Sha()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
log.Tracef("Processing block %v", blockHash)
|
||||
|
||||
// The block must not already exist in the main chain or side chains.
|
||||
exists, err := b.blockExists(blockHash)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if exists {
|
||||
str := fmt.Sprintf("already have block %v", blockHash)
|
||||
return false, ruleError(ErrDuplicateBlock, str)
|
||||
}
|
||||
|
||||
// The block must not already exist as an orphan.
|
||||
if _, exists := b.orphans[*blockHash]; exists {
|
||||
str := fmt.Sprintf("already have block (orphan) %v", blockHash)
|
||||
return false, ruleError(ErrDuplicateBlock, str)
|
||||
}
|
||||
|
||||
// Perform preliminary sanity checks on the block and its transactions.
|
||||
err = checkBlockSanity(block, b.netParams.PowLimit, timeSource, flags)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Find the previous checkpoint and perform some additional checks based
|
||||
// on the checkpoint. This provides a few nice properties such as
|
||||
// preventing old side chain blocks before the last checkpoint,
|
||||
// rejecting easy to mine, but otherwise bogus, blocks that could be
|
||||
// used to eat memory, and ensuring expected (versus claimed) proof of
|
||||
// work requirements since the previous checkpoint are met.
|
||||
blockHeader := &block.MsgBlock().Header
|
||||
checkpointBlock, err := b.findPreviousCheckpoint()
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if checkpointBlock != nil {
|
||||
// Ensure the block timestamp is after the checkpoint timestamp.
|
||||
checkpointHeader := &checkpointBlock.MsgBlock().Header
|
||||
checkpointTime := checkpointHeader.Timestamp
|
||||
if blockHeader.Timestamp.Before(checkpointTime) {
|
||||
str := fmt.Sprintf("block %v has timestamp %v before "+
|
||||
"last checkpoint timestamp %v", blockHash,
|
||||
blockHeader.Timestamp, checkpointTime)
|
||||
return false, ruleError(ErrCheckpointTimeTooOld, str)
|
||||
}
|
||||
if !fastAdd {
|
||||
// Even though the checks prior to now have already ensured the
|
||||
// proof of work exceeds the claimed amount, the claimed amount
|
||||
// is a field in the block header which could be forged. This
|
||||
// check ensures the proof of work is at least the minimum
|
||||
// expected based on elapsed time since the last checkpoint and
|
||||
// maximum adjustment allowed by the retarget rules.
|
||||
duration := blockHeader.Timestamp.Sub(checkpointTime)
|
||||
requiredTarget := CompactToBig(b.calcEasiestDifficulty(
|
||||
checkpointHeader.Bits, duration))
|
||||
currentTarget := CompactToBig(blockHeader.Bits)
|
||||
if currentTarget.Cmp(requiredTarget) > 0 {
|
||||
str := fmt.Sprintf("block target difficulty of %064x "+
|
||||
"is too low when compared to the previous "+
|
||||
"checkpoint", currentTarget)
|
||||
return false, ruleError(ErrDifficultyTooLow, str)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Handle orphan blocks.
|
||||
prevHash := &blockHeader.PrevBlock
|
||||
if !prevHash.IsEqual(zeroHash) {
|
||||
prevHashExists, err := b.blockExists(prevHash)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if !prevHashExists {
|
||||
if !dryRun {
|
||||
log.Infof("Adding orphan block %v with parent %v",
|
||||
blockHash, prevHash)
|
||||
b.addOrphanBlock(block)
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
|
||||
// The block has passed all context independent checks and appears sane
|
||||
// enough to potentially accept it into the block chain.
|
||||
err = b.maybeAcceptBlock(block, flags)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Don't process any orphans or log when the dry run flag is set.
|
||||
if !dryRun {
|
||||
// Accept any orphan blocks that depend on this block (they are
|
||||
// no longer orphans) and repeat for those accepted blocks until
|
||||
// there are no more.
|
||||
err := b.processOrphans(blockHash, flags)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
log.Debugf("Accepted block %v", blockHash)
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
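For callers, the important part of the contract above is the (isOrphan, error) pair: the boolean is only meaningful when the error is nil. A minimal sketch of a caller, assuming a *blockchain.BlockChain has already been constructed elsewhere; names here are illustrative.

```go
package example

import (
	"log"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcutil"
)

// submitBlock feeds one block through the chain acceptance path and reports
// whether it was held back as an orphan awaiting its parent.
func submitBlock(chain *blockchain.BlockChain, block *btcutil.Block) (bool, error) {
	timeSource := blockchain.NewMedianTime()
	isOrphan, err := chain.ProcessBlock(block, timeSource, blockchain.BFNone)
	if err != nil {
		// A rule error here means the block violated consensus rules.
		return false, err
	}
	if isOrphan {
		log.Println("block stored as an orphan pending its parent")
	}
	return isOrphan, nil
}
```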
134
blockchain/reorganization_test.go
Normal file
@@ -0,0 +1,134 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"compress/bzip2"
|
||||
"encoding/binary"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// TestReorganization loads a set of test blocks which force a chain
|
||||
// reorganization to test the block chain handling code.
|
||||
// The test blocks were originally from a post on the bitcoin talk forums:
|
||||
// https://bitcointalk.org/index.php?topic=46370.msg577556#msg577556
|
||||
func TestReorganization(t *testing.T) {
|
||||
// Intentionally load the side chain blocks out of order to ensure
|
||||
// orphans are handled properly along with chain reorganization.
|
||||
testFiles := []string{
|
||||
"blk_0_to_4.dat.bz2",
|
||||
"blk_4A.dat.bz2",
|
||||
"blk_5A.dat.bz2",
|
||||
"blk_3A.dat.bz2",
|
||||
}
|
||||
|
||||
var blocks []*btcutil.Block
|
||||
for _, file := range testFiles {
|
||||
blockTmp, err := loadBlocks(file)
|
||||
if err != nil {
|
||||
t.Errorf("Error loading file: %v\n", err)
|
||||
}
|
||||
for _, block := range blockTmp {
|
||||
blocks = append(blocks, block)
|
||||
}
|
||||
}
|
||||
|
||||
t.Logf("Number of blocks: %v\n", len(blocks))
|
||||
|
||||
// Create a new database and chain instance to run tests against.
|
||||
chain, teardownFunc, err := chainSetup("reorg")
|
||||
if err != nil {
|
||||
t.Errorf("Failed to setup chain instance: %v", err)
|
||||
return
|
||||
}
|
||||
defer teardownFunc()
|
||||
|
||||
// Since we're not dealing with the real block chain, disable
|
||||
// checkpoints and set the coinbase maturity to 1.
|
||||
chain.DisableCheckpoints(true)
|
||||
blockchain.TstSetCoinbaseMaturity(1)
|
||||
|
||||
timeSource := blockchain.NewMedianTime()
|
||||
expectedOrphans := map[int]struct{}{5: struct{}{}, 6: struct{}{}}
|
||||
for i := 1; i < len(blocks); i++ {
|
||||
isOrphan, err := chain.ProcessBlock(blocks[i], timeSource, blockchain.BFNone)
|
||||
if err != nil {
|
||||
t.Errorf("ProcessBlock fail on block %v: %v\n", i, err)
|
||||
return
|
||||
}
|
||||
if _, ok := expectedOrphans[i]; !ok && isOrphan {
|
||||
t.Errorf("ProcessBlock incorrectly returned block %v "+
|
||||
"is an orphan\n", i)
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// loadBlocks reads files containing bitcoin block data (optionally bzip2
// compressed, but otherwise in the format bitcoind writes) from disk and
// returns them as an array of btcutil.Block. This is largely borrowed from
// the test code in btcdb.
|
||||
func loadBlocks(filename string) (blocks []*btcutil.Block, err error) {
|
||||
filename = filepath.Join("testdata/", filename)
|
||||
|
||||
var network = btcwire.MainNet
|
||||
var dr io.Reader
|
||||
var fi io.ReadCloser
|
||||
|
||||
fi, err = os.Open(filename)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if strings.HasSuffix(filename, ".bz2") {
|
||||
dr = bzip2.NewReader(fi)
|
||||
} else {
|
||||
dr = fi
|
||||
}
|
||||
defer fi.Close()
|
||||
|
||||
var block *btcutil.Block
|
||||
|
||||
err = nil
|
||||
for height := int64(1); err == nil; height++ {
|
||||
var rintbuf uint32
|
||||
err = binary.Read(dr, binary.LittleEndian, &rintbuf)
|
||||
if err == io.EOF {
|
||||
// hit end of file at expected offset: no warning
|
||||
height--
|
||||
err = nil
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
if rintbuf != uint32(network) {
|
||||
break
|
||||
}
|
||||
err = binary.Read(dr, binary.LittleEndian, &rintbuf)
|
||||
blocklen := rintbuf
|
||||
|
||||
rbytes := make([]byte, blocklen)
|
||||
|
||||
// read block
|
||||
dr.Read(rbytes)
|
||||
|
||||
block, err = btcutil.NewBlockFromBytes(rbytes)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
blocks = append(blocks, block)
|
||||
}
|
||||
|
||||
return
|
||||
}
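loadBlocks expects each block to be prefixed by the little-endian network magic and a little-endian block length, which is the layout bitcoind uses for its block files. A hypothetical writer counterpart, shown only to make that layout explicit (writeBlocks is not part of this change):

```go
package example

import (
	"encoding/binary"
	"io"

	"github.com/btcsuite/btcutil"
	"github.com/btcsuite/btcwire"
)

// writeBlocks emits blocks in the layout loadBlocks reads back: network magic,
// block length, then the serialized block bytes, all little-endian.
func writeBlocks(w io.Writer, blocks []*btcutil.Block) error {
	for _, block := range blocks {
		raw, err := block.Bytes()
		if err != nil {
			return err
		}
		if err := binary.Write(w, binary.LittleEndian, uint32(btcwire.MainNet)); err != nil {
			return err
		}
		if err := binary.Write(w, binary.LittleEndian, uint32(len(raw))); err != nil {
			return err
		}
		if _, err := w.Write(raw); err != nil {
			return err
		}
	}
	return nil
}
```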
262
blockchain/scriptval.go
Normal file
@@ -0,0 +1,262 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"runtime"
|
||||
|
||||
"github.com/btcsuite/btcd/txscript"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// txValidateItem holds a transaction along with which input to validate.
|
||||
type txValidateItem struct {
|
||||
txInIndex int
|
||||
txIn *btcwire.TxIn
|
||||
tx *btcutil.Tx
|
||||
}
|
||||
|
||||
// txValidator provides a type which asynchronously validates transaction
|
||||
// inputs. It provides several channels for communication and a processing
|
||||
// function that is intended to be run in multiple goroutines.
|
||||
type txValidator struct {
|
||||
validateChan chan *txValidateItem
|
||||
quitChan chan struct{}
|
||||
resultChan chan error
|
||||
txStore TxStore
|
||||
flags txscript.ScriptFlags
|
||||
}
|
||||
|
||||
// sendResult sends the result of a script pair validation on the internal
|
||||
// result channel while respecting the quit channel. This allows orderly
|
||||
// shutdown when the validation process is aborted early due to a validation
|
||||
// error in one of the other goroutines.
|
||||
func (v *txValidator) sendResult(result error) {
|
||||
select {
|
||||
case v.resultChan <- result:
|
||||
case <-v.quitChan:
|
||||
}
|
||||
}
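The select over the result and quit channels is the standard Go idiom for making a blocking send abortable, so a worker never hangs on a receiver that has already bailed out. A self-contained sketch of the same pattern, independent of this package:

```go
package main

import "fmt"

// trySend blocks until either the value is delivered or quit is closed.
func trySend(results chan<- error, quit <-chan struct{}, result error) {
	select {
	case results <- result:
	case <-quit:
	}
}

func main() {
	results := make(chan error)
	quit := make(chan struct{})

	go trySend(results, quit, nil)
	fmt.Println("received:", <-results)

	// With no receiver left, a second send would block forever if the quit
	// channel had not been closed.
	close(quit)
	trySend(results, quit, nil)
	fmt.Println("second send aborted cleanly")
}
```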
|
||||
|
||||
// validateHandler consumes items to validate from the internal validate channel
|
||||
// and returns the result of the validation on the internal result channel. It
|
||||
// must be run as a goroutine.
|
||||
func (v *txValidator) validateHandler() {
|
||||
out:
|
||||
for {
|
||||
select {
|
||||
case txVI := <-v.validateChan:
|
||||
// Ensure the referenced input transaction is available.
|
||||
txIn := txVI.txIn
|
||||
originTxHash := &txIn.PreviousOutPoint.Hash
|
||||
originTx, exists := v.txStore[*originTxHash]
|
||||
if !exists || originTx.Err != nil || originTx.Tx == nil {
|
||||
str := fmt.Sprintf("unable to find input "+
|
||||
"transaction %v referenced from "+
|
||||
"transaction %v", originTxHash,
|
||||
txVI.tx.Sha())
|
||||
err := ruleError(ErrMissingTx, str)
|
||||
v.sendResult(err)
|
||||
break out
|
||||
}
|
||||
originMsgTx := originTx.Tx.MsgTx()
|
||||
|
||||
// Ensure the output index in the referenced transaction
|
||||
// is available.
|
||||
originTxIndex := txIn.PreviousOutPoint.Index
|
||||
if originTxIndex >= uint32(len(originMsgTx.TxOut)) {
|
||||
str := fmt.Sprintf("out of bounds "+
|
||||
"input index %d in transaction %v "+
|
||||
"referenced from transaction %v",
|
||||
originTxIndex, originTxHash,
|
||||
txVI.tx.Sha())
|
||||
err := ruleError(ErrBadTxInput, str)
|
||||
v.sendResult(err)
|
||||
break out
|
||||
}
|
||||
|
||||
// Create a new script engine for the script pair.
|
||||
sigScript := txIn.SignatureScript
|
||||
pkScript := originMsgTx.TxOut[originTxIndex].PkScript
|
||||
engine, err := txscript.NewScript(sigScript, pkScript,
|
||||
txVI.txInIndex, txVI.tx.MsgTx(), v.flags)
|
||||
if err != nil {
|
||||
str := fmt.Sprintf("failed to parse input "+
|
||||
"%s:%d which references output %s:%d - "+
|
||||
"%v (input script bytes %x, prev output "+
|
||||
"script bytes %x)", txVI.tx.Sha(),
|
||||
txVI.txInIndex, originTxHash,
|
||||
originTxIndex, err, sigScript, pkScript)
|
||||
err := ruleError(ErrScriptMalformed, str)
|
||||
v.sendResult(err)
|
||||
break out
|
||||
}
|
||||
|
||||
// Execute the script pair.
|
||||
if err := engine.Execute(); err != nil {
|
||||
str := fmt.Sprintf("failed to validate input "+
|
||||
"%s:%d which references output %s:%d - "+
|
||||
"%v (input script bytes %x, prev output "+
|
||||
"script bytes %x)", txVI.tx.Sha(),
|
||||
txVI.txInIndex, originTxHash,
|
||||
originTxIndex, err, sigScript, pkScript)
|
||||
err := ruleError(ErrScriptValidation, str)
|
||||
v.sendResult(err)
|
||||
break out
|
||||
}
|
||||
|
||||
// Validation succeeded.
|
||||
v.sendResult(nil)
|
||||
|
||||
case <-v.quitChan:
|
||||
break out
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Validate validates the scripts for all of the passed transaction inputs using
|
||||
// multiple goroutines.
|
||||
func (v *txValidator) Validate(items []*txValidateItem) error {
|
||||
if len(items) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Limit the number of goroutines to do script validation based on the
|
||||
// number of processor cores. This helps ensure the system stays
|
||||
// reasonably responsive under heavy load.
|
||||
maxGoRoutines := runtime.NumCPU() * 3
|
||||
if maxGoRoutines <= 0 {
|
||||
maxGoRoutines = 1
|
||||
}
|
||||
if maxGoRoutines > len(items) {
|
||||
maxGoRoutines = len(items)
|
||||
}
|
||||
|
||||
// Start up validation handlers that are used to asynchronously
|
||||
// validate each transaction input.
|
||||
for i := 0; i < maxGoRoutines; i++ {
|
||||
go v.validateHandler()
|
||||
}
|
||||
|
||||
// Validate each of the inputs. The quit channel is closed when any
|
||||
// errors occur so all processing goroutines exit regardless of which
|
||||
// input had the validation error.
|
||||
numInputs := len(items)
|
||||
currentItem := 0
|
||||
processedItems := 0
|
||||
for processedItems < numInputs {
|
||||
// Only send items while there are still items that need to
|
||||
// be processed. The select statement will never select a nil
|
||||
// channel.
|
||||
var validateChan chan *txValidateItem
|
||||
var item *txValidateItem
|
||||
if currentItem < numInputs {
|
||||
validateChan = v.validateChan
|
||||
item = items[currentItem]
|
||||
}
|
||||
|
||||
select {
|
||||
case validateChan <- item:
|
||||
currentItem++
|
||||
|
||||
case err := <-v.resultChan:
|
||||
processedItems++
|
||||
if err != nil {
|
||||
close(v.quitChan)
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
close(v.quitChan)
|
||||
return nil
|
||||
}
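The dispatch loop above relies on the fact that a send on a nil channel never proceeds inside a select, which is how the send case is switched off once every item has been handed out while results keep draining. A standalone illustration of that trick; the doubling worker is purely illustrative.

```go
package main

import "fmt"

func main() {
	items := []int{1, 2, 3}
	work := make(chan int)
	done := make(chan int)

	// Single worker standing in for a validation goroutine.
	go func() {
		for n := range work {
			done <- n * 2
		}
	}()

	// Same shape as txValidator.Validate: once everything has been sent,
	// the send case is disabled by leaving sendChan nil.
	sent, received := 0, 0
	for received < len(items) {
		var sendChan chan int
		var next int
		if sent < len(items) {
			sendChan = work
			next = items[sent]
		}
		select {
		case sendChan <- next:
			sent++
		case result := <-done:
			received++
			fmt.Println("result:", result)
		}
	}
	close(work)
}
```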
|
||||
|
||||
// newTxValidator returns a new instance of txValidator to be used for
|
||||
// validating transaction scripts asynchronously.
|
||||
func newTxValidator(txStore TxStore, flags txscript.ScriptFlags) *txValidator {
|
||||
return &txValidator{
|
||||
validateChan: make(chan *txValidateItem),
|
||||
quitChan: make(chan struct{}),
|
||||
resultChan: make(chan error),
|
||||
txStore: txStore,
|
||||
flags: flags,
|
||||
}
|
||||
}
|
||||
|
||||
// ValidateTransactionScripts validates the scripts for the passed transaction
|
||||
// using multiple goroutines.
|
||||
func ValidateTransactionScripts(tx *btcutil.Tx, txStore TxStore, flags txscript.ScriptFlags) error {
|
||||
// Collect all of the transaction inputs and required information for
|
||||
// validation.
|
||||
txIns := tx.MsgTx().TxIn
|
||||
txValItems := make([]*txValidateItem, 0, len(txIns))
|
||||
for txInIdx, txIn := range txIns {
|
||||
// Skip coinbases.
|
||||
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
|
||||
continue
|
||||
}
|
||||
|
||||
txVI := &txValidateItem{
|
||||
txInIndex: txInIdx,
|
||||
txIn: txIn,
|
||||
tx: tx,
|
||||
}
|
||||
txValItems = append(txValItems, txVI)
|
||||
}
|
||||
|
||||
// Validate all of the inputs.
|
||||
validator := newTxValidator(txStore, flags)
|
||||
if err := validator.Validate(txValItems); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
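A memory pool style caller would typically pair this with FetchTransactionStore from txlookup.go to gather the referenced outputs first. A minimal sketch under that assumption; the flag choice is illustrative and a real caller would derive it from its own policy.

```go
package example

import (
	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcd/txscript"
	"github.com/btcsuite/btcutil"
)

// checkTxScripts fetches the inputs referenced by tx from the end of the main
// chain and then runs the script engines over them.
func checkTxScripts(chain *blockchain.BlockChain, tx *btcutil.Tx) error {
	txStore, err := chain.FetchTransactionStore(tx)
	if err != nil {
		return err
	}
	return blockchain.ValidateTransactionScripts(tx, txStore, txscript.ScriptBip16)
}
```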
|
||||
|
||||
// checkBlockScripts executes and validates the scripts for all transactions in
|
||||
// the passed block.
|
||||
func checkBlockScripts(block *btcutil.Block, txStore TxStore) error {
|
||||
// Setup the script validation flags. Blocks created after the BIP0016
|
||||
// activation time need to have the pay-to-script-hash checks enabled.
|
||||
var flags txscript.ScriptFlags
|
||||
if block.MsgBlock().Header.Timestamp.After(txscript.Bip16Activation) {
|
||||
flags |= txscript.ScriptBip16
|
||||
}
|
||||
|
||||
// Collect all of the transaction inputs and required information for
|
||||
// validation for all transactions in the block into a single slice.
|
||||
numInputs := 0
|
||||
for _, tx := range block.Transactions() {
|
||||
numInputs += len(tx.MsgTx().TxIn)
|
||||
}
|
||||
txValItems := make([]*txValidateItem, 0, numInputs)
|
||||
for _, tx := range block.Transactions() {
|
||||
for txInIdx, txIn := range tx.MsgTx().TxIn {
|
||||
// Skip coinbases.
|
||||
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
|
||||
continue
|
||||
}
|
||||
|
||||
txVI := &txValidateItem{
|
||||
txInIndex: txInIdx,
|
||||
txIn: txIn,
|
||||
tx: tx,
|
||||
}
|
||||
txValItems = append(txValItems, txVI)
|
||||
}
|
||||
}
|
||||
|
||||
// Validate all of the inputs.
|
||||
validator := newTxValidator(txStore, flags)
|
||||
if err := validator.Validate(txValItems); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
43
blockchain/scriptval_test.go
Normal file
@@ -0,0 +1,43 @@
// Copyright (c) 2013-2015 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"runtime"
|
||||
"testing"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
)
|
||||
|
||||
// TestCheckBlockScripts ensures that validating all of the scripts in a
|
||||
// known-good block doesn't return an error.
|
||||
func TestCheckBlockScripts(t *testing.T) {
|
||||
runtime.GOMAXPROCS(runtime.NumCPU())
|
||||
|
||||
testBlockNum := 277647
|
||||
blockDataFile := fmt.Sprintf("%d.dat.bz2", testBlockNum)
|
||||
blocks, err := loadBlocks(blockDataFile)
|
||||
if err != nil {
|
||||
t.Errorf("Error loading file: %v\n", err)
|
||||
return
|
||||
}
|
||||
if len(blocks) > 1 {
|
||||
t.Errorf("The test block file must only have one block in it")
|
||||
}
|
||||
|
||||
txStoreDataFile := fmt.Sprintf("%d.txstore.bz2", testBlockNum)
|
||||
txStore, err := loadTxStore(txStoreDataFile)
|
||||
if err != nil {
|
||||
t.Errorf("Error loading txstore: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := blockchain.TstCheckBlockScripts(blocks[0], txStore); err != nil {
|
||||
t.Errorf("Transaction script validation failed: %v\n",
|
||||
err)
|
||||
return
|
||||
}
|
||||
}
BIN
blockchain/testdata/277647.dat.bz2
vendored
Normal file
Binary file not shown.
BIN
blockchain/testdata/277647.txstore.bz2
vendored
Normal file
Binary file not shown.
BIN
blockchain/testdata/blk_0_to_4.dat.bz2
vendored
Normal file
Binary file not shown.
BIN
blockchain/testdata/blk_3A.dat.bz2
vendored
Normal file
Binary file not shown.
BIN
blockchain/testdata/blk_4A.dat.bz2
vendored
Normal file
Binary file not shown.
BIN
blockchain/testdata/blk_5A.dat.bz2
vendored
Normal file
Binary file not shown.
180
blockchain/testdata/reorgtest.hex
vendored
Normal file
@@ -0,0 +1,180 @@
File path: reorgTest/blk_0_to_4.dat
|
||||
|
||||
Block 0:
|
||||
f9beb4d9
|
||||
1d010000
|
||||
|
||||
01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 3ba3edfd 7a7b12b2 7ac72c3e 67768f61 7fc81bc3 888a5132 3a9fb8aa
|
||||
4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
|
||||
01
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff4d04ff ff001d01 04455468 65205469 6d657320 30332f4a
|
||||
616e2f32 30303920 4368616e 63656c6c 6f72206f 6e206272 696e6b20 6f662073
|
||||
65636f6e 64206261 696c6f75 7420666f 72206261 6e6b73ff ffffff01 00f2052a
|
||||
01000000 43410467 8afdb0fe 55482719 67f1a671 30b7105c d6a828e0 3909a679
|
||||
62e0ea1f 61deb649 f6bc3f4c ef38c4f3 5504e51e c112de5c 384df7ba 0b8d578a
|
||||
4c702b6b f11d5fac 00000000
|
||||
Block 1:
|
||||
f9beb4d9
|
||||
d4000000
|
||||
|
||||
01000000 6fe28c0a b6f1b372 c1a6a246 ae63f74f 931e8365 e15a089c 68d61900
|
||||
00000000 3bbd67ad e98fbbb7 0718cd80 f9e9acf9 3b5fae91 7bb2b41d 4c3bb82c
|
||||
77725ca5 81ad5f49 ffff001d 44e69904
|
||||
01
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff04722f 2e2bffff ffff0100 f2052a01 00000043 41046868
|
||||
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
|
||||
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
|
||||
000000
|
||||
Block 2:
|
||||
f9beb4d9
|
||||
95010000
|
||||
|
||||
01000000 13ca7940 4c11c63e ca906bbd f190b751 2872b857 1b5143ae e8cb5737
|
||||
00000000 fc07c983 d7391736 0aeda657 29d0d4d3 2533eb84 76ee9d64 aa27538f
|
||||
9b4fc00a d9af5f49 ffff001d 630bea22
|
||||
02
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff04eb96 14e5ffff ffff0100 f2052a01 00000043 41046868
|
||||
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
|
||||
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
|
||||
000000
|
||||
|
||||
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
|
||||
4fdcb8ee d2000000 004a4930 46022100 3dde52c6 5e339f45 7fe1015e 70eed208
|
||||
872eb71e dd484c07 206b190e cb2ec3f8 02210011 c78dcfd0 3d43fa63 61242a33
|
||||
6291ba2a 8c1ef5bc d5472126 2468f2bf 8dee4d01 ffffffff 0200ca9a 3b000000
|
||||
001976a9 14cb2abd e8bccacc 32e893df 3a054b9e f7f227a4 ce88ac00 286bee00
|
||||
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
|
||||
00
|
||||
Block 3:
|
||||
f9beb4d9
|
||||
96020000
|
||||
|
||||
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
|
||||
00000000 4806fe80 bf85931b 882ea645 77ca5a03 22bb8af2 3f277b20 55f160cd
|
||||
972c8e8b 31b25f49 ffff001d e8f0c653
|
||||
03
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff044abd 8159ffff ffff0100 f2052a01 00000043 4104b95c
|
||||
249d84f4 17e3e395 a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c
|
||||
a5e56c90 f340988d 3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ac00
|
||||
000000
|
||||
|
||||
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
|
||||
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
|
||||
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
|
||||
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
|
||||
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
|
||||
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
|
||||
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
|
||||
00
|
||||
|
||||
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
|
||||
bad253d3 77000000 008c4930 46022100 b08b922a c4bde411 1c229f92 9fe6eb6a
|
||||
50161f98 1f4cf47e a9214d35 bf74d380 022100d2 f6640327 e677a1e1 cc474991
|
||||
b9a48ba5 bd1e0c94 d1c8df49 f7b0193b 7ea4fa01 4104b95c 249d84f4 17e3e395
|
||||
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
|
||||
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
|
||||
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
|
||||
00
|
||||
|
||||
Block 4:
|
||||
f9beb4d9
|
||||
73010000
|
||||
|
||||
01000000 5da36499 06f35e09 9be42a1d 87b6dd42 11bc1400 6c220694 0807eaae
|
||||
00000000 48eeeaed 2d9d8522 e6201173 743823fd 4b87cd8a ca8e6408 ec75ca38
|
||||
302c2ff0 89b45f49 ffff001d 00530839
|
||||
02
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff04d41d 2213ffff ffff0100 f2052a01 00000043 4104678a
|
||||
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
|
||||
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
|
||||
000000
|
||||
|
||||
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
|
||||
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
|
||||
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
|
||||
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
|
||||
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000
|
||||
|
||||
File path: reorgTest/blk_3A.dat
|
||||
Block 3A:
|
||||
f9beb4d9
|
||||
96020000
|
||||
|
||||
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
|
||||
00000000 5a15f573 1177a353 bdca7aab 20e16624 dfe90adc 70accadc 68016732
|
||||
302c20a7 31b25f49 ffff001d 6a901440
|
||||
03
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff04ad1b e7d5ffff ffff0100 f2052a01 00000043 4104ed83
|
||||
704c95d8 29046f1a c2780621 1132102c 34e9ac7f fa1b7111 0658e5b9 d1bdedc4
|
||||
16f5cefc 1db0625c d0c75de8 192d2b59 2d7e3b00 bcfb4a0e 860d880f d1fcac00
|
||||
000000
|
||||
|
||||
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
|
||||
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
|
||||
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
|
||||
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
|
||||
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
|
||||
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
|
||||
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
|
||||
00
|
||||
|
||||
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
|
||||
bad253d3 77000000 008c4930 46022100 9cc67ddd aa6f592a 6b2babd4 d6ff954f
|
||||
25a784cf 4fe4bb13 afb9f49b 08955119 022100a2 d99545b7 94080757 fcf2b563
|
||||
f2e91287 86332f46 0ec6b90f f085fb28 41a69701 4104b95c 249d84f4 17e3e395
|
||||
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
|
||||
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
|
||||
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
|
||||
00
|
||||
|
||||
File path: reorgTest/blk_4A.dat
|
||||
Block 4A:
|
||||
f9beb4d9
|
||||
d4000000
|
||||
|
||||
01000000 aae77468 2205667d 4f413a58 47cc8fe8 9795f1d5 645d5b24 1daf3c92
|
||||
00000000 361c9cde a09637a0 d0c05c3b 4e7a5d91 9edb184a 0a4c7633 d92e2ddd
|
||||
f04cb854 89b45f49 ffff001d 9e9aa1e8
|
||||
01
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff0401b8 f3eaffff ffff0100 f2052a01 00000043 4104678a
|
||||
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
|
||||
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
|
||||
000000
|
||||
|
||||
File path: reorgTest/blk_5A.dat
|
||||
Block 5A:
|
||||
f9beb4d9
|
||||
73010000
|
||||
|
||||
01000000 ebc7d0de 9c31a71b 7f41d275 2c080ba4 11e1854b d45cb2cf 8c1e4624
|
||||
00000000 a607774b 79b8eb50 b52a5a32 c1754281 ec67f626 9561df28 57d1fe6a
|
||||
ea82c696 e1b65f49 ffff001d 4a263577
|
||||
02
|
||||
|
||||
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
|
||||
00000000 00ffffff ff049971 0c7dffff ffff0100 f2052a01 00000043 4104678a
|
||||
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
|
||||
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
|
||||
000000
|
||||
|
||||
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
|
||||
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
|
||||
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
|
||||
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
|
||||
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000
|
||||
31
blockchain/timesorter.go
Normal file
@@ -0,0 +1,31 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package blockchain

import (
	"time"
)

// timeSorter implements sort.Interface to allow a slice of timestamps to
// be sorted.
type timeSorter []time.Time

// Len returns the number of timestamps in the slice. It is part of the
// sort.Interface implementation.
func (s timeSorter) Len() int {
	return len(s)
}

// Swap swaps the timestamps at the passed indices. It is part of the
// sort.Interface implementation.
func (s timeSorter) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

// Less returns whether the timestamp with index i should sort before the
// timestamp with index j. It is part of the sort.Interface implementation.
func (s timeSorter) Less(i, j int) bool {
	return s[i].Before(s[j])
}
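Since timeSorter is unexported, external code cannot use it directly (the tests go through TstTimeSorter). The sketch below mirrors the type outside the package purely to show how sort.Sort drives the three methods, ending with the median lookup the timestamp rules care about:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// timeSlice mirrors the unexported timeSorter type for illustration.
type timeSlice []time.Time

func (s timeSlice) Len() int           { return len(s) }
func (s timeSlice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s timeSlice) Less(i, j int) bool { return s[i].Before(s[j]) }

func main() {
	stamps := timeSlice{
		time.Unix(1351228575, 0),
		time.Unix(1305758502, 0),
		time.Unix(1348310759, 0),
	}
	sort.Sort(stamps)
	// The middle element of the sorted slice is the median timestamp used
	// when validating a new block's time against recent blocks.
	fmt.Println("median:", stamps[len(stamps)/2])
}
```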
52
blockchain/timesorter_test.go
Normal file
@@ -0,0 +1,52 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
)
|
||||
|
||||
// TestTimeSorter tests the timeSorter implementation.
|
||||
func TestTimeSorter(t *testing.T) {
|
||||
tests := []struct {
|
||||
in []time.Time
|
||||
want []time.Time
|
||||
}{
|
||||
{
|
||||
in: []time.Time{
|
||||
time.Unix(1351228575, 0), // Fri Oct 26 05:16:15 UTC 2012 (Block #205000)
|
||||
time.Unix(1351228575, 1), // Fri Oct 26 05:16:15 UTC 2012 (+1 nanosecond)
|
||||
time.Unix(1348310759, 0), // Sat Sep 22 10:45:59 UTC 2012 (Block #200000)
|
||||
time.Unix(1305758502, 0), // Wed May 18 22:41:42 UTC 2011 (Block #125000)
|
||||
time.Unix(1347777156, 0), // Sun Sep 16 06:32:36 UTC 2012 (Block #199000)
|
||||
time.Unix(1349492104, 0), // Sat Oct 6 02:55:04 UTC 2012 (Block #202000)
|
||||
},
|
||||
want: []time.Time{
|
||||
time.Unix(1305758502, 0), // Wed May 18 22:41:42 UTC 2011 (Block #125000)
|
||||
time.Unix(1347777156, 0), // Sun Sep 16 06:32:36 UTC 2012 (Block #199000)
|
||||
time.Unix(1348310759, 0), // Sat Sep 22 10:45:59 UTC 2012 (Block #200000)
|
||||
time.Unix(1349492104, 0), // Sat Oct 6 02:55:04 UTC 2012 (Block #202000)
|
||||
time.Unix(1351228575, 0), // Fri Oct 26 05:16:15 UTC 2012 (Block #205000)
|
||||
time.Unix(1351228575, 1), // Fri Oct 26 05:16:15 UTC 2012 (+1 nanosecond)
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for i, test := range tests {
|
||||
result := make([]time.Time, len(test.in))
|
||||
copy(result, test.in)
|
||||
sort.Sort(blockchain.TstTimeSorter(result))
|
||||
if !reflect.DeepEqual(result, test.want) {
|
||||
t.Errorf("timeSorter #%d got %v want %v", i, result,
|
||||
test.want)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
318
blockchain/txlookup.go
Normal file
@@ -0,0 +1,318 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/btcsuite/btcd/database"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// TxData contains contextual information about transactions such as which block
|
||||
// they were found in and whether or not the outputs are spent.
|
||||
type TxData struct {
|
||||
Tx *btcutil.Tx
|
||||
Hash *btcwire.ShaHash
|
||||
BlockHeight int64
|
||||
Spent []bool
|
||||
Err error
|
||||
}
|
||||
|
||||
// TxStore is used to store transactions needed by other transactions for things
|
||||
// such as script validation and double spend prevention. This also allows the
|
||||
// transaction data to be treated as a view since it can contain the information
|
||||
// from the point-of-view of different points in the chain.
|
||||
type TxStore map[btcwire.ShaHash]*TxData
|
||||
|
||||
// connectTransactions updates the passed map by applying transaction and
|
||||
// spend information for all the transactions in the passed block. Only
|
||||
// transactions in the passed map are updated.
|
||||
func connectTransactions(txStore TxStore, block *btcutil.Block) error {
|
||||
// Loop through all of the transactions in the block to see if any of
|
||||
// them are ones we need to update and spend based on the results map.
|
||||
for _, tx := range block.Transactions() {
|
||||
// Update the transaction store with the transaction information
|
||||
// if it's one of the requested transactions.
|
||||
msgTx := tx.MsgTx()
|
||||
if txD, exists := txStore[*tx.Sha()]; exists {
|
||||
txD.Tx = tx
|
||||
txD.BlockHeight = block.Height()
|
||||
txD.Spent = make([]bool, len(msgTx.TxOut))
|
||||
txD.Err = nil
|
||||
}
|
||||
|
||||
// Spend the origin transaction output.
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
originHash := &txIn.PreviousOutPoint.Hash
|
||||
originIndex := txIn.PreviousOutPoint.Index
|
||||
if originTx, exists := txStore[*originHash]; exists {
|
||||
if originIndex >= uint32(len(originTx.Spent)) {
|
||||
continue
|
||||
}
|
||||
originTx.Spent[originIndex] = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// disconnectTransactions updates the passed map by undoing transaction and
|
||||
// spend information for all transactions in the passed block. Only
|
||||
// transactions in the passed map are updated.
|
||||
func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
|
||||
// Loop through all of the transactions in the block to see if any of
|
||||
// them are ones that need to be undone based on the transaction store.
|
||||
for _, tx := range block.Transactions() {
|
||||
// Clear this transaction from the transaction store if needed.
|
||||
// Only clear it rather than deleting it because the transaction
|
||||
// connect code relies on its presence to decide whether or not
|
||||
// to update the store and any transactions which exist on both
|
||||
// sides of a fork would otherwise not be updated.
|
||||
if txD, exists := txStore[*tx.Sha()]; exists {
|
||||
txD.Tx = nil
|
||||
txD.BlockHeight = 0
|
||||
txD.Spent = nil
|
||||
txD.Err = database.ErrTxShaMissing
|
||||
}
|
||||
|
||||
// Unspend the origin transaction output.
|
||||
for _, txIn := range tx.MsgTx().TxIn {
|
||||
originHash := &txIn.PreviousOutPoint.Hash
|
||||
originIndex := txIn.PreviousOutPoint.Index
|
||||
originTx, exists := txStore[*originHash]
|
||||
if exists && originTx.Tx != nil && originTx.Err == nil {
|
||||
if originIndex >= uint32(len(originTx.Spent)) {
|
||||
continue
|
||||
}
|
||||
originTx.Spent[originIndex] = false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// fetchTxStoreMain fetches transaction data about the provided set of
|
||||
// transactions from the point of view of the end of the main chain. It takes
|
||||
// a flag which specifies whether or not fully spent transactions should be
|
||||
// included in the results.
|
||||
func fetchTxStoreMain(db database.Db, txSet map[btcwire.ShaHash]struct{}, includeSpent bool) TxStore {
|
||||
// Just return an empty store now if there are no requested hashes.
|
||||
txStore := make(TxStore)
|
||||
if len(txSet) == 0 {
|
||||
return txStore
|
||||
}
|
||||
|
||||
// The transaction store map needs to have an entry for every requested
|
||||
// transaction. By default, all the transactions are marked as missing.
|
||||
// Each entry will be filled in with the appropriate data below.
|
||||
txList := make([]*btcwire.ShaHash, 0, len(txSet))
|
||||
for hash := range txSet {
|
||||
hashCopy := hash
|
||||
txStore[hash] = &TxData{Hash: &hashCopy, Err: database.ErrTxShaMissing}
|
||||
txList = append(txList, &hashCopy)
|
||||
}
|
||||
|
||||
// Ask the database (main chain) for the list of transactions. This
|
||||
// will return the information from the point of view of the end of the
|
||||
// main chain. Choose whether or not to include fully spent
|
||||
// transactions depending on the passed flag.
|
||||
fetchFunc := db.FetchUnSpentTxByShaList
|
||||
if includeSpent {
|
||||
fetchFunc = db.FetchTxByShaList
|
||||
}
|
||||
txReplyList := fetchFunc(txList)
|
||||
for _, txReply := range txReplyList {
|
||||
// Lookup the existing results entry to modify. Skip
|
||||
// this reply if there is no corresponding entry in
|
||||
// the transaction store map which really should not happen, but
|
||||
// be safe.
|
||||
txD, ok := txStore[*txReply.Sha]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
// Fill in the transaction details. A copy is used here since
|
||||
// there is no guarantee the returned data isn't cached and
|
||||
// this code modifies the data. A bug caused by modifying the
|
||||
// cached data would likely be difficult to track down and could
|
||||
// cause subtle errors, so avoid the potential altogether.
|
||||
txD.Err = txReply.Err
|
||||
if txReply.Err == nil {
|
||||
txD.Tx = btcutil.NewTx(txReply.Tx)
|
||||
txD.BlockHeight = txReply.Height
|
||||
txD.Spent = make([]bool, len(txReply.TxSpent))
|
||||
copy(txD.Spent, txReply.TxSpent)
|
||||
}
|
||||
}
|
||||
|
||||
return txStore
|
||||
}
|
||||
|
||||
// fetchTxStore fetches transaction data about the provided set of transactions
|
||||
// from the point of view of the given node. For example, a given node might
|
||||
// be down a side chain where a transaction hasn't been spent from its point of
|
||||
// view even though it might have been spent in the main chain (or another side
|
||||
// chain). Another scenario is where a transaction exists from the point of
|
||||
// view of the main chain, but doesn't exist in a side chain that branches
|
||||
// before the block that contains the transaction on the main chain.
|
||||
func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[btcwire.ShaHash]struct{}) (TxStore, error) {
|
||||
// Get the previous block node. This function is used over simply
|
||||
// accessing node.parent directly as it will dynamically create previous
|
||||
// block nodes as needed. This helps allow only the pieces of the chain
|
||||
// that are needed to remain in memory.
|
||||
prevNode, err := b.getPrevNodeFromNode(node)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// If we haven't selected a best chain yet or we are extending the main
|
||||
// (best) chain with a new block, fetch the requested set from the point
|
||||
// of view of the end of the main (best) chain without including fully
|
||||
// spent transactions in the results. This is a little more efficient
|
||||
// since it means less transaction lookups are needed.
|
||||
if b.bestChain == nil || (prevNode != nil && prevNode.hash.IsEqual(b.bestChain.hash)) {
|
||||
txStore := fetchTxStoreMain(b.db, txSet, false)
|
||||
return txStore, nil
|
||||
}
|
||||
|
||||
// Fetch the requested set from the point of view of the end of the
|
||||
// main (best) chain including fully spent transactions. The fully
|
||||
// spent transactions are needed because the following code unspends
|
||||
// them to get the correct point of view.
|
||||
txStore := fetchTxStoreMain(b.db, txSet, true)
|
||||
|
||||
// The requested node is either on a side chain or is a node on the main
|
||||
// chain before the end of it. In either case, we need to undo the
|
||||
// transactions and spend information for the blocks which would be
|
||||
// disconnected during a reorganize to the point of view of the
|
||||
// node just before the requested node.
|
||||
detachNodes, attachNodes := b.getReorganizeNodes(prevNode)
|
||||
for e := detachNodes.Front(); e != nil; e = e.Next() {
|
||||
n := e.Value.(*blockNode)
|
||||
block, err := b.db.FetchBlockBySha(n.hash)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
disconnectTransactions(txStore, block)
|
||||
}
|
||||
|
||||
// The transaction store is now accurate to either the node where the
|
||||
// requested node forks off the main chain (in the case where the
|
||||
// requested node is on a side chain), or the requested node itself if
|
||||
// the requested node is an old node on the main chain. Entries in the
|
||||
// attachNodes list indicate the requested node is on a side chain, so
|
||||
// if there are no nodes to attach, we're done.
|
||||
if attachNodes.Len() == 0 {
|
||||
return txStore, nil
|
||||
}
|
||||
|
||||
// The requested node is on a side chain, so we need to apply the
|
||||
// transactions and spend information from each of the nodes to attach.
|
||||
for e := attachNodes.Front(); e != nil; e = e.Next() {
|
||||
n := e.Value.(*blockNode)
|
||||
block, exists := b.blockCache[*n.hash]
|
||||
if !exists {
|
||||
return nil, fmt.Errorf("unable to find block %v in "+
|
||||
"side chain cache for transaction search",
|
||||
n.hash)
|
||||
}
|
||||
|
||||
connectTransactions(txStore, block)
|
||||
}
|
||||
|
||||
return txStore, nil
|
||||
}
|
||||
|
||||
// fetchInputTransactions fetches the input transactions referenced by the
|
||||
// transactions in the given block from its point of view. See fetchTxList
|
||||
// for more details on what the point of view entails.
|
||||
func (b *BlockChain) fetchInputTransactions(node *blockNode, block *btcutil.Block) (TxStore, error) {
|
||||
// Build a map of in-flight transactions because some of the inputs in
|
||||
// this block could be referencing other transactions earlier in this
|
||||
// block which are not yet in the chain.
|
||||
txInFlight := map[btcwire.ShaHash]int{}
|
||||
transactions := block.Transactions()
|
||||
for i, tx := range transactions {
|
||||
txInFlight[*tx.Sha()] = i
|
||||
}
|
||||
|
||||
// Loop through all of the transaction inputs (except for the coinbase
|
||||
// which has no inputs) collecting them into sets of what is needed and
|
||||
// what is already known (in-flight).
|
||||
txNeededSet := make(map[btcwire.ShaHash]struct{})
|
||||
txStore := make(TxStore)
|
||||
for i, tx := range transactions[1:] {
|
||||
for _, txIn := range tx.MsgTx().TxIn {
|
||||
// Add an entry to the transaction store for the needed
|
||||
// transaction with it set to missing by default.
|
||||
originHash := &txIn.PreviousOutPoint.Hash
|
||||
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
|
||||
txStore[*originHash] = txD
|
||||
|
||||
// It is acceptable for a transaction input to reference
|
||||
// the output of another transaction in this block only
|
||||
// if the referenced transaction comes before the
|
||||
// current one in this block. Update the transaction
|
||||
// store accordingly when this is the case. Otherwise,
|
||||
// we still need the transaction.
|
||||
//
|
||||
// NOTE: The >= is correct here because i is one less
|
||||
// than the actual position of the transaction within
|
||||
// the block due to skipping the coinbase.
|
||||
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
|
||||
i >= inFlightIndex {
|
||||
|
||||
originTx := transactions[inFlightIndex]
|
||||
txD.Tx = originTx
|
||||
txD.BlockHeight = node.height
|
||||
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
|
||||
txD.Err = nil
|
||||
} else {
|
||||
txNeededSet[*originHash] = struct{}{}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Request the input transactions from the point of view of the node.
|
||||
txNeededStore, err := b.fetchTxStore(node, txNeededSet)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Merge the results of the requested transactions and the in-flight
|
||||
// transactions.
|
||||
for _, txD := range txNeededStore {
|
||||
txStore[*txD.Hash] = txD
|
||||
}
|
||||
|
||||
return txStore, nil
|
||||
}
|
||||
|
||||
// FetchTransactionStore fetches the input transactions referenced by the
|
||||
// passed transaction from the point of view of the end of the main chain. It
|
||||
// also attempts to fetch the transaction itself so the returned TxStore can be
|
||||
// examined for duplicate transactions.
|
||||
func (b *BlockChain) FetchTransactionStore(tx *btcutil.Tx) (TxStore, error) {
|
||||
// Create a set of needed transactions from the transactions referenced
|
||||
// by the inputs of the passed transaction. Also, add the passed
|
||||
// transaction itself as a way for the caller to detect duplicates.
|
||||
txNeededSet := make(map[btcwire.ShaHash]struct{})
|
||||
txNeededSet[*tx.Sha()] = struct{}{}
|
||||
for _, txIn := range tx.MsgTx().TxIn {
|
||||
txNeededSet[txIn.PreviousOutPoint.Hash] = struct{}{}
|
||||
}
|
||||
|
||||
// Request the input transactions from the point of view of the end of
|
||||
// the main chain without including fully spent transactions in the
|
||||
// results. Fully spent transactions are only needed for chain
|
||||
// reorganization which does not apply here.
|
||||
txStore := fetchTxStoreMain(b.db, txNeededSet, false)
|
||||
return txStore, nil
|
||||
}
951
blockchain/validate.go
Normal file
@@ -0,0 +1,951 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"math"
|
||||
"math/big"
|
||||
"time"
|
||||
|
||||
"github.com/btcsuite/btcd/database"
|
||||
"github.com/btcsuite/btcd/txscript"
|
||||
"github.com/btcsuite/btcnet"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
const (
|
||||
// MaxSigOpsPerBlock is the maximum number of signature operations
|
||||
// allowed for a block. It is a fraction of the max block payload size.
|
||||
MaxSigOpsPerBlock = btcwire.MaxBlockPayload / 50
|
||||
|
||||
// lockTimeThreshold is the number below which a lock time is
|
||||
// interpreted to be a block number. Since an average of one block
|
||||
// is generated per 10 minutes, this allows blocks for about 9,512
|
||||
// years. However, if the field is interpreted as a timestamp, given
|
||||
// the lock time is a uint32, the max is sometime around 2106.
|
||||
lockTimeThreshold uint32 = 5e8 // Tue Nov 5 00:53:20 1985 UTC
|
||||
|
||||
// MaxTimeOffsetSeconds is the maximum number of seconds a block time
|
||||
// is allowed to be ahead of the current time. This is currently 2
|
||||
// hours.
|
||||
MaxTimeOffsetSeconds = 2 * 60 * 60
|
||||
|
||||
// MinCoinbaseScriptLen is the minimum length a coinbase script can be.
|
||||
MinCoinbaseScriptLen = 2
|
||||
|
||||
// MaxCoinbaseScriptLen is the maximum length a coinbase script can be.
|
||||
MaxCoinbaseScriptLen = 100
|
||||
|
||||
// medianTimeBlocks is the number of previous blocks which should be
|
||||
// used to calculate the median time used to validate block timestamps.
|
||||
medianTimeBlocks = 11
|
||||
|
||||
// serializedHeightVersion is the block version which changed block
|
||||
// coinbases to start with the serialized block height.
|
||||
serializedHeightVersion = 2
|
||||
|
||||
// baseSubsidy is the starting subsidy amount for mined blocks. This
|
||||
// value is halved every SubsidyHalvingInterval blocks.
|
||||
baseSubsidy = 50 * btcutil.SatoshiPerBitcoin
|
||||
|
||||
// CoinbaseMaturity is the number of blocks required before newly
|
||||
// mined bitcoins (coinbase transactions) can be spent.
|
||||
CoinbaseMaturity = 100
|
||||
)
|
||||
|
||||
var (
|
||||
// coinbaseMaturity is the internal variable used for validating the
|
||||
// spending of coinbase outputs. A variable rather than the exported
|
||||
// constant is used because the tests need the ability to modify it.
|
||||
coinbaseMaturity = int64(CoinbaseMaturity)
|
||||
|
||||
// zeroHash is the zero value for a btcwire.ShaHash and is defined as
|
||||
// a package level variable to avoid the need to create a new instance
|
||||
// every time a check is needed.
|
||||
zeroHash = &btcwire.ShaHash{}
|
||||
|
||||
// block91842Hash is one of the two nodes which violate the rules
|
||||
// set forth in BIP0030. It is defined as a package level variable to
|
||||
// avoid the need to create a new instance every time a check is needed.
|
||||
block91842Hash = newShaHashFromStr("00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")
|
||||
|
||||
// block91880Hash is one of the two nodes which violate the rules
|
||||
// set forth in BIP0030. It is defined as a package level variable to
|
||||
// avoid the need to create a new instance every time a check is needed.
|
||||
block91880Hash = newShaHashFromStr("00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721")
|
||||
)
|
||||
|
||||
// isNullOutpoint determines whether or not a previous transaction output point
|
||||
// is set.
|
||||
func isNullOutpoint(outpoint *btcwire.OutPoint) bool {
|
||||
if outpoint.Index == math.MaxUint32 && outpoint.Hash.IsEqual(zeroHash) {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsCoinBase determines whether or not a transaction is a coinbase. A coinbase
|
||||
// is a special transaction created by miners that has no inputs. This is
|
||||
// represented in the block chain by a transaction with a single input that has
|
||||
// a previous output transaction index set to the maximum value along with a
|
||||
// zero hash.
|
||||
func IsCoinBase(tx *btcutil.Tx) bool {
|
||||
msgTx := tx.MsgTx()
|
||||
|
||||
// A coin base must only have one transaction input.
|
||||
if len(msgTx.TxIn) != 1 {
|
||||
return false
|
||||
}
|
||||
|
||||
// The previous output of a coin base must have a max value index and
|
||||
// a zero hash.
|
||||
prevOut := msgTx.TxIn[0].PreviousOutPoint
|
||||
if prevOut.Index != math.MaxUint32 || !prevOut.Hash.IsEqual(zeroHash) {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
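A transaction satisfies the check above when its single input spends the null outpoint (zero hash with the maximum index). A small sketch constructing such a transaction by hand; the script bytes and output value are placeholders, not a real coinbase:

```go
package main

import (
	"fmt"
	"math"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcutil"
	"github.com/btcsuite/btcwire"
)

func main() {
	// One input whose previous outpoint is the zero hash with the maximum
	// index is exactly what IsCoinBase looks for.
	msgTx := &btcwire.MsgTx{
		Version: 1,
		TxIn: []*btcwire.TxIn{{
			PreviousOutPoint: btcwire.OutPoint{
				Hash:  btcwire.ShaHash{}, // zero hash
				Index: math.MaxUint32,
			},
			SignatureScript: []byte{0x04, 0xff, 0xff, 0x00, 0x1d},
			Sequence:        math.MaxUint32,
		}},
		TxOut: []*btcwire.TxOut{{
			Value:    50 * btcutil.SatoshiPerBitcoin,
			PkScript: []byte{},
		}},
	}
	fmt.Println("coinbase:", blockchain.IsCoinBase(btcutil.NewTx(msgTx)))
}
```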
|
||||
|
||||
// IsFinalizedTransaction determines whether or not a transaction is finalized.
|
||||
func IsFinalizedTransaction(tx *btcutil.Tx, blockHeight int64, blockTime time.Time) bool {
|
||||
msgTx := tx.MsgTx()
|
||||
|
||||
// Lock time of zero means the transaction is finalized.
|
||||
lockTime := msgTx.LockTime
|
||||
if lockTime == 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
// The lock time field of a transaction is either a block height at
|
||||
// which the transaction is finalized or a timestamp depending on if the
|
||||
// value is before the lockTimeThreshold. When it is under the
|
||||
// threshold it is a block height.
|
||||
blockTimeOrHeight := int64(0)
|
||||
if lockTime < lockTimeThreshold {
|
||||
blockTimeOrHeight = blockHeight
|
||||
} else {
|
||||
blockTimeOrHeight = blockTime.Unix()
|
||||
}
|
||||
if int64(lockTime) < blockTimeOrHeight {
|
||||
return true
|
||||
}
|
||||
|
||||
// At this point, the transaction's lock time hasn't occurred yet, but
|
||||
// the transaction might still be finalized if the sequence number
|
||||
// for all transaction inputs is maxed out.
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
if txIn.Sequence != math.MaxUint32 {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
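To make the height-versus-timestamp split concrete: a lock time below the threshold is compared against the block height, and the comparison is strict, so a transaction locked to a height is not yet final in the block at that exact height. A small sketch under that reading; the sequence number is left below the maximum so the lock time actually applies.

```go
package main

import (
	"fmt"
	"time"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcutil"
	"github.com/btcsuite/btcwire"
)

func main() {
	// Lock time 350000 is below the 500,000,000 threshold, so it is read
	// as a block height rather than a Unix timestamp.
	msgTx := &btcwire.MsgTx{
		Version:  1,
		LockTime: 350000,
		TxIn:     []*btcwire.TxIn{{Sequence: 0}},
		TxOut:    []*btcwire.TxOut{{Value: 1000}},
	}
	tx := btcutil.NewTx(msgTx)

	now := time.Now()
	fmt.Println(blockchain.IsFinalizedTransaction(tx, 349999, now)) // false
	fmt.Println(blockchain.IsFinalizedTransaction(tx, 350001, now)) // true
}
```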
|
||||
|
||||
// isBIP0030Node returns whether or not the passed node represents one of the
|
||||
// two blocks that violate the BIP0030 rule which prevents transactions from
|
||||
// overwriting old ones.
|
||||
func isBIP0030Node(node *blockNode) bool {
|
||||
if node.height == 91842 && node.hash.IsEqual(block91842Hash) {
|
||||
return true
|
||||
}
|
||||
|
||||
if node.height == 91880 && node.hash.IsEqual(block91880Hash) {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// CalcBlockSubsidy returns the subsidy amount a block at the provided height
|
||||
// should have. This is mainly used for determining how much the coinbase for
|
||||
// newly generated blocks awards as well as validating the coinbase for blocks
|
||||
// has the expected value.
|
||||
//
|
||||
// The subsidy is halved every SubsidyHalvingInterval blocks. Mathematically
|
||||
// this is: baseSubsidy / 2^(height/subsidyHalvingInterval)
|
||||
//
|
||||
// At the target block generation rate for the main network, this is
|
||||
// approximately every 4 years.
|
||||
func CalcBlockSubsidy(height int64, netParams *btcnet.Params) int64 {
|
||||
if netParams.SubsidyHalvingInterval == 0 {
|
||||
return baseSubsidy
|
||||
}
|
||||
|
||||
// Equivalent to: baseSubsidy / 2^(height/subsidyHalvingInterval)
|
||||
return baseSubsidy >> uint(height/int64(netParams.SubsidyHalvingInterval))
|
||||
}
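With the main network's 210,000 block halving interval the formula reduces to baseSubsidy >> (height / 210000): 50 BTC for the first interval, then 25, then 12.5, and so on. A quick check, assuming the btcnet main network parameters from the package this file already imports:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/blockchain"
	"github.com/btcsuite/btcnet"
)

func main() {
	// Subsidy at the boundaries of the first few halving intervals.
	for _, height := range []int64{0, 209999, 210000, 420000} {
		subsidy := blockchain.CalcBlockSubsidy(height, &btcnet.MainNetParams)
		fmt.Printf("height %6d: %d satoshi\n", height, subsidy)
	}
}
```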
|
||||
|
||||
// CheckTransactionSanity performs some preliminary checks on a transaction to
|
||||
// ensure it is sane. These checks are context free.
|
||||
func CheckTransactionSanity(tx *btcutil.Tx) error {
|
||||
// A transaction must have at least one input.
|
||||
msgTx := tx.MsgTx()
|
||||
if len(msgTx.TxIn) == 0 {
|
||||
return ruleError(ErrNoTxInputs, "transaction has no inputs")
|
||||
}
|
||||
|
||||
// A transaction must have at least one output.
|
||||
if len(msgTx.TxOut) == 0 {
|
||||
return ruleError(ErrNoTxOutputs, "transaction has no outputs")
|
||||
}
|
||||
|
||||
// A transaction must not exceed the maximum allowed block payload when
|
||||
// serialized.
|
||||
serializedTxSize := tx.MsgTx().SerializeSize()
|
||||
if serializedTxSize > btcwire.MaxBlockPayload {
|
||||
str := fmt.Sprintf("serialized transaction is too big - got "+
|
||||
"%d, max %d", serializedTxSize, btcwire.MaxBlockPayload)
|
||||
return ruleError(ErrTxTooBig, str)
|
||||
}
|
||||
|
||||
// Ensure the transaction amounts are in range. Each transaction
|
||||
// output must not be negative or more than the max allowed per
|
||||
// transaction. Also, the total of all outputs must abide by the same
|
||||
// restrictions. All amounts in a transaction are in a unit value known
|
||||
// as a satoshi. One bitcoin is a quantity of satoshi as defined by the
|
||||
// SatoshiPerBitcoin constant.
|
||||
var totalSatoshi int64
|
||||
for _, txOut := range msgTx.TxOut {
|
||||
satoshi := txOut.Value
|
||||
if satoshi < 0 {
|
||||
str := fmt.Sprintf("transaction output has negative "+
|
||||
"value of %v", satoshi)
|
||||
return ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
if satoshi > btcutil.MaxSatoshi {
|
||||
str := fmt.Sprintf("transaction output value of %v is "+
|
||||
"higher than max allowed value of %v", satoshi,
|
||||
btcutil.MaxSatoshi)
|
||||
return ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
|
||||
// TODO(davec): No need to check < 0 here as satoshi is
|
||||
// guaranteed to be positive per the above check. Also need
|
||||
// to add overflow checks.
|
||||
totalSatoshi += satoshi
|
||||
if totalSatoshi < 0 {
|
||||
str := fmt.Sprintf("total value of all transaction "+
|
||||
"outputs has negative value of %v", totalSatoshi)
|
||||
return ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
if totalSatoshi > btcutil.MaxSatoshi {
|
||||
str := fmt.Sprintf("total value of all transaction "+
|
||||
"outputs is %v which is higher than max "+
|
||||
"allowed value of %v", totalSatoshi,
|
||||
btcutil.MaxSatoshi)
|
||||
return ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
}
|
||||
|
||||
// Check for duplicate transaction inputs.
|
||||
existingTxOut := make(map[btcwire.OutPoint]struct{})
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
if _, exists := existingTxOut[txIn.PreviousOutPoint]; exists {
|
||||
return ruleError(ErrDuplicateTxInputs, "transaction "+
|
||||
"contains duplicate inputs")
|
||||
}
|
||||
existingTxOut[txIn.PreviousOutPoint] = struct{}{}
|
||||
}
|
||||
|
||||
// Coinbase script length must be between min and max length.
|
||||
if IsCoinBase(tx) {
|
||||
slen := len(msgTx.TxIn[0].SignatureScript)
|
||||
if slen < MinCoinbaseScriptLen || slen > MaxCoinbaseScriptLen {
|
||||
str := fmt.Sprintf("coinbase transaction script length "+
|
||||
"of %d is out of range (min: %d, max: %d)",
|
||||
slen, MinCoinbaseScriptLen, MaxCoinbaseScriptLen)
|
||||
return ruleError(ErrBadCoinbaseScriptLen, str)
|
||||
}
|
||||
} else {
|
||||
// Previous transaction outputs referenced by the inputs to this
|
||||
// transaction must not be null.
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
prevOut := &txIn.PreviousOutPoint
|
||||
if isNullOutpoint(prevOut) {
|
||||
return ruleError(ErrBadTxInput, "transaction "+
|
||||
"input refers to previous output that "+
|
||||
"is null")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
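// A minimal usage sketch for the sanity check above; tx is a hypothetical
// *btcutil.Tx the caller has already deserialized:
//
//	if err := CheckTransactionSanity(tx); err != nil {
//		// The transaction violates a context-free rule, so reject it
//		// before doing any more expensive validation.
//		return err
//	}
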
// checkProofOfWork ensures the block header bits which indicate the target
|
||||
// difficulty is in min/max range and that the block hash is less than the
|
||||
// target difficulty as claimed.
//
// The flags modify the behavior of this function as follows:
|
||||
// - BFNoPoWCheck: The check to ensure the block hash is less than the target
|
||||
// difficulty is not performed.
|
||||
func checkProofOfWork(block *btcutil.Block, powLimit *big.Int, flags BehaviorFlags) error {
|
||||
// The target difficulty must be larger than zero.
|
||||
target := CompactToBig(block.MsgBlock().Header.Bits)
|
||||
if target.Sign() <= 0 {
|
||||
str := fmt.Sprintf("block target difficulty of %064x is too low",
|
||||
target)
|
||||
return ruleError(ErrUnexpectedDifficulty, str)
|
||||
}
|
||||
|
||||
// The target difficulty must be less than the maximum allowed.
|
||||
if target.Cmp(powLimit) > 0 {
|
||||
str := fmt.Sprintf("block target difficulty of %064x is "+
|
||||
"higher than max of %064x", target, powLimit)
|
||||
return ruleError(ErrUnexpectedDifficulty, str)
|
||||
}
|
||||
|
||||
// The block hash must be less than the claimed target unless the flag
|
||||
// to avoid proof of work checks is set.
|
||||
if flags&BFNoPoWCheck != BFNoPoWCheck {
|
||||
// The block hash must be less than the claimed target.
|
||||
blockHash, err := block.Sha()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
hashNum := ShaHashToBig(blockHash)
|
||||
if hashNum.Cmp(target) > 0 {
|
||||
str := fmt.Sprintf("block hash of %064x is higher than "+
|
||||
"expected max of %064x", hashNum, target)
|
||||
return ruleError(ErrHighHash, str)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckProofOfWork ensures the block header bits which indicate the target
// difficulty is in min/max range and that the block hash is less than the
// target difficulty as claimed.
func CheckProofOfWork(block *btcutil.Block, powLimit *big.Int) error {
	return checkProofOfWork(block, powLimit, BFNone)
}
|
||||
|
||||
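// A minimal usage sketch for the exported check; block is a hypothetical
// *btcutil.Block and the proof-of-work limit comes from the network parameters:
//
//	if err := CheckProofOfWork(block, btcnet.MainNetParams.PowLimit); err != nil {
//		return err
//	}
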
// CountSigOps returns the number of signature operations for all transaction
|
||||
// input and output scripts in the provided transaction. This uses the
|
||||
// quicker, but imprecise, signature operation counting mechanism from
|
||||
// txscript.
|
||||
func CountSigOps(tx *btcutil.Tx) int {
|
||||
msgTx := tx.MsgTx()
|
||||
|
||||
// Accumulate the number of signature operations in all transaction
|
||||
// inputs.
|
||||
totalSigOps := 0
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
numSigOps := txscript.GetSigOpCount(txIn.SignatureScript)
|
||||
totalSigOps += numSigOps
|
||||
}
|
||||
|
||||
// Accumulate the number of signature operations in all transaction
|
||||
// outputs.
|
||||
for _, txOut := range msgTx.TxOut {
|
||||
numSigOps := txscript.GetSigOpCount(txOut.PkScript)
|
||||
totalSigOps += numSigOps
|
||||
}
|
||||
|
||||
return totalSigOps
|
||||
}
|
||||
|
||||
// CountP2SHSigOps returns the number of signature operations for all input
|
||||
// transactions which are of the pay-to-script-hash type. This uses the
|
||||
// precise, signature operation counting mechanism from the script engine which
|
||||
// requires access to the input transaction scripts.
|
||||
func CountP2SHSigOps(tx *btcutil.Tx, isCoinBaseTx bool, txStore TxStore) (int, error) {
|
||||
// Coinbase transactions have no interesting inputs.
|
||||
if isCoinBaseTx {
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
// Accumulate the number of signature operations in all transaction
|
||||
// inputs.
|
||||
msgTx := tx.MsgTx()
|
||||
totalSigOps := 0
|
||||
for _, txIn := range msgTx.TxIn {
|
||||
// Ensure the referenced input transaction is available.
|
||||
txInHash := &txIn.PreviousOutPoint.Hash
|
||||
originTx, exists := txStore[*txInHash]
|
||||
if !exists || originTx.Err != nil || originTx.Tx == nil {
|
||||
str := fmt.Sprintf("unable to find input transaction "+
|
||||
"%v referenced from transaction %v", txInHash,
|
||||
tx.Sha())
|
||||
return 0, ruleError(ErrMissingTx, str)
|
||||
}
|
||||
originMsgTx := originTx.Tx.MsgTx()
|
||||
|
||||
// Ensure the output index in the referenced transaction is
|
||||
// available.
|
||||
originTxIndex := txIn.PreviousOutPoint.Index
|
||||
if originTxIndex >= uint32(len(originMsgTx.TxOut)) {
|
||||
str := fmt.Sprintf("out of bounds input index %d in "+
|
||||
"transaction %v referenced from transaction %v",
|
||||
originTxIndex, txInHash, tx.Sha())
|
||||
return 0, ruleError(ErrBadTxInput, str)
|
||||
}
|
||||
|
||||
// We're only interested in pay-to-script-hash types, so skip
|
||||
// this input if it's not one.
|
||||
pkScript := originMsgTx.TxOut[originTxIndex].PkScript
|
||||
if !txscript.IsPayToScriptHash(pkScript) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Count the precise number of signature operations in the
|
||||
// referenced public key script.
|
||||
sigScript := txIn.SignatureScript
|
||||
numSigOps := txscript.GetPreciseSigOpCount(sigScript, pkScript,
|
||||
true)
|
||||
|
||||
// We could potentially overflow the accumulator so check for
|
||||
// overflow.
|
||||
lastSigOps := totalSigOps
|
||||
totalSigOps += numSigOps
|
||||
if totalSigOps < lastSigOps {
|
||||
str := fmt.Sprintf("the public key script from "+
|
||||
"output index %d in transaction %v contains "+
|
||||
"too many signature operations - overflow",
|
||||
originTxIndex, txInHash)
|
||||
return 0, ruleError(ErrTooManySigOps, str)
|
||||
}
|
||||
}
|
||||
|
||||
return totalSigOps, nil
|
||||
}
|
||||
|
||||
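// A minimal sketch combining the quick and precise signature operation counts,
// similar to how checkConnectBlock below accumulates them; tx and txStore are
// hypothetical values supplied by the caller:
//
//	totalSigOps := CountSigOps(tx)
//	p2shSigOps, err := CountP2SHSigOps(tx, IsCoinBase(tx), txStore)
//	if err != nil {
//		return err
//	}
//	totalSigOps += p2shSigOps
//	if totalSigOps > MaxSigOpsPerBlock {
//		// Too many signature operations to fit in a single block.
//	}
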
// checkBlockSanity performs some preliminary checks on a block to ensure it is
|
||||
// sane before continuing with block processing. These checks are context free.
|
||||
//
|
||||
// The flags do not modify the behavior of this function directly; however, they
// are passed along to checkProofOfWork.
|
||||
func checkBlockSanity(block *btcutil.Block, powLimit *big.Int, timeSource MedianTimeSource, flags BehaviorFlags) error {
|
||||
// A block must have at least one transaction.
|
||||
msgBlock := block.MsgBlock()
|
||||
numTx := len(msgBlock.Transactions)
|
||||
if numTx == 0 {
|
||||
return ruleError(ErrNoTransactions, "block does not contain "+
|
||||
"any transactions")
|
||||
}
|
||||
|
||||
// A block must not have more transactions than the max block payload.
|
||||
if numTx > btcwire.MaxBlockPayload {
|
||||
str := fmt.Sprintf("block contains too many transactions - "+
|
||||
"got %d, max %d", numTx, btcwire.MaxBlockPayload)
|
||||
return ruleError(ErrTooManyTransactions, str)
|
||||
}
|
||||
|
||||
// A block must not exceed the maximum allowed block payload when
|
||||
// serialized.
|
||||
serializedSize := msgBlock.SerializeSize()
|
||||
if serializedSize > btcwire.MaxBlockPayload {
|
||||
str := fmt.Sprintf("serialized block is too big - got %d, "+
|
||||
"max %d", serializedSize, btcwire.MaxBlockPayload)
|
||||
return ruleError(ErrBlockTooBig, str)
|
||||
}
|
||||
|
||||
// Ensure the proof of work bits in the block header is in min/max range
|
||||
// and the block hash is less than the target value described by the
|
||||
// bits.
|
||||
err := checkProofOfWork(block, powLimit, flags)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// A block timestamp must not have a greater precision than one second.
|
||||
// This check is necessary because Go time.Time values support
|
||||
// nanosecond precision whereas the consensus rules only apply to
|
||||
// seconds and it's much nicer to deal with standard Go time values
|
||||
// instead of converting to seconds everywhere.
|
||||
header := &block.MsgBlock().Header
|
||||
if !header.Timestamp.Equal(time.Unix(header.Timestamp.Unix(), 0)) {
|
||||
str := fmt.Sprintf("block timestamp of %v has a higher "+
|
||||
"precision than one second", header.Timestamp)
|
||||
return ruleError(ErrInvalidTime, str)
|
||||
}
|
||||
|
||||
// Ensure the block time is not too far in the future.
|
||||
maxTimestamp := timeSource.AdjustedTime().Add(time.Second *
|
||||
MaxTimeOffsetSeconds)
|
||||
if header.Timestamp.After(maxTimestamp) {
|
||||
str := fmt.Sprintf("block timestamp of %v is too far in the "+
|
||||
"future", header.Timestamp)
|
||||
return ruleError(ErrTimeTooNew, str)
|
||||
}
|
||||
|
||||
// The first transaction in a block must be a coinbase.
|
||||
transactions := block.Transactions()
|
||||
if !IsCoinBase(transactions[0]) {
|
||||
return ruleError(ErrFirstTxNotCoinbase, "first transaction in "+
|
||||
"block is not a coinbase")
|
||||
}
|
||||
|
||||
// A block must not have more than one coinbase.
|
||||
for i, tx := range transactions[1:] {
|
||||
if IsCoinBase(tx) {
|
||||
str := fmt.Sprintf("block contains second coinbase at "+
|
||||
"index %d", i)
|
||||
return ruleError(ErrMultipleCoinbases, str)
|
||||
}
|
||||
}
|
||||
|
||||
// Do some preliminary checks on each transaction to ensure they are
|
||||
// sane before continuing.
|
||||
for _, tx := range transactions {
|
||||
err := CheckTransactionSanity(tx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Build merkle tree and ensure the calculated merkle root matches the
|
||||
// entry in the block header. This also has the effect of caching all
|
||||
// of the transaction hashes in the block to speed up future hash
|
||||
// checks. Bitcoind builds the tree here and checks the merkle root
|
||||
// after the following checks, but there is no reason not to check the
|
||||
// merkle root matches here.
|
||||
merkles := BuildMerkleTreeStore(block.Transactions())
|
||||
calculatedMerkleRoot := merkles[len(merkles)-1]
|
||||
if !header.MerkleRoot.IsEqual(calculatedMerkleRoot) {
|
||||
str := fmt.Sprintf("block merkle root is invalid - block "+
|
||||
"header indicates %v, but calculated value is %v",
|
||||
header.MerkleRoot, calculatedMerkleRoot)
|
||||
return ruleError(ErrBadMerkleRoot, str)
|
||||
}
|
||||
|
||||
// Check for duplicate transactions. This check will be fairly quick
|
||||
// since the transaction hashes are already cached due to building the
|
||||
// merkle tree above.
|
||||
existingTxHashes := make(map[btcwire.ShaHash]struct{})
|
||||
for _, tx := range transactions {
|
||||
hash := tx.Sha()
|
||||
if _, exists := existingTxHashes[*hash]; exists {
|
||||
str := fmt.Sprintf("block contains duplicate "+
|
||||
"transaction %v", hash)
|
||||
return ruleError(ErrDuplicateTx, str)
|
||||
}
|
||||
existingTxHashes[*hash] = struct{}{}
|
||||
}
|
||||
|
||||
// The number of signature operations must be less than the maximum
|
||||
// allowed per block.
|
||||
totalSigOps := 0
|
||||
for _, tx := range transactions {
|
||||
// We could potentially overflow the accumulator so check for
|
||||
// overflow.
|
||||
lastSigOps := totalSigOps
|
||||
totalSigOps += CountSigOps(tx)
|
||||
if totalSigOps < lastSigOps || totalSigOps > MaxSigOpsPerBlock {
|
||||
str := fmt.Sprintf("block contains too many signature "+
|
||||
"operations - got %v, max %v", totalSigOps,
|
||||
MaxSigOpsPerBlock)
|
||||
return ruleError(ErrTooManySigOps, str)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckBlockSanity performs some preliminary checks on a block to ensure it is
// sane before continuing with block processing. These checks are context free.
func CheckBlockSanity(block *btcutil.Block, powLimit *big.Int, timeSource MedianTimeSource) error {
	return checkBlockSanity(block, powLimit, timeSource, BFNone)
}
|
||||
|
||||
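// A minimal usage sketch mirroring the setup used in the package tests; block
// is a hypothetical *btcutil.Block decoded by the caller:
//
//	powLimit := btcnet.MainNetParams.PowLimit
//	timeSource := NewMedianTime()
//	if err := CheckBlockSanity(block, powLimit, timeSource); err != nil {
//		return err
//	}
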
// checkSerializedHeight checks if the signature script in the passed
|
||||
// transaction starts with the serialized block height of wantHeight.
|
||||
func checkSerializedHeight(coinbaseTx *btcutil.Tx, wantHeight int64) error {
|
||||
sigScript := coinbaseTx.MsgTx().TxIn[0].SignatureScript
|
||||
if len(sigScript) < 1 {
|
||||
str := "the coinbase signature script for blocks of " +
|
||||
"version %d or greater must start with the " +
|
||||
"length of the serialized block height"
|
||||
str = fmt.Sprintf(str, serializedHeightVersion)
|
||||
return ruleError(ErrMissingCoinbaseHeight, str)
|
||||
}
|
||||
|
||||
serializedLen := int(sigScript[0])
|
||||
if len(sigScript[1:]) < serializedLen {
|
||||
str := "the coinbase signature script for blocks of " +
|
||||
"version %d or greater must start with the " +
|
||||
"serialized block height"
|
||||
str = fmt.Sprintf(str, serializedHeightVersion)
|
||||
return ruleError(ErrMissingCoinbaseHeight, str)
|
||||
}
|
||||
|
||||
serializedHeightBytes := make([]byte, 8)
|
||||
copy(serializedHeightBytes, sigScript[1:serializedLen+1])
|
||||
serializedHeight := binary.LittleEndian.Uint64(serializedHeightBytes)
|
||||
if int64(serializedHeight) != wantHeight {
|
||||
str := fmt.Sprintf("the coinbase signature script serialized "+
|
||||
"block height is %d when %d was expected",
|
||||
serializedHeight, wantHeight)
|
||||
return ruleError(ErrBadCoinbaseHeight, str)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
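// For reference, the serialized height checked above is a single length byte
// followed by the height encoded in little-endian order, for example (values
// taken from the package tests):
//
//	[]byte{0x02, 0x4a, 0x52}       // height 21066
//	[]byte{0x03, 0x40, 0x0d, 0x03} // height 200000
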
// isTransactionSpent returns whether or not the provided transaction data
// describes a fully spent transaction. A fully spent transaction is one where
// all outputs have been spent.
func isTransactionSpent(txD *TxData) bool {
	for _, isOutputSpent := range txD.Spent {
		if !isOutputSpent {
			return false
		}
	}
	return true
}
|
||||
|
||||
// checkBIP0030 ensures blocks do not contain duplicate transactions which
|
||||
// 'overwrite' older transactions that are not fully spent. This prevents an
|
||||
// attack where a coinbase and all of its dependent transactions could be
|
||||
// duplicated to effectively revert the overwritten transactions to a single
|
||||
// confirmation thereby making them vulnerable to a double spend.
|
||||
//
|
||||
// For more details, see https://en.bitcoin.it/wiki/BIP_0030 and
|
||||
// http://r6.ca/blog/20120206T005236Z.html.
|
||||
func (b *BlockChain) checkBIP0030(node *blockNode, block *btcutil.Block) error {
|
||||
// Attempt to fetch duplicate transactions for all of the transactions
|
||||
// in this block from the point of view of the parent node.
|
||||
fetchSet := make(map[btcwire.ShaHash]struct{})
|
||||
for _, tx := range block.Transactions() {
|
||||
fetchSet[*tx.Sha()] = struct{}{}
|
||||
}
|
||||
txResults, err := b.fetchTxStore(node, fetchSet)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Examine the resulting data about the requested transactions.
|
||||
for _, txD := range txResults {
|
||||
switch txD.Err {
|
||||
// A duplicate transaction was not found. This is the most
|
||||
// common case.
|
||||
case database.ErrTxShaMissing:
|
||||
continue
|
||||
|
||||
// A duplicate transaction was found. This is only allowed if
|
||||
// the duplicate transaction is fully spent.
|
||||
case nil:
|
||||
if !isTransactionSpent(txD) {
|
||||
str := fmt.Sprintf("tried to overwrite "+
|
||||
"transaction %v at block height %d "+
|
||||
"that is not fully spent", txD.Hash,
|
||||
txD.BlockHeight)
|
||||
return ruleError(ErrOverwriteTx, str)
|
||||
}
|
||||
|
||||
// Some other unexpected error occurred. Return it now.
|
||||
default:
|
||||
return txD.Err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckTransactionInputs performs a series of checks on the inputs to a
// transaction to ensure they are valid. Examples of the checks performed
// include verifying all referenced inputs exist, ensuring the coinbase maturity
// requirements are met, detecting double spends, and validating all values and
// fees are in the legal range and that the total output amount doesn't exceed
// the input amount. As it checks the inputs, it also calculates the total fees
// for the transaction and returns that value. Script and signature validation
// is performed separately (see checkBlockScripts).
|
||||
func CheckTransactionInputs(tx *btcutil.Tx, txHeight int64, txStore TxStore) (int64, error) {
|
||||
// Coinbase transactions have no inputs.
|
||||
if IsCoinBase(tx) {
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
txHash := tx.Sha()
|
||||
var totalSatoshiIn int64
|
||||
for _, txIn := range tx.MsgTx().TxIn {
|
||||
// Ensure the input is available.
|
||||
txInHash := &txIn.PreviousOutPoint.Hash
|
||||
originTx, exists := txStore[*txInHash]
|
||||
if !exists || originTx.Err != nil || originTx.Tx == nil {
|
||||
str := fmt.Sprintf("unable to find input transaction "+
|
||||
"%v for transaction %v", txInHash, txHash)
|
||||
return 0, ruleError(ErrMissingTx, str)
|
||||
}
|
||||
|
||||
// Ensure the transaction is not spending coins which have not
|
||||
// yet reached the required coinbase maturity.
|
||||
if IsCoinBase(originTx.Tx) {
|
||||
originHeight := originTx.BlockHeight
|
||||
blocksSincePrev := txHeight - originHeight
|
||||
if blocksSincePrev < coinbaseMaturity {
|
||||
str := fmt.Sprintf("tried to spend coinbase "+
|
||||
"transaction %v from height %v at "+
|
||||
"height %v before required maturity "+
|
||||
"of %v blocks", txInHash, originHeight,
|
||||
txHeight, coinbaseMaturity)
|
||||
return 0, ruleError(ErrImmatureSpend, str)
|
||||
}
|
||||
}
|
||||
|
||||
// Ensure the transaction is not double spending coins.
|
||||
originTxIndex := txIn.PreviousOutPoint.Index
|
||||
if originTxIndex >= uint32(len(originTx.Spent)) {
|
||||
str := fmt.Sprintf("out of bounds input index %d in "+
|
||||
"transaction %v referenced from transaction %v",
|
||||
originTxIndex, txInHash, txHash)
|
||||
return 0, ruleError(ErrBadTxInput, str)
|
||||
}
|
||||
if originTx.Spent[originTxIndex] {
|
||||
str := fmt.Sprintf("transaction %v tried to double "+
|
||||
"spend output %v", txHash, txIn.PreviousOutPoint)
|
||||
return 0, ruleError(ErrDoubleSpend, str)
|
||||
}
|
||||
|
||||
// Ensure the transaction amounts are in range. Each of the
|
||||
// output values of the input transactions must not be negative
|
||||
// or more than the max allowed per transaction. All amounts in
|
||||
// a transaction are in a unit value known as a satoshi. One
|
||||
// bitcoin is a quantity of satoshi as defined by the
|
||||
// SatoshiPerBitcoin constant.
|
||||
originTxSatoshi := originTx.Tx.MsgTx().TxOut[originTxIndex].Value
|
||||
if originTxSatoshi < 0 {
|
||||
str := fmt.Sprintf("transaction output has negative "+
|
||||
"value of %v", originTxSatoshi)
|
||||
return 0, ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
if originTxSatoshi > btcutil.MaxSatoshi {
|
||||
str := fmt.Sprintf("transaction output value of %v is "+
|
||||
"higher than max allowed value of %v",
|
||||
originTxSatoshi, btcutil.MaxSatoshi)
|
||||
return 0, ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
|
||||
// The total of all outputs must not be more than the max
|
||||
// allowed per transaction. Also, we could potentially overflow
|
||||
// the accumulator so check for overflow.
|
||||
lastSatoshiIn := totalSatoshiIn
|
||||
totalSatoshiIn += originTxSatoshi
|
||||
if totalSatoshiIn < lastSatoshiIn ||
|
||||
totalSatoshiIn > btcutil.MaxSatoshi {
|
||||
str := fmt.Sprintf("total value of all transaction "+
|
||||
"inputs is %v which is higher than max "+
|
||||
"allowed value of %v", totalSatoshiIn,
|
||||
btcutil.MaxSatoshi)
|
||||
return 0, ruleError(ErrBadTxOutValue, str)
|
||||
}
|
||||
|
||||
// Mark the referenced output as spent.
|
||||
originTx.Spent[originTxIndex] = true
|
||||
}
|
||||
|
||||
// Calculate the total output amount for this transaction. It is safe
|
||||
// to ignore overflow and out of range errors here because those error
|
||||
// conditions would have already been caught by CheckTransactionSanity.
|
||||
var totalSatoshiOut int64
|
||||
for _, txOut := range tx.MsgTx().TxOut {
|
||||
totalSatoshiOut += txOut.Value
|
||||
}
|
||||
|
||||
// Ensure the transaction does not spend more than its inputs.
|
||||
if totalSatoshiIn < totalSatoshiOut {
|
||||
str := fmt.Sprintf("total value of all transaction inputs for "+
|
||||
"transaction %v is %v which is less than the amount "+
|
||||
"spent of %v", txHash, totalSatoshiIn, totalSatoshiOut)
|
||||
return 0, ruleError(ErrSpendTooHigh, str)
|
||||
}
|
||||
|
||||
// NOTE: bitcoind checks if the transaction fees are < 0 here, but that
|
||||
// is an impossible condition because of the check above that ensures
|
||||
// the inputs are >= the outputs.
|
||||
txFeeInSatoshi := totalSatoshiIn - totalSatoshiOut
|
||||
return txFeeInSatoshi, nil
|
||||
}
|
||||
|
||||
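// A minimal usage sketch; tx, blockHeight, and txStore are hypothetical values
// the caller is assumed to have on hand, with txStore containing the referenced
// input transactions:
//
//	txFee, err := CheckTransactionInputs(tx, blockHeight, txStore)
//	if err != nil {
//		return err
//	}
//	// txFee is the fee in satoshi paid by this transaction.
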
// checkConnectBlock performs several checks to confirm connecting the passed
|
||||
// block to the main chain (including whatever reorganization might be necessary
|
||||
// to get this node to the main chain) does not violate any rules.
|
||||
//
|
||||
// The CheckConnectBlock function makes use of this function to perform the
|
||||
// bulk of its work. The only difference is this function accepts a node which
|
||||
// may or may not require reorganization to connect it to the main chain whereas
|
||||
// CheckConnectBlock creates a new node which specifically connects to the end
|
||||
// of the current main chain and then calls this function with that node.
|
||||
//
|
||||
// See the comments for CheckConnectBlock for some examples of the type of
|
||||
// checks performed by this function.
|
||||
func (b *BlockChain) checkConnectBlock(node *blockNode, block *btcutil.Block) error {
|
||||
// If the side chain blocks end up in the database, a call to
|
||||
// CheckBlockSanity should be done here in case a previous version
|
||||
// allowed a block that is no longer valid. However, since the
|
||||
// implementation only currently uses memory for the side chain blocks,
|
||||
// it isn't currently necessary.
|
||||
|
||||
// The coinbase for the Genesis block is not spendable, so just return
|
||||
// now.
|
||||
if node.hash.IsEqual(b.netParams.GenesisHash) && b.bestChain == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// BIP0030 added a rule to prevent blocks which contain duplicate
|
||||
// transactions that 'overwrite' older transactions which are not fully
|
||||
// spent. See the documentation for checkBIP0030 for more details.
|
||||
//
|
||||
// There are two blocks in the chain which violate this
|
||||
// rule, so the check must be skipped for those blocks. The
|
||||
// isBIP0030Node function is used to determine if this block is one
|
||||
// of the two blocks that must be skipped.
|
||||
enforceBIP0030 := !isBIP0030Node(node)
|
||||
if enforceBIP0030 {
|
||||
err := b.checkBIP0030(node, block)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Request a map that contains all input transactions for the block from
|
||||
// the point of view of its position within the block chain. These
|
||||
// transactions are needed for verification of things such as
|
||||
// transaction inputs, counting pay-to-script-hashes, and scripts.
|
||||
txInputStore, err := b.fetchInputTransactions(node, block)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// BIP0016 describes a pay-to-script-hash type that is considered a
|
||||
// "standard" type. The rules for this BIP only apply to transactions
|
||||
// after the timestamp defined by txscript.Bip16Activation. See
|
||||
// https://en.bitcoin.it/wiki/BIP_0016 for more details.
|
||||
enforceBIP0016 := false
|
||||
if node.timestamp.After(txscript.Bip16Activation) {
|
||||
enforceBIP0016 = true
|
||||
}
|
||||
|
||||
// The number of signature operations must be less than the maximum
|
||||
// allowed per block. Note that the preliminary sanity checks on a
|
||||
// block also include a check similar to this one, but this check
|
||||
// expands the count to include a precise count of pay-to-script-hash
|
||||
// signature operations in each of the input transaction public key
|
||||
// scripts.
|
||||
transactions := block.Transactions()
|
||||
totalSigOps := 0
|
||||
for i, tx := range transactions {
|
||||
numsigOps := CountSigOps(tx)
|
||||
if enforceBIP0016 {
|
||||
// Since the first (and only the first) transaction has
|
||||
// already been verified to be a coinbase transaction,
|
||||
// use i == 0 as an optimization for the flag to
|
||||
// CountP2SHSigOps for whether or not the transaction is
|
||||
// a coinbase transaction rather than having to do a
|
||||
// full coinbase check again.
|
||||
numP2SHSigOps, err := CountP2SHSigOps(tx, i == 0,
|
||||
txInputStore)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
numsigOps += numP2SHSigOps
|
||||
}
|
||||
|
||||
// Check for overflow or going over the limits. We have to do
|
||||
// this on every loop iteration to avoid overflow.
|
||||
lastSigops := totalSigOps
|
||||
totalSigOps += numsigOps
|
||||
if totalSigOps < lastSigops || totalSigOps > MaxSigOpsPerBlock {
|
||||
str := fmt.Sprintf("block contains too many "+
|
||||
"signature operations - got %v, max %v",
|
||||
totalSigOps, MaxSigOpsPerBlock)
|
||||
return ruleError(ErrTooManySigOps, str)
|
||||
}
|
||||
}
|
||||
|
||||
// Perform several checks on the inputs for each transaction. Also
|
||||
// accumulate the total fees. This could technically be combined with
|
||||
// the loop above instead of running another loop over the transactions,
|
||||
// but by separating it we can avoid running the more expensive (though
|
||||
// still relatively cheap as compared to running the scripts) checks
|
||||
// against all the inputs when the signature operations are out of
|
||||
// bounds.
|
||||
var totalFees int64
|
||||
for _, tx := range transactions {
|
||||
txFee, err := CheckTransactionInputs(tx, node.height, txInputStore)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Sum the total fees and ensure we don't overflow the
|
||||
// accumulator.
|
||||
lastTotalFees := totalFees
|
||||
totalFees += txFee
|
||||
if totalFees < lastTotalFees {
|
||||
return ruleError(ErrBadFees, "total fees for block "+
|
||||
"overflows accumulator")
|
||||
}
|
||||
}
|
||||
|
||||
// The total output values of the coinbase transaction must not exceed
|
||||
// the expected subsidy value plus total transaction fees gained from
|
||||
// mining the block. It is safe to ignore overflow and out of range
|
||||
// errors here because those error conditions would have already been
|
||||
// caught by CheckTransactionSanity.
|
||||
var totalSatoshiOut int64
|
||||
for _, txOut := range transactions[0].MsgTx().TxOut {
|
||||
totalSatoshiOut += txOut.Value
|
||||
}
|
||||
expectedSatoshiOut := CalcBlockSubsidy(node.height, b.netParams) +
|
||||
totalFees
|
||||
if totalSatoshiOut > expectedSatoshiOut {
|
||||
str := fmt.Sprintf("coinbase transaction for block pays %v "+
|
||||
"which is more than expected value of %v",
|
||||
totalSatoshiOut, expectedSatoshiOut)
|
||||
return ruleError(ErrBadCoinbaseValue, str)
|
||||
}
|
||||
|
||||
// Don't run scripts if this node is before the latest known good
|
||||
// checkpoint since the validity is verified via the checkpoints (all
|
||||
// transactions are included in the merkle root hash and any changes
|
||||
// will therefore be detected by the next checkpoint). This is a huge
|
||||
// optimization because running the scripts is the most time consuming
|
||||
// portion of block handling.
|
||||
checkpoint := b.LatestCheckpoint()
|
||||
runScripts := !b.noVerify
|
||||
if checkpoint != nil && node.height <= checkpoint.Height {
|
||||
runScripts = false
|
||||
}
|
||||
|
||||
// Now that the inexpensive checks are done and have passed, verify the
|
||||
// transactions are actually allowed to spend the coins by running the
|
||||
// expensive ECDSA signature check scripts. Doing this last helps
|
||||
// prevent CPU exhaustion attacks.
|
||||
if runScripts {
|
||||
err := checkBlockScripts(block, txInputStore)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckConnectBlock performs several checks to confirm connecting the passed
// block to the main chain does not violate any rules. Examples of the checks
// performed are ensuring connecting the block would not cause any duplicate
// transaction hashes for old transactions that aren't already fully spent,
// double spends, exceeding the maximum allowed signature operations per block,
// invalid values in relation to the expected block subsidy, or failing
// transaction script validation.
//
// This function is NOT safe for concurrent access.
func (b *BlockChain) CheckConnectBlock(block *btcutil.Block) error {
	prevNode := b.bestChain
	blockSha, _ := block.Sha()
	newNode := newBlockNode(&block.MsgBlock().Header, blockSha, block.Height())
	if prevNode != nil {
		newNode.parent = prevNode
		newNode.workSum.Add(prevNode.workSum, newNode.workSum)
	}

	return b.checkConnectBlock(newNode, block)
}

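// A minimal usage sketch from outside the package, mirroring the negative test
// in validate_test.go below; chain is a hypothetical *blockchain.BlockChain and
// block is a candidate *btcutil.Block:
//
//	if err := chain.CheckConnectBlock(block); err != nil {
//		// Connecting the block would violate a consensus rule.
//	}
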
381
blockchain/validate_test.go
Normal file
@@ -0,0 +1,381 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
|
||||
// Use of this source code is governed by an ISC
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package blockchain_test
|
||||
|
||||
import (
|
||||
"math"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/btcsuite/btcd/blockchain"
|
||||
"github.com/btcsuite/btcnet"
|
||||
"github.com/btcsuite/btcutil"
|
||||
"github.com/btcsuite/btcwire"
|
||||
)
|
||||
|
||||
// TestCheckConnectBlock tests the CheckConnectBlock function to ensure it
// fails as expected when given a block that is already in the chain.
|
||||
func TestCheckConnectBlock(t *testing.T) {
|
||||
// Create a new database and chain instance to run tests against.
|
||||
chain, teardownFunc, err := chainSetup("checkconnectblock")
|
||||
if err != nil {
|
||||
t.Errorf("Failed to setup chain instance: %v", err)
|
||||
return
|
||||
}
|
||||
defer teardownFunc()
|
||||
|
||||
err = chain.GenerateInitialIndex()
|
||||
if err != nil {
|
||||
t.Errorf("GenerateInitialIndex: %v", err)
|
||||
}
|
||||
|
||||
// The genesis block should fail to connect since it's already
|
||||
// inserted.
|
||||
genesisBlock := btcnet.MainNetParams.GenesisBlock
|
||||
err = chain.CheckConnectBlock(btcutil.NewBlock(genesisBlock))
|
||||
if err == nil {
|
||||
t.Errorf("CheckConnectBlock: Did not received expected error")
|
||||
}
|
||||
}
|
||||
|
||||
// TestCheckBlockSanity tests the CheckBlockSanity function to ensure it works
|
||||
// as expected.
|
||||
func TestCheckBlockSanity(t *testing.T) {
|
||||
powLimit := btcnet.MainNetParams.PowLimit
|
||||
block := btcutil.NewBlock(&Block100000)
|
||||
timeSource := blockchain.NewMedianTime()
|
||||
err := blockchain.CheckBlockSanity(block, powLimit, timeSource)
|
||||
if err != nil {
|
||||
t.Errorf("CheckBlockSanity: %v", err)
|
||||
}
|
||||
|
||||
// Ensure a block that has a timestamp with a precision higher than one
|
||||
// second fails.
|
||||
timestamp := block.MsgBlock().Header.Timestamp
|
||||
block.MsgBlock().Header.Timestamp = timestamp.Add(time.Nanosecond)
|
||||
err = blockchain.CheckBlockSanity(block, powLimit, timeSource)
|
||||
if err == nil {
|
||||
t.Errorf("CheckBlockSanity: error is nil when it shouldn't be")
|
||||
}
|
||||
}
|
||||
|
||||
// TestCheckSerializedHeight tests the checkSerializedHeight function with
// various serialized heights and also does negative tests to ensure errors
// are handled properly.
|
||||
func TestCheckSerializedHeight(t *testing.T) {
|
||||
// Create an empty coinbase template to be used in the tests below.
|
||||
coinbaseOutpoint := btcwire.NewOutPoint(&btcwire.ShaHash{}, math.MaxUint32)
|
||||
coinbaseTx := btcwire.NewMsgTx()
|
||||
coinbaseTx.Version = 2
|
||||
coinbaseTx.AddTxIn(btcwire.NewTxIn(coinbaseOutpoint, nil))
|
||||
|
||||
// Expected rule errors.
|
||||
missingHeightError := blockchain.RuleError{
|
||||
ErrorCode: blockchain.ErrMissingCoinbaseHeight,
|
||||
}
|
||||
badHeightError := blockchain.RuleError{
|
||||
ErrorCode: blockchain.ErrBadCoinbaseHeight,
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
sigScript []byte // Serialized data
|
||||
wantHeight int64 // Expected height
|
||||
err error // Expected error type
|
||||
}{
|
||||
// No serialized height length.
|
||||
{[]byte{}, 0, missingHeightError},
|
||||
// Serialized height length with no height bytes.
|
||||
{[]byte{0x02}, 0, missingHeightError},
|
||||
// Serialized height length with too few height bytes.
|
||||
{[]byte{0x02, 0x4a}, 0, missingHeightError},
|
||||
// Serialized height that needs 2 bytes to encode.
|
||||
{[]byte{0x02, 0x4a, 0x52}, 21066, nil},
|
||||
// Serialized height that needs 2 bytes to encode, but backwards
|
||||
// endianness.
|
||||
{[]byte{0x02, 0x4a, 0x52}, 19026, badHeightError},
|
||||
// Serialized height that needs 3 bytes to encode.
|
||||
{[]byte{0x03, 0x40, 0x0d, 0x03}, 200000, nil},
|
||||
// Serialized height that needs 3 bytes to encode, but backwards
|
||||
// endianness.
|
||||
{[]byte{0x03, 0x40, 0x0d, 0x03}, 1074594560, badHeightError},
|
||||
}
|
||||
|
||||
t.Logf("Running %d tests", len(tests))
|
||||
for i, test := range tests {
|
||||
msgTx := coinbaseTx.Copy()
|
||||
msgTx.TxIn[0].SignatureScript = test.sigScript
|
||||
tx := btcutil.NewTx(msgTx)
|
||||
|
||||
err := blockchain.TstCheckSerializedHeight(tx, test.wantHeight)
|
||||
if reflect.TypeOf(err) != reflect.TypeOf(test.err) {
|
||||
t.Errorf("checkSerializedHeight #%d wrong error type "+
|
||||
"got: %v <%T>, want: %T", i, err, err, test.err)
|
||||
continue
|
||||
}
|
||||
|
||||
if rerr, ok := err.(blockchain.RuleError); ok {
|
||||
trerr := test.err.(blockchain.RuleError)
|
||||
if rerr.ErrorCode != trerr.ErrorCode {
|
||||
t.Errorf("checkSerializedHeight #%d wrong "+
|
||||
"error code got: %v, want: %v", i,
|
||||
rerr.ErrorCode, trerr.ErrorCode)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Block100000 defines block 100,000 of the block chain. It is used to
|
||||
// test Block operations.
|
||||
var Block100000 = btcwire.MsgBlock{
|
||||
Header: btcwire.BlockHeader{
|
||||
Version: 1,
|
||||
PrevBlock: btcwire.ShaHash([32]byte{ // Make go vet happy.
|
||||
0x50, 0x12, 0x01, 0x19, 0x17, 0x2a, 0x61, 0x04,
|
||||
0x21, 0xa6, 0xc3, 0x01, 0x1d, 0xd3, 0x30, 0xd9,
|
||||
0xdf, 0x07, 0xb6, 0x36, 0x16, 0xc2, 0xcc, 0x1f,
|
||||
0x1c, 0xd0, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
|
||||
}), // 000000000002d01c1fccc21636b607dfd930d31d01c3a62104612a1719011250
|
||||
MerkleRoot: btcwire.ShaHash([32]byte{ // Make go vet happy.
|
||||
0x66, 0x57, 0xa9, 0x25, 0x2a, 0xac, 0xd5, 0xc0,
|
||||
0xb2, 0x94, 0x09, 0x96, 0xec, 0xff, 0x95, 0x22,
|
||||
0x28, 0xc3, 0x06, 0x7c, 0xc3, 0x8d, 0x48, 0x85,
|
||||
0xef, 0xb5, 0xa4, 0xac, 0x42, 0x47, 0xe9, 0xf3,
|
||||
}), // f3e94742aca4b5ef85488dc37c06c3282295ffec960994b2c0d5ac2a25a95766
|
||||
Timestamp: time.Unix(1293623863, 0), // 2010-12-29 11:57:43 +0000 UTC
|
||||
Bits: 0x1b04864c, // 453281356
|
||||
Nonce: 0x10572b0f, // 274148111
|
||||
},
|
||||
Transactions: []*btcwire.MsgTx{
|
||||
{
|
||||
Version: 1,
|
||||
TxIn: []*btcwire.TxIn{
|
||||
{
|
||||
PreviousOutPoint: btcwire.OutPoint{
|
||||
Hash: btcwire.ShaHash{},
|
||||
Index: 0xffffffff,
|
||||
},
|
||||
SignatureScript: []byte{
|
||||
0x04, 0x4c, 0x86, 0x04, 0x1b, 0x02, 0x06, 0x02,
|
||||
},
|
||||
Sequence: 0xffffffff,
|
||||
},
|
||||
},
|
||||
TxOut: []*btcwire.TxOut{
|
||||
{
|
||||
Value: 0x12a05f200, // 5000000000
|
||||
PkScript: []byte{
|
||||
0x41, // OP_DATA_65
|
||||
0x04, 0x1b, 0x0e, 0x8c, 0x25, 0x67, 0xc1, 0x25,
|
||||
0x36, 0xaa, 0x13, 0x35, 0x7b, 0x79, 0xa0, 0x73,
|
||||
0xdc, 0x44, 0x44, 0xac, 0xb8, 0x3c, 0x4e, 0xc7,
|
||||
0xa0, 0xe2, 0xf9, 0x9d, 0xd7, 0x45, 0x75, 0x16,
|
||||
0xc5, 0x81, 0x72, 0x42, 0xda, 0x79, 0x69, 0x24,
|
||||
0xca, 0x4e, 0x99, 0x94, 0x7d, 0x08, 0x7f, 0xed,
|
||||
0xf9, 0xce, 0x46, 0x7c, 0xb9, 0xf7, 0xc6, 0x28,
|
||||
0x70, 0x78, 0xf8, 0x01, 0xdf, 0x27, 0x6f, 0xdf,
|
||||
0x84, // 65-byte signature
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
},
|
||||
LockTime: 0,
|
||||
},
|
||||
{
|
||||
Version: 1,
|
||||
TxIn: []*btcwire.TxIn{
|
||||
{
|
||||
PreviousOutPoint: btcwire.OutPoint{
|
||||
Hash: btcwire.ShaHash([32]byte{ // Make go vet happy.
|
||||
0x03, 0x2e, 0x38, 0xe9, 0xc0, 0xa8, 0x4c, 0x60,
|
||||
0x46, 0xd6, 0x87, 0xd1, 0x05, 0x56, 0xdc, 0xac,
|
||||
0xc4, 0x1d, 0x27, 0x5e, 0xc5, 0x5f, 0xc0, 0x07,
|
||||
0x79, 0xac, 0x88, 0xfd, 0xf3, 0x57, 0xa1, 0x87,
|
||||
}), // 87a157f3fd88ac7907c05fc55e271dc4acdc5605d187d646604ca8c0e9382e03
|
||||
Index: 0,
|
||||
},
|
||||
SignatureScript: []byte{
|
||||
0x49, // OP_DATA_73
|
||||
0x30, 0x46, 0x02, 0x21, 0x00, 0xc3, 0x52, 0xd3,
|
||||
0xdd, 0x99, 0x3a, 0x98, 0x1b, 0xeb, 0xa4, 0xa6,
|
||||
0x3a, 0xd1, 0x5c, 0x20, 0x92, 0x75, 0xca, 0x94,
|
||||
0x70, 0xab, 0xfc, 0xd5, 0x7d, 0xa9, 0x3b, 0x58,
|
||||
0xe4, 0xeb, 0x5d, 0xce, 0x82, 0x02, 0x21, 0x00,
|
||||
0x84, 0x07, 0x92, 0xbc, 0x1f, 0x45, 0x60, 0x62,
|
||||
0x81, 0x9f, 0x15, 0xd3, 0x3e, 0xe7, 0x05, 0x5c,
|
||||
0xf7, 0xb5, 0xee, 0x1a, 0xf1, 0xeb, 0xcc, 0x60,
|
||||
0x28, 0xd9, 0xcd, 0xb1, 0xc3, 0xaf, 0x77, 0x48,
|
||||
0x01, // 73-byte signature
|
||||
0x41, // OP_DATA_65
|
||||
0x04, 0xf4, 0x6d, 0xb5, 0xe9, 0xd6, 0x1a, 0x9d,
|
||||
0xc2, 0x7b, 0x8d, 0x64, 0xad, 0x23, 0xe7, 0x38,
|
||||
0x3a, 0x4e, 0x6c, 0xa1, 0x64, 0x59, 0x3c, 0x25,
|
||||
0x27, 0xc0, 0x38, 0xc0, 0x85, 0x7e, 0xb6, 0x7e,
|
||||
0xe8, 0xe8, 0x25, 0xdc, 0xa6, 0x50, 0x46, 0xb8,
|
||||
0x2c, 0x93, 0x31, 0x58, 0x6c, 0x82, 0xe0, 0xfd,
|
||||
0x1f, 0x63, 0x3f, 0x25, 0xf8, 0x7c, 0x16, 0x1b,
|
||||
0xc6, 0xf8, 0xa6, 0x30, 0x12, 0x1d, 0xf2, 0xb3,
|
||||
0xd3, // 65-byte pubkey
|
||||
},
|
||||
Sequence: 0xffffffff,
|
||||
},
|
||||
},
|
||||
TxOut: []*btcwire.TxOut{
|
||||
{
|
||||
Value: 0x2123e300, // 556000000
|
||||
PkScript: []byte{
|
||||
0x76, // OP_DUP
|
||||
0xa9, // OP_HASH160
|
||||
0x14, // OP_DATA_20
|
||||
0xc3, 0x98, 0xef, 0xa9, 0xc3, 0x92, 0xba, 0x60,
|
||||
0x13, 0xc5, 0xe0, 0x4e, 0xe7, 0x29, 0x75, 0x5e,
|
||||
0xf7, 0xf5, 0x8b, 0x32,
|
||||
0x88, // OP_EQUALVERIFY
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
{
|
||||
Value: 0x108e20f00, // 4444000000
|
||||
PkScript: []byte{
|
||||
0x76, // OP_DUP
|
||||
0xa9, // OP_HASH160
|
||||
0x14, // OP_DATA_20
|
||||
0x94, 0x8c, 0x76, 0x5a, 0x69, 0x14, 0xd4, 0x3f,
|
||||
0x2a, 0x7a, 0xc1, 0x77, 0xda, 0x2c, 0x2f, 0x6b,
|
||||
0x52, 0xde, 0x3d, 0x7c,
|
||||
0x88, // OP_EQUALVERIFY
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
},
|
||||
LockTime: 0,
|
||||
},
|
||||
{
|
||||
Version: 1,
|
||||
TxIn: []*btcwire.TxIn{
|
||||
{
|
||||
PreviousOutPoint: btcwire.OutPoint{
|
||||
Hash: btcwire.ShaHash([32]byte{ // Make go vet happy.
|
||||
0xc3, 0x3e, 0xbf, 0xf2, 0xa7, 0x09, 0xf1, 0x3d,
|
||||
0x9f, 0x9a, 0x75, 0x69, 0xab, 0x16, 0xa3, 0x27,
|
||||
0x86, 0xaf, 0x7d, 0x7e, 0x2d, 0xe0, 0x92, 0x65,
|
||||
0xe4, 0x1c, 0x61, 0xd0, 0x78, 0x29, 0x4e, 0xcf,
|
||||
}), // cf4e2978d0611ce46592e02d7e7daf8627a316ab69759a9f3df109a7f2bf3ec3
|
||||
Index: 1,
|
||||
},
|
||||
SignatureScript: []byte{
|
||||
0x47, // OP_DATA_71
|
||||
0x30, 0x44, 0x02, 0x20, 0x03, 0x2d, 0x30, 0xdf,
|
||||
0x5e, 0xe6, 0xf5, 0x7f, 0xa4, 0x6c, 0xdd, 0xb5,
|
||||
0xeb, 0x8d, 0x0d, 0x9f, 0xe8, 0xde, 0x6b, 0x34,
|
||||
0x2d, 0x27, 0x94, 0x2a, 0xe9, 0x0a, 0x32, 0x31,
|
||||
0xe0, 0xba, 0x33, 0x3e, 0x02, 0x20, 0x3d, 0xee,
|
||||
0xe8, 0x06, 0x0f, 0xdc, 0x70, 0x23, 0x0a, 0x7f,
|
||||
0x5b, 0x4a, 0xd7, 0xd7, 0xbc, 0x3e, 0x62, 0x8c,
|
||||
0xbe, 0x21, 0x9a, 0x88, 0x6b, 0x84, 0x26, 0x9e,
|
||||
0xae, 0xb8, 0x1e, 0x26, 0xb4, 0xfe, 0x01,
|
||||
0x41, // OP_DATA_65
|
||||
0x04, 0xae, 0x31, 0xc3, 0x1b, 0xf9, 0x12, 0x78,
|
||||
0xd9, 0x9b, 0x83, 0x77, 0xa3, 0x5b, 0xbc, 0xe5,
|
||||
0xb2, 0x7d, 0x9f, 0xff, 0x15, 0x45, 0x68, 0x39,
|
||||
0xe9, 0x19, 0x45, 0x3f, 0xc7, 0xb3, 0xf7, 0x21,
|
||||
0xf0, 0xba, 0x40, 0x3f, 0xf9, 0x6c, 0x9d, 0xee,
|
||||
0xb6, 0x80, 0xe5, 0xfd, 0x34, 0x1c, 0x0f, 0xc3,
|
||||
0xa7, 0xb9, 0x0d, 0xa4, 0x63, 0x1e, 0xe3, 0x95,
|
||||
0x60, 0x63, 0x9d, 0xb4, 0x62, 0xe9, 0xcb, 0x85,
|
||||
0x0f, // 65-byte pubkey
|
||||
},
|
||||
Sequence: 0xffffffff,
|
||||
},
|
||||
},
|
||||
TxOut: []*btcwire.TxOut{
|
||||
{
|
||||
Value: 0xf4240, // 1000000
|
||||
PkScript: []byte{
|
||||
0x76, // OP_DUP
|
||||
0xa9, // OP_HASH160
|
||||
0x14, // OP_DATA_20
|
||||
0xb0, 0xdc, 0xbf, 0x97, 0xea, 0xbf, 0x44, 0x04,
|
||||
0xe3, 0x1d, 0x95, 0x24, 0x77, 0xce, 0x82, 0x2d,
|
||||
0xad, 0xbe, 0x7e, 0x10,
|
||||
0x88, // OP_EQUALVERIFY
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
{
|
||||
Value: 0x11d260c0, // 299000000
|
||||
PkScript: []byte{
|
||||
0x76, // OP_DUP
|
||||
0xa9, // OP_HASH160
|
||||
0x14, // OP_DATA_20
|
||||
0x6b, 0x12, 0x81, 0xee, 0xc2, 0x5a, 0xb4, 0xe1,
|
||||
0xe0, 0x79, 0x3f, 0xf4, 0xe0, 0x8a, 0xb1, 0xab,
|
||||
0xb3, 0x40, 0x9c, 0xd9,
|
||||
0x88, // OP_EQUALVERIFY
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
},
|
||||
LockTime: 0,
|
||||
},
|
||||
{
|
||||
Version: 1,
|
||||
TxIn: []*btcwire.TxIn{
|
||||
{
|
||||
PreviousOutPoint: btcwire.OutPoint{
|
||||
Hash: btcwire.ShaHash([32]byte{ // Make go vet happy.
|
||||
0x0b, 0x60, 0x72, 0xb3, 0x86, 0xd4, 0xa7, 0x73,
|
||||
0x23, 0x52, 0x37, 0xf6, 0x4c, 0x11, 0x26, 0xac,
|
||||
0x3b, 0x24, 0x0c, 0x84, 0xb9, 0x17, 0xa3, 0x90,
|
||||
0x9b, 0xa1, 0xc4, 0x3d, 0xed, 0x5f, 0x51, 0xf4,
|
||||
}), // f4515fed3dc4a19b90a317b9840c243bac26114cf637522373a7d486b372600b
|
||||
Index: 0,
|
||||
},
|
||||
SignatureScript: []byte{
|
||||
0x49, // OP_DATA_73
|
||||
0x30, 0x46, 0x02, 0x21, 0x00, 0xbb, 0x1a, 0xd2,
|
||||
0x6d, 0xf9, 0x30, 0xa5, 0x1c, 0xce, 0x11, 0x0c,
|
||||
0xf4, 0x4f, 0x7a, 0x48, 0xc3, 0xc5, 0x61, 0xfd,
|
||||
0x97, 0x75, 0x00, 0xb1, 0xae, 0x5d, 0x6b, 0x6f,
|
||||
0xd1, 0x3d, 0x0b, 0x3f, 0x4a, 0x02, 0x21, 0x00,
|
||||
0xc5, 0xb4, 0x29, 0x51, 0xac, 0xed, 0xff, 0x14,
|
||||
0xab, 0xba, 0x27, 0x36, 0xfd, 0x57, 0x4b, 0xdb,
|
||||
0x46, 0x5f, 0x3e, 0x6f, 0x8d, 0xa1, 0x2e, 0x2c,
|
||||
0x53, 0x03, 0x95, 0x4a, 0xca, 0x7f, 0x78, 0xf3,
|
||||
0x01, // 73-byte signature
|
||||
0x41, // OP_DATA_65
|
||||
0x04, 0xa7, 0x13, 0x5b, 0xfe, 0x82, 0x4c, 0x97,
|
||||
0xec, 0xc0, 0x1e, 0xc7, 0xd7, 0xe3, 0x36, 0x18,
|
||||
0x5c, 0x81, 0xe2, 0xaa, 0x2c, 0x41, 0xab, 0x17,
|
||||
0x54, 0x07, 0xc0, 0x94, 0x84, 0xce, 0x96, 0x94,
|
||||
0xb4, 0x49, 0x53, 0xfc, 0xb7, 0x51, 0x20, 0x65,
|
||||
0x64, 0xa9, 0xc2, 0x4d, 0xd0, 0x94, 0xd4, 0x2f,
|
||||
0xdb, 0xfd, 0xd5, 0xaa, 0xd3, 0xe0, 0x63, 0xce,
|
||||
0x6a, 0xf4, 0xcf, 0xaa, 0xea, 0x4e, 0xa1, 0x4f,
|
||||
0xbb, // 65-byte pubkey
|
||||
},
|
||||
Sequence: 0xffffffff,
|
||||
},
|
||||
},
|
||||
TxOut: []*btcwire.TxOut{
|
||||
{
|
||||
Value: 0xf4240, // 1000000
|
||||
PkScript: []byte{
|
||||
0x76, // OP_DUP
|
||||
0xa9, // OP_HASH160
|
||||
0x14, // OP_DATA_20
|
||||
0x39, 0xaa, 0x3d, 0x56, 0x9e, 0x06, 0xa1, 0xd7,
|
||||
0x92, 0x6d, 0xc4, 0xbe, 0x11, 0x93, 0xc9, 0x9b,
|
||||
0xf2, 0xeb, 0x9e, 0xe0,
|
||||
0x88, // OP_EQUALVERIFY
|
||||
0xac, // OP_CHECKSIG
|
||||
},
|
||||
},
|
||||
},
|
||||
LockTime: 0,
|
||||
},
|
||||
},
|
||||
}
|