lbcd/mempool/policy.go

// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package mempool

import (
"fmt"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)

const (
// maxStandardP2SHSigOps is the maximum number of signature operations
// that are considered standard in a pay-to-script-hash script.
maxStandardP2SHSigOps = 15

// maxStandardTxWeight is the maximum weight permitted by any transaction
// according to the current default policy.
maxStandardTxWeight = 400000

// maxStandardSigScriptSize is the maximum size allowed for a
// transaction input signature script to be considered standard. This
// value allows for a 15-of-15 CHECKMULTISIG pay-to-script-hash with
// compressed keys.
//
// The form of the overall script is: OP_0 <15 signatures> OP_PUSHDATA2
// <2 bytes len> [OP_15 <15 pubkeys> OP_15 OP_CHECKMULTISIG]
//
// For the p2sh script portion, each of the 15 compressed pubkeys is
// 33 bytes (plus one for the OP_DATA_33 opcode), and thus, together with
// the two OP_15 opcodes and the OP_CHECKMULTISIG opcode, they total
// (15*34)+3 = 513 bytes.  Next, each of the 15 signatures is a max
// of 73 bytes (plus one for the OP_DATA_73 opcode). Also, there is one
// extra byte for the initial extra OP_0 push and 3 bytes for the
// OP_PUSHDATA2 needed to specify the 513 bytes for the script push.
// That brings the total to 1+(15*74)+3+513 = 1627. This value also
// adds a few extra bytes to provide a little buffer.
// (1 + 15*74 + 3) + (15*34 + 3) + 23 = 1650
maxStandardSigScriptSize = 1650
// DefaultMinRelayTxFee is the minimum fee in satoshi that is required
// for a transaction to be treated as free for relay and mining
// purposes. It is also used to help determine if a transaction is
// considered dust and as a base for calculating minimum required fees
// for larger transactions. This value is in Satoshi/1000 bytes.
DefaultMinRelayTxFee = btcutil.Amount(1000)

// maxStandardMultiSigKeys is the maximum number of public keys allowed
// in a multi-signature transaction output script for it to be
// considered standard.
maxStandardMultiSigKeys = 3
)

// calcMinRequiredTxRelayFee returns the minimum transaction fee required for a
// transaction with the passed serialized size to be accepted into the memory
// pool and relayed.
func calcMinRequiredTxRelayFee(serializedSize int64, minRelayTxFee btcutil.Amount) int64 {
// Calculate the minimum fee for a transaction to be allowed into the
// mempool and relayed by scaling the base fee (which is the minimum
// free transaction relay fee). minRelayTxFee is in Satoshi/kB so
// multiply by serializedSize (which is in bytes) and divide by 1000 to
// get minimum Satoshis.
minFee := (serializedSize * int64(minRelayTxFee)) / 1000
if minFee == 0 && minRelayTxFee > 0 {
minFee = int64(minRelayTxFee)
}
// Set the minimum fee to the maximum possible value if the calculated
// fee is not in the valid range for monetary amounts.
if minFee < 0 || minFee > btcutil.MaxSatoshi {
minFee = btcutil.MaxSatoshi
}
return minFee
}
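
// exampleMinRelayFees is an illustrative sketch, not part of the original
// file, that makes the arithmetic above concrete using the default relay fee
// of 1000 satoshi per 1000 bytes: a 100 byte transaction requires
// (100*1000)/1000 = 100 satoshi, a 250 byte transaction requires 250 satoshi,
// and a 1000 byte transaction requires the full 1000 satoshi.
func exampleMinRelayFees() []int64 {
	sizes := []int64{100, 250, 1000}
	fees := make([]int64, 0, len(sizes))
	for _, size := range sizes {
		fees = append(fees, calcMinRequiredTxRelayFee(size, DefaultMinRelayTxFee))
	}

	// With DefaultMinRelayTxFee this yields [100, 250, 1000].
	return fees
}
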
// checkInputsStandard performs a series of checks on a transaction's inputs
// to ensure they are "standard". A standard transaction input within the
// context of this function is one whose referenced public key script is of a
// standard form and, for pay-to-script-hash, does not have more than
// maxStandardP2SHSigOps signature operations. However, it should also be noted
that standard inputs are also those which have a clean stack after execution
// and only contain pushed data in their signature scripts. This function does
// not perform those checks because the script engine already does this more
// accurately and concisely via the txscript.ScriptVerifyCleanStack and
// txscript.ScriptVerifySigPushOnly flags.
func checkInputsStandard(tx *btcutil.Tx, utxoView *blockchain.UtxoViewpoint) error {
// NOTE: The reference implementation also does a coinbase check here,
// but coinbases have already been rejected prior to calling this
// function so no need to recheck.
for i, txIn := range tx.MsgTx().TxIn {
// It is safe to elide existence and index checks here since
// they have already been checked prior to calling this
// function.
entry := utxoView.LookupEntry(txIn.PreviousOutPoint)
originPkScript := entry.PkScript()
switch txscript.GetScriptClass(originPkScript) {
case txscript.ScriptHashTy:
numSigOps := txscript.GetPreciseSigOpCount(
txIn.SignatureScript, originPkScript, true)
if numSigOps > maxStandardP2SHSigOps {
str := fmt.Sprintf("transaction input #%d has "+
"%d signature operations which is more "+
"than the allowed max amount of %d",
i, numSigOps, maxStandardP2SHSigOps)
return txRuleError(wire.RejectNonstandard, str)
}
case txscript.NonStandardTy:
str := fmt.Sprintf("transaction input #%d has a "+
"non-standard script form", i)
return txRuleError(wire.RejectNonstandard, str)
}
}
return nil
}

// checkPkScriptStandard performs a series of checks on a transaction output
// script (public key script) to ensure it is a "standard" public key script.
// A standard public key script is one that is a recognized form, and for
// multi-signature scripts, only contains from 1 to maxStandardMultiSigKeys
// public keys.
func checkPkScriptStandard(pkScript []byte, scriptClass txscript.ScriptClass) error {
switch scriptClass {
case txscript.MultiSigTy:
numPubKeys, numSigs, err := txscript.CalcMultiSigStats(pkScript)
if err != nil {
str := fmt.Sprintf("multi-signature script parse "+
"failure: %v", err)
return txRuleError(wire.RejectNonstandard, str)
}
// A standard multi-signature public key script must contain
// from 1 to maxStandardMultiSigKeys public keys.
if numPubKeys < 1 {
str := "multi-signature script with no pubkeys"
return txRuleError(wire.RejectNonstandard, str)
}
if numPubKeys > maxStandardMultiSigKeys {
str := fmt.Sprintf("multi-signature script with %d "+
"public keys which is more than the allowed "+
"max of %d", numPubKeys, maxStandardMultiSigKeys)
return txRuleError(wire.RejectNonstandard, str)
}
// A standard multi-signature public key script must have at
// least 1 signature and no more signatures than available
// public keys.
if numSigs < 1 {
return txRuleError(wire.RejectNonstandard,
"multi-signature script with no signatures")
}
if numSigs > numPubKeys {
str := fmt.Sprintf("multi-signature script with %d "+
"signatures which is more than the available "+
"%d public keys", numSigs, numPubKeys)
return txRuleError(wire.RejectNonstandard, str)
}
case txscript.NonStandardTy:
return txRuleError(wire.RejectNonstandard,
"non-standard script form")
}
return nil
}
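
// exampleCheckPkScript is a hypothetical helper, not part of the original
// file, illustrating the intended calling pattern for checkPkScriptStandard:
// the script class is determined once via txscript.GetScriptClass and passed
// in alongside the raw script, mirroring how checkTransactionStandard uses it
// below.  A bare multisig script with more than maxStandardMultiSigKeys (3)
// public keys, for example, would be rejected here as non-standard.
func exampleCheckPkScript(pkScript []byte) error {
	scriptClass := txscript.GetScriptClass(pkScript)
	return checkPkScriptStandard(pkScript, scriptClass)
}
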
// isDust returns whether or not the passed transaction output amount is
// considered dust or not based on the passed minimum transaction relay fee.
// Dust is defined in terms of the minimum transaction relay fee. In
// particular, if the cost to the network to spend coins is more than 1/3 of the
// minimum transaction relay fee, it is considered dust.
func isDust(txOut *wire.TxOut, minRelayTxFee btcutil.Amount) bool {
// Unspendable outputs are considered dust.
if txscript.IsUnspendable(txOut.PkScript) {
return true
}
// The total serialized size consists of the output and the associated
// input script to redeem it. Since there is no input script
// to redeem it yet, use the minimum size of a typical input script.
//
// Pay-to-pubkey-hash bytes breakdown:
//
// Output to hash (34 bytes):
// 8 value, 1 script len, 25 script [1 OP_DUP, 1 OP_HASH_160,
// 1 OP_DATA_20, 20 hash, 1 OP_EQUALVERIFY, 1 OP_CHECKSIG]
//
// Input with compressed pubkey (148 bytes):
// 36 prev outpoint, 1 script len, 107 script [1 OP_DATA_72, 72 sig,
// 1 OP_DATA_33, 33 compressed pubkey], 4 sequence
//
// Input with uncompressed pubkey (180 bytes):
// 36 prev outpoint, 1 script len, 139 script [1 OP_DATA_72, 72 sig,
// 1 OP_DATA_65, 65 uncompressed pubkey], 4 sequence
//
// Pay-to-pubkey bytes breakdown:
//
// Output to compressed pubkey (44 bytes):
// 8 value, 1 script len, 35 script [1 OP_DATA_33,
// 33 compressed pubkey, 1 OP_CHECKSIG]
//
// Output to uncompressed pubkey (76 bytes):
// 8 value, 1 script len, 67 script [1 OP_DATA_65, 65 pubkey,
// 1 OP_CHECKSIG]
//
// Input (114 bytes):
// 36 prev outpoint, 1 script len, 73 script [1 OP_DATA_72,
// 72 sig], 4 sequence
//
// Pay-to-witness-pubkey-hash bytes breakdown:
//
// Output to witness key hash (31 bytes):
// 8 value, 1 script len, 22 script [1 OP_0, 1 OP_DATA_20,
// 20 bytes hash160]
//
// Input (67 bytes as the 107 witness stack is discounted):
// 36 prev outpoint, 1 script len, 0 script (not sigScript), 107
// witness stack bytes [1 element length, 33 compressed pubkey,
// element length 72 sig], 4 sequence
//
//
// Theoretically this could examine the script type of the output script
// and use a different size for the typical input script size for
// pay-to-pubkey vs pay-to-pubkey-hash inputs per the above breakdowns,
// but the only combination which is less than the value chosen is
// a pay-to-pubkey script with a compressed pubkey, which is not very
// common.
//
// The most common scripts are pay-to-pubkey-hash, and as per the above
// breakdown, the minimum size of a p2pkh input script is 148 bytes. So
// that figure is used. If the output being spent is a witness program,
// then we apply the witness discount to the size of the signature.
//
// The segwit analogue to p2pkh is a p2wkh output. This is the smallest
// output possible using the new segwit features. The 107 bytes of
// witness data is discounted by a factor of 4, leading to a computed
// value of 67 bytes of witness data.
//
// Both cases share a 41 byte preamble required to reference the input
// being spent and the sequence number of the input.
totalSize := txOut.SerializeSize() + 41
if txscript.IsWitnessProgram(txOut.PkScript) {
totalSize += (107 / blockchain.WitnessScaleFactor)
} else {
totalSize += 107
}
// The output is considered dust if the cost to the network to spend the
// coins is more than 1/3 of the minimum free transaction relay fee.
// minRelayTxFee is in Satoshi/kB, so the output value is multiplied by
// 1000 before dividing by the serialized size in bytes, putting both
// sides of the comparison in terms of Satoshi per 1000 bytes.
//
// Using the typical values for a pay-to-pubkey-hash transaction from
// the breakdown above and the default minimum free transaction relay
// fee of 1000, this equates to values less than 546 satoshi being
// considered dust.
//
// The following is equivalent to (value/totalSize) * (1/3) * 1000
// without needing to do floating point math.
return txOut.Value*1000/(3*int64(totalSize)) < int64(minRelayTxFee)
}
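
// dustThresholdExample is an illustrative sketch, not part of the original
// file, that makes the 546 satoshi figure from the comment above concrete:
// for a pay-to-pubkey-hash output (34 bytes serialized) the assumed total
// size is 34 + 148 = 182 bytes, and with the default relay fee of 1000
// Satoshi/kB the smallest value v satisfying v*1000/(3*182) >= 1000 is 546.
// The helper simply probes isDust with increasing values until it reports
// non-dust.
func dustThresholdExample(pkScript []byte, minRelayTxFee btcutil.Amount) int64 {
	for v := int64(0); v <= btcutil.SatoshiPerBitcoin; v++ {
		if !isDust(&wire.TxOut{Value: v, PkScript: pkScript}, minRelayTxFee) {
			return v
		}
	}
	return -1
}
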
// checkTransactionStandard performs a series of checks on a transaction to
// ensure it is a "standard" transaction. A standard transaction is one that
// conforms to several additional limiting cases over what is considered a
// "sane" transaction such as having a version in the supported range, being
// finalized, conforming to more stringent size constraints, having scripts
// of recognized forms, and not containing "dust" outputs (those that are
// so small it costs more to process them than they are worth).
func checkTransactionStandard(tx *btcutil.Tx, height int32,
medianTimePast time.Time, minRelayTxFee btcutil.Amount,
maxTxVersion int32) error {
// The transaction must be a currently supported version.
msgTx := tx.MsgTx()
if msgTx.Version > maxTxVersion || msgTx.Version < 1 {
str := fmt.Sprintf("transaction version %d is not in the "+
"valid range of %d-%d", msgTx.Version, 1,
maxTxVersion)
return txRuleError(wire.RejectNonstandard, str)
}
// The transaction must be finalized to be standard and therefore
// considered for inclusion in a block.
if !blockchain.IsFinalizedTransaction(tx, height, medianTimePast) {
return txRuleError(wire.RejectNonstandard,
"transaction is not finalized")
}
// Since extremely large transactions with a lot of inputs can cost
// almost as much to process as the sender fees, limit the maximum
// size of a transaction. This also helps mitigate CPU exhaustion
// attacks.
txWeight := blockchain.GetTransactionWeight(tx)
if txWeight > maxStandardTxWeight {
str := fmt.Sprintf("weight of transaction %v is larger than max "+
"allowed weight of %v", txWeight, maxStandardTxWeight)
return txRuleError(wire.RejectNonstandard, str)
}
for i, txIn := range msgTx.TxIn {
// Each transaction input signature script must not exceed the
// maximum size allowed for a standard transaction. See
// the comment on maxStandardSigScriptSize for more details.
sigScriptLen := len(txIn.SignatureScript)
if sigScriptLen > maxStandardSigScriptSize {
str := fmt.Sprintf("transaction input %d: signature "+
"script size of %d bytes is large than max "+
"allowed size of %d bytes", i, sigScriptLen,
maxStandardSigScriptSize)
return txRuleError(wire.RejectNonstandard, str)
}
// Each transaction input signature script must only contain
// opcodes which push data onto the stack.
if !txscript.IsPushOnlyScript(txIn.SignatureScript) {
str := fmt.Sprintf("transaction input %d: signature "+
"script is not push only", i)
return txRuleError(wire.RejectNonstandard, str)
}
}
// None of the output public key scripts can be a non-standard script or
// be "dust" (except when the script is a null data script).
numNullDataOutputs := 0
for i, txOut := range msgTx.TxOut {
scriptClass := txscript.GetScriptClass(txOut.PkScript)
err := checkPkScriptStandard(txOut.PkScript, scriptClass)
if err != nil {
// Attempt to extract a reject code from the error so
// it can be retained. When not possible, fall back to
// a non-standard error.
rejectCode := wire.RejectNonstandard
if rejCode, found := extractRejectCode(err); found {
rejectCode = rejCode
}
str := fmt.Sprintf("transaction output %d: %v", i, err)
return txRuleError(rejectCode, str)
}
// Accumulate the number of outputs which only carry data. For
// all other script types, ensure the output value is not
// "dust".
if scriptClass == txscript.NullDataTy {
numNullDataOutputs++
} else if isDust(txOut, minRelayTxFee) {
str := fmt.Sprintf("transaction output %d: payment "+
"of %d is dust", i, txOut.Value)
return txRuleError(wire.RejectDust, str)
}
}
// A standard transaction must not have more than one output script that
// only carries data.
if numNullDataOutputs > 1 {
str := "more than one transaction output in a nulldata script"
return txRuleError(wire.RejectNonstandard, str)
}
return nil
}
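
// exampleStandardnessCheck is a hypothetical wrapper, not part of the
// original file, showing a typical invocation of checkTransactionStandard:
// the caller supplies the height of the next block, the chain's current
// median time past, and the policy values.  The relay fee and maximum
// transaction version used here (the package default fee and version 2) are
// assumptions for illustration only.
func exampleStandardnessCheck(tx *btcutil.Tx, nextBlockHeight int32,
	medianTimePast time.Time) error {

	const assumedMaxTxVersion = 2
	return checkTransactionStandard(tx, nextBlockHeight, medianTimePast,
		DefaultMinRelayTxFee, assumedMaxTxVersion)
}
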
// GetTxVirtualSize computes the virtual size of a given transaction. A
// transaction's virtual size is based on its weight, creating a discount for
// any witness data it contains, proportional to the current
// blockchain.WitnessScaleFactor value.
func GetTxVirtualSize(tx *btcutil.Tx) int64 {
// vSize := (weight(tx) + 3) / 4
// := (((baseSize * 3) + totalSize) + 3) / 4
// We add 3 here so the integer division by 4 computes the ceiling of the
// prior arithmetic.  The division by 4 creates a discount for any witness data.
return (blockchain.GetTransactionWeight(tx) + (blockchain.WitnessScaleFactor - 1)) /
blockchain.WitnessScaleFactor
}