lbcd/blockchain/scriptval.go

// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package blockchain

import (
	"fmt"
	"math"
	"runtime"
	"time"

	"github.com/lbryio/lbcd/txscript"
	"github.com/lbryio/lbcd/wire"
	btcutil "github.com/lbryio/lbcutil"
)

// txValidateItem holds a transaction along with which input to validate.
type txValidateItem struct {
	txInIndex int
	txIn      *wire.TxIn
	tx        *btcutil.Tx
	sigHashes *txscript.TxSigHashes
}
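
// Illustrative sketch (not part of the original file): the validator is fed
// a flat list of txValidateItem values covering every input of every
// transaction in a block, which is what keeps the worker goroutines evenly
// loaded. Assuming a block of type *btcutil.Block, the collation would look
// roughly like this; the per-transaction sigHashes field is elided for
// brevity:
//
//	var items []*txValidateItem
//	for _, tx := range block.Transactions() {
//		for txInIdx, txIn := range tx.MsgTx().TxIn {
//			items = append(items, &txValidateItem{
//				txInIndex: txInIdx,
//				txIn:      txIn,
//				tx:        tx,
//			})
//		}
//	}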

// txValidator provides a type which asynchronously validates transaction
// inputs. It provides several channels for communication and a processing
// function that is intended to be run in multiple goroutines.
type txValidator struct {
	validateChan chan *txValidateItem
	quitChan     chan struct{}
	resultChan   chan error
	utxoView     *UtxoViewpoint
	flags        txscript.ScriptFlags
	sigCache     *txscript.SigCache
	hashCache    *txscript.HashCache
}
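
// A minimal sketch (an assumption, not this file's actual constructor) of
// wiring up a txValidator and starting one worker per CPU core; utxoView,
// flags, sigCache, and hashCache are assumed to be supplied by the caller,
// and the real dispatch logic lives in the remainder of this file:
//
//	v := &txValidator{
//		validateChan: make(chan *txValidateItem),
//		quitChan:     make(chan struct{}),
//		resultChan:   make(chan error),
//		utxoView:     utxoView,
//		flags:        flags,
//		sigCache:     sigCache,
//		hashCache:    hashCache,
//	}
//	for i := 0; i < runtime.NumCPU(); i++ {
//		go v.validateHandler()
//	}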

// sendResult sends the result of a script pair validation on the internal
// result channel while respecting the quit channel. This allows orderly
// shutdown when the validation process is aborted early due to a validation
// error in one of the other goroutines.
func (v *txValidator) sendResult(result error) {
	select {
	case v.resultChan <- result:
	case <-v.quitChan:
	}
}
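
// The select above keeps a worker from blocking forever on resultChan once
// the dispatcher stops reading results after the first error; closing
// quitChan releases every such worker. An illustrative dispatcher fragment
// (a sketch, not this file's actual result loop, where numInputs is assumed
// to be the number of queued items):
//
//	for i := 0; i < numInputs; i++ {
//		if err := <-v.resultChan; err != nil {
//			close(v.quitChan)
//			return err
//		}
//	}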
// validateHandler consumes items to validate from the internal validate channel
// and returns the result of the validation on the internal result channel. It
// must be run as a goroutine.
func (v *txValidator) validateHandler() {
out:
for {
select {
case txVI := <-v.validateChan:
multi: Rework utxoset/view to use outpoints. This modifies the utxoset in the database and related UtxoViewpoint to store and work with unspent transaction outputs on a per-output basis instead of at a transaction level. This was inspired by similar recent changes in Bitcoin Core. The primary motivation is to simplify the code, pave the way for a utxo cache, and generally focus on optimizing runtime performance. The tradeoff is that this approach does somewhat increase the size of the serialized utxoset since it means that the transaction hash is duplicated for each output as a part of the key and some additional details such as whether the containing transaction is a coinbase and the block height it was a part of are duplicated in each output. However, in practice, the size difference isn't all that large, disk space is relatively cheap, certainly cheaper than memory, and it is much more important to provide more efficient runtime operation since that is the ultimate purpose of the daemon. While performing this conversion, it also simplifies the code to remove the transaction version information from the utxoset as well as the spend journal. The logic for only serializing it under certain circumstances is complicated and it isn't actually used anywhere aside from the gettxout RPC where it also isn't used by anything important either. Consequently, this also removes the version field of the gettxout RPC result. The utxos in the database are automatically migrated to the new format with this commit and it is possible to interrupt and resume the migration process. Finally, it also updates the tests for the new format and adds a new function to the tests to convert the old test data to the new format for convenience. The data has already been converted and updated in the commit. 
An overview of the changes are as follows: - Remove transaction version from both spent and unspent output entries - Update utxo serialization format to exclude the version - Modify the spend journal serialization format - The old version field is now reserved and always stores zero and ignores it when reading - This allows old entries to be used by new code without having to migrate the entire spend journal - Remove version field from gettxout RPC result - Convert UtxoEntry to represent a specific utxo instead of a transaction with all remaining utxos - Optimize for memory usage with an eye towards a utxo cache - Combine details such as whether the txout was contained in a coinbase, is spent, and is modified into a single packed field of bit flags - Align entry fields to eliminate extra padding since ultimately there will be a lot of these in memory - Introduce a free list for serializing an outpoint to the database key format to significantly reduce pressure on the GC - Update all related functions that previously dealt with transaction hashes to accept outpoints instead - Update all callers accordingly - Only add individually requested outputs from the mempool when constructing a mempool view - Modify the spend journal to always store the block height and coinbase information with every spent txout - Introduce code to handle fetching the missing information from another utxo from the same transaction in the event an old style entry is encountered - Make use of a database cursor with seek to do this much more efficiently than testing every possible output - Always decompress data loaded from the database now that a utxo entry only consists of a specific output - Introduce upgrade code to migrate the utxo set to the new format - Store versions of the utxoset and spend journal buckets - Allow migration process to be interrupted and resumed - Update all tests to expect the correct encodings, remove tests that no longer apply, and add new ones for the new expected behavior - Convert old tests for the legacy utxo format deserialization code to test the new function that is used during upgrade - Update the utxostore test data and add function that was used to convert it - Introduce a few new functions on UtxoViewpoint - AddTxOut for adding an individual txout versus all of them - addTxOut to handle the common code between the new AddTxOut and existing AddTxOuts - RemoveEntry for removing an individual txout - fetchEntryByHash for fetching any remaining utxo for a given transaction hash
2017-09-03 09:59:15 +02:00
// Ensure the referenced input utxo is available.
Rework and improve async script validation logic. The previous script validation logic entailed starting up a hard-coded number of goroutines to process the transaction scripts in parallel. In particular, one goroutine (up to 8 max) was started per transaction in a block and another one was started for each input script pair in the each transaction. This resulted in 64 goroutines simultaneously running scripts and verifying cryptographic signatures. This could easily lead to the overall system feeling sluggish. Further the previous design could also result in bursty behavior since the number of inputs to a transaction as well as its complexity can vary widely between transactions. For example, starting 2 goroutines (one to process the transaction and one for actual script pair validation) to verify a transaction with a single input was not desirable. Finally, the previous design validated all transactions and inputs regardless of a failure in one of the other scripts. This really didn't have a big impact since it's quite rare that blocks with invalid verifications are being processed, but it was a potential way DoS vector. This commit changes the logic in a few ways to improve things: - The max number of validation goroutines is now based on the number of cores in the system - All transaction inputs from all transactions in the block are collated into a single list which is fed through the aforementioned validation goroutines - The validation CPU usage is much more consistent due to the collation of inputs - A validation error in any goroutine immediately stops validation of all remaining inputs - The errors have been improved to include context about what tx script pair failed as opposed to showing the information as a warning This closes conformal/btcd#59.
2014-01-16 19:48:37 +01:00
txIn := txVI.txIn
multi: Rework utxoset/view to use outpoints. This modifies the utxoset in the database and related UtxoViewpoint to store and work with unspent transaction outputs on a per-output basis instead of at a transaction level. This was inspired by similar recent changes in Bitcoin Core. The primary motivation is to simplify the code, pave the way for a utxo cache, and generally focus on optimizing runtime performance. The tradeoff is that this approach does somewhat increase the size of the serialized utxoset since it means that the transaction hash is duplicated for each output as a part of the key and some additional details such as whether the containing transaction is a coinbase and the block height it was a part of are duplicated in each output. However, in practice, the size difference isn't all that large, disk space is relatively cheap, certainly cheaper than memory, and it is much more important to provide more efficient runtime operation since that is the ultimate purpose of the daemon. While performing this conversion, it also simplifies the code to remove the transaction version information from the utxoset as well as the spend journal. The logic for only serializing it under certain circumstances is complicated and it isn't actually used anywhere aside from the gettxout RPC where it also isn't used by anything important either. Consequently, this also removes the version field of the gettxout RPC result. The utxos in the database are automatically migrated to the new format with this commit and it is possible to interrupt and resume the migration process. Finally, it also updates the tests for the new format and adds a new function to the tests to convert the old test data to the new format for convenience. The data has already been converted and updated in the commit. 
An overview of the changes are as follows: - Remove transaction version from both spent and unspent output entries - Update utxo serialization format to exclude the version - Modify the spend journal serialization format - The old version field is now reserved and always stores zero and ignores it when reading - This allows old entries to be used by new code without having to migrate the entire spend journal - Remove version field from gettxout RPC result - Convert UtxoEntry to represent a specific utxo instead of a transaction with all remaining utxos - Optimize for memory usage with an eye towards a utxo cache - Combine details such as whether the txout was contained in a coinbase, is spent, and is modified into a single packed field of bit flags - Align entry fields to eliminate extra padding since ultimately there will be a lot of these in memory - Introduce a free list for serializing an outpoint to the database key format to significantly reduce pressure on the GC - Update all related functions that previously dealt with transaction hashes to accept outpoints instead - Update all callers accordingly - Only add individually requested outputs from the mempool when constructing a mempool view - Modify the spend journal to always store the block height and coinbase information with every spent txout - Introduce code to handle fetching the missing information from another utxo from the same transaction in the event an old style entry is encountered - Make use of a database cursor with seek to do this much more efficiently than testing every possible output - Always decompress data loaded from the database now that a utxo entry only consists of a specific output - Introduce upgrade code to migrate the utxo set to the new format - Store versions of the utxoset and spend journal buckets - Allow migration process to be interrupted and resumed - Update all tests to expect the correct encodings, remove tests that no longer apply, and add new ones for the new expected behavior - Convert old tests for the legacy utxo format deserialization code to test the new function that is used during upgrade - Update the utxostore test data and add function that was used to convert it - Introduce a few new functions on UtxoViewpoint - AddTxOut for adding an individual txout versus all of them - addTxOut to handle the common code between the new AddTxOut and existing AddTxOuts - RemoveEntry for removing an individual txout - fetchEntryByHash for fetching any remaining utxo for a given transaction hash
2017-09-03 09:59:15 +02:00
utxo := v.utxoView.LookupEntry(txIn.PreviousOutPoint)
if utxo == nil {
blockchain: Rework to use new db interface. This commit is the first stage of several that are planned to convert the blockchain package into a concurrent safe package that will ultimately allow support for multi-peer download and concurrent chain processing. The goal is to update btcd proper after each step so it can take advantage of the enhancements as they are developed. In addition to the aforementioned benefit, this staged approach has been chosen since it is absolutely critical to maintain consensus. Separating the changes into several stages makes it easier for reviewers to logically follow what is happening and therefore helps prevent consensus bugs. Naturally there are significant automated tests to help prevent consensus issues as well. The main focus of this stage is to convert the blockchain package to use the new database interface and implement the chain-related functionality which it no longer handles. It also aims to improve efficiency in various areas by making use of the new database and chain capabilities. The following is an overview of the chain changes: - Update to use the new database interface - Add chain-related functionality that the old database used to handle - Main chain structure and state - Transaction spend tracking - Implement a new pruned unspent transaction output (utxo) set - Provides efficient direct access to the unspent transaction outputs - Uses a domain specific compression algorithm that understands the standard transaction scripts in order to significantly compress them - Removes reliance on the transaction index and paves the way toward eventually enabling block pruning - Modify the New function to accept a Config struct instead of inidividual parameters - Replace the old TxStore type with a new UtxoViewpoint type that makes use of the new pruned utxo set - Convert code to treat the new UtxoViewpoint as a rolling view that is used between connects and disconnects to improve efficiency - Make best chain state always set when the chain instance is created - Remove now unnecessary logic for dealing with unset best state - Make all exported functions concurrent safe - Currently using a single chain state lock as it provides a straight forward and easy to review path forward however this can be improved with more fine grained locking - Optimize various cases where full blocks were being loaded when only the header is needed to help reduce the I/O load - Add the ability for callers to get a snapshot of the current best chain stats in a concurrent safe fashion - Does not block callers while new blocks are being processed - Make error messages that reference transaction outputs consistently use <transaction hash>:<output index> - Introduce a new AssertError type an convert internal consistency checks to use it - Update tests and examples to reflect the changes - Add a full suite of tests to ensure correct functionality of the new code The following is an overview of the btcd changes: - Update to use the new database and chain interfaces - Temporarily remove all code related to the transaction index - Temporarily remove all code related to the address index - Convert all code that uses transaction stores to use the new utxo view - Rework several calls that required the block manager for safe concurrency to use the chain package directly now that it is concurrent safe - Change all calls to obtain the best hash to use the new best state snapshot capability from the chain package - Remove workaround for limits on fetching height ranges since the new 
database interface no longer imposes them - Correct the gettxout RPC handler to return the best chain hash as opposed the hash the txout was found in - Optimize various RPC handlers: - Change several of the RPC handlers to use the new chain snapshot capability to avoid needlessly loading data - Update several handlers to use new functionality to avoid accessing the block manager so they are able to return the data without blocking when the server is busy processing blocks - Update non-verbose getblock to avoid deserialization and serialization overhead - Update getblockheader to request the block height directly from chain and only load the header - Update getdifficulty to use the new cached data from chain - Update getmininginfo to use the new cached data from chain - Update non-verbose getrawtransaction to avoid deserialization and serialization overhead - Update gettxout to use the new utxo store versus loading full transactions using the transaction index The following is an overview of the utility changes: - Update addblock to use the new database and chain interfaces - Update findcheckpoint to use the new database and chain interfaces - Remove the dropafter utility which is no longer supported NOTE: The transaction index and address index will be reimplemented in another commit.
2015-08-26 06:03:18 +02:00
str := fmt.Sprintf("unable to find unspent "+
multi: Rework utxoset/view to use outpoints. This modifies the utxoset in the database and related UtxoViewpoint to store and work with unspent transaction outputs on a per-output basis instead of at a transaction level. This was inspired by similar recent changes in Bitcoin Core. The primary motivation is to simplify the code, pave the way for a utxo cache, and generally focus on optimizing runtime performance. The tradeoff is that this approach does somewhat increase the size of the serialized utxoset since it means that the transaction hash is duplicated for each output as a part of the key and some additional details such as whether the containing transaction is a coinbase and the block height it was a part of are duplicated in each output. However, in practice, the size difference isn't all that large, disk space is relatively cheap, certainly cheaper than memory, and it is much more important to provide more efficient runtime operation since that is the ultimate purpose of the daemon. While performing this conversion, it also simplifies the code to remove the transaction version information from the utxoset as well as the spend journal. The logic for only serializing it under certain circumstances is complicated and it isn't actually used anywhere aside from the gettxout RPC where it also isn't used by anything important either. Consequently, this also removes the version field of the gettxout RPC result. The utxos in the database are automatically migrated to the new format with this commit and it is possible to interrupt and resume the migration process. Finally, it also updates the tests for the new format and adds a new function to the tests to convert the old test data to the new format for convenience. The data has already been converted and updated in the commit. 
An overview of the changes are as follows: - Remove transaction version from both spent and unspent output entries - Update utxo serialization format to exclude the version - Modify the spend journal serialization format - The old version field is now reserved and always stores zero and ignores it when reading - This allows old entries to be used by new code without having to migrate the entire spend journal - Remove version field from gettxout RPC result - Convert UtxoEntry to represent a specific utxo instead of a transaction with all remaining utxos - Optimize for memory usage with an eye towards a utxo cache - Combine details such as whether the txout was contained in a coinbase, is spent, and is modified into a single packed field of bit flags - Align entry fields to eliminate extra padding since ultimately there will be a lot of these in memory - Introduce a free list for serializing an outpoint to the database key format to significantly reduce pressure on the GC - Update all related functions that previously dealt with transaction hashes to accept outpoints instead - Update all callers accordingly - Only add individually requested outputs from the mempool when constructing a mempool view - Modify the spend journal to always store the block height and coinbase information with every spent txout - Introduce code to handle fetching the missing information from another utxo from the same transaction in the event an old style entry is encountered - Make use of a database cursor with seek to do this much more efficiently than testing every possible output - Always decompress data loaded from the database now that a utxo entry only consists of a specific output - Introduce upgrade code to migrate the utxo set to the new format - Store versions of the utxoset and spend journal buckets - Allow migration process to be interrupted and resumed - Update all tests to expect the correct encodings, remove tests that no longer apply, and add new ones for the new expected behavior - Convert old tests for the legacy utxo format deserialization code to test the new function that is used during upgrade - Update the utxostore test data and add function that was used to convert it - Introduce a few new functions on UtxoViewpoint - AddTxOut for adding an individual txout versus all of them - addTxOut to handle the common code between the new AddTxOut and existing AddTxOuts - RemoveEntry for removing an individual txout - fetchEntryByHash for fetching any remaining utxo for a given transaction hash
2017-09-03 09:59:15 +02:00
"output %v referenced from "+
blockchain: Rework to use new db interface. This commit is the first stage of several that are planned to convert the blockchain package into a concurrent safe package that will ultimately allow support for multi-peer download and concurrent chain processing. The goal is to update btcd proper after each step so it can take advantage of the enhancements as they are developed. In addition to the aforementioned benefit, this staged approach has been chosen since it is absolutely critical to maintain consensus. Separating the changes into several stages makes it easier for reviewers to logically follow what is happening and therefore helps prevent consensus bugs. Naturally there are significant automated tests to help prevent consensus issues as well. The main focus of this stage is to convert the blockchain package to use the new database interface and implement the chain-related functionality which it no longer handles. It also aims to improve efficiency in various areas by making use of the new database and chain capabilities. The following is an overview of the chain changes: - Update to use the new database interface - Add chain-related functionality that the old database used to handle - Main chain structure and state - Transaction spend tracking - Implement a new pruned unspent transaction output (utxo) set - Provides efficient direct access to the unspent transaction outputs - Uses a domain specific compression algorithm that understands the standard transaction scripts in order to significantly compress them - Removes reliance on the transaction index and paves the way toward eventually enabling block pruning - Modify the New function to accept a Config struct instead of inidividual parameters - Replace the old TxStore type with a new UtxoViewpoint type that makes use of the new pruned utxo set - Convert code to treat the new UtxoViewpoint as a rolling view that is used between connects and disconnects to improve efficiency - Make best chain state always set when the chain instance is created - Remove now unnecessary logic for dealing with unset best state - Make all exported functions concurrent safe - Currently using a single chain state lock as it provides a straight forward and easy to review path forward however this can be improved with more fine grained locking - Optimize various cases where full blocks were being loaded when only the header is needed to help reduce the I/O load - Add the ability for callers to get a snapshot of the current best chain stats in a concurrent safe fashion - Does not block callers while new blocks are being processed - Make error messages that reference transaction outputs consistently use <transaction hash>:<output index> - Introduce a new AssertError type an convert internal consistency checks to use it - Update tests and examples to reflect the changes - Add a full suite of tests to ensure correct functionality of the new code The following is an overview of the btcd changes: - Update to use the new database and chain interfaces - Temporarily remove all code related to the transaction index - Temporarily remove all code related to the address index - Convert all code that uses transaction stores to use the new utxo view - Rework several calls that required the block manager for safe concurrency to use the chain package directly now that it is concurrent safe - Change all calls to obtain the best hash to use the new best state snapshot capability from the chain package - Remove workaround for limits on fetching height ranges since the new 
database interface no longer imposes them - Correct the gettxout RPC handler to return the best chain hash as opposed the hash the txout was found in - Optimize various RPC handlers: - Change several of the RPC handlers to use the new chain snapshot capability to avoid needlessly loading data - Update several handlers to use new functionality to avoid accessing the block manager so they are able to return the data without blocking when the server is busy processing blocks - Update non-verbose getblock to avoid deserialization and serialization overhead - Update getblockheader to request the block height directly from chain and only load the header - Update getdifficulty to use the new cached data from chain - Update getmininginfo to use the new cached data from chain - Update non-verbose getrawtransaction to avoid deserialization and serialization overhead - Update gettxout to use the new utxo store versus loading full transactions using the transaction index The following is an overview of the utility changes: - Update addblock to use the new database and chain interfaces - Update findcheckpoint to use the new database and chain interfaces - Remove the dropafter utility which is no longer supported NOTE: The transaction index and address index will be reimplemented in another commit.
2015-08-26 06:03:18 +02:00
"transaction %s:%d",
txIn.PreviousOutPoint, txVI.tx.Hash(),
blockchain: Rework to use new db interface. This commit is the first stage of several that are planned to convert the blockchain package into a concurrent safe package that will ultimately allow support for multi-peer download and concurrent chain processing. The goal is to update btcd proper after each step so it can take advantage of the enhancements as they are developed. In addition to the aforementioned benefit, this staged approach has been chosen since it is absolutely critical to maintain consensus. Separating the changes into several stages makes it easier for reviewers to logically follow what is happening and therefore helps prevent consensus bugs. Naturally there are significant automated tests to help prevent consensus issues as well. The main focus of this stage is to convert the blockchain package to use the new database interface and implement the chain-related functionality which it no longer handles. It also aims to improve efficiency in various areas by making use of the new database and chain capabilities. The following is an overview of the chain changes: - Update to use the new database interface - Add chain-related functionality that the old database used to handle - Main chain structure and state - Transaction spend tracking - Implement a new pruned unspent transaction output (utxo) set - Provides efficient direct access to the unspent transaction outputs - Uses a domain specific compression algorithm that understands the standard transaction scripts in order to significantly compress them - Removes reliance on the transaction index and paves the way toward eventually enabling block pruning - Modify the New function to accept a Config struct instead of inidividual parameters - Replace the old TxStore type with a new UtxoViewpoint type that makes use of the new pruned utxo set - Convert code to treat the new UtxoViewpoint as a rolling view that is used between connects and disconnects to improve efficiency - Make best chain state always set when the chain instance is created - Remove now unnecessary logic for dealing with unset best state - Make all exported functions concurrent safe - Currently using a single chain state lock as it provides a straight forward and easy to review path forward however this can be improved with more fine grained locking - Optimize various cases where full blocks were being loaded when only the header is needed to help reduce the I/O load - Add the ability for callers to get a snapshot of the current best chain stats in a concurrent safe fashion - Does not block callers while new blocks are being processed - Make error messages that reference transaction outputs consistently use <transaction hash>:<output index> - Introduce a new AssertError type an convert internal consistency checks to use it - Update tests and examples to reflect the changes - Add a full suite of tests to ensure correct functionality of the new code The following is an overview of the btcd changes: - Update to use the new database and chain interfaces - Temporarily remove all code related to the transaction index - Temporarily remove all code related to the address index - Convert all code that uses transaction stores to use the new utxo view - Rework several calls that required the block manager for safe concurrency to use the chain package directly now that it is concurrent safe - Change all calls to obtain the best hash to use the new best state snapshot capability from the chain package - Remove workaround for limits on fetching height ranges since the new 
database interface no longer imposes them - Correct the gettxout RPC handler to return the best chain hash as opposed the hash the txout was found in - Optimize various RPC handlers: - Change several of the RPC handlers to use the new chain snapshot capability to avoid needlessly loading data - Update several handlers to use new functionality to avoid accessing the block manager so they are able to return the data without blocking when the server is busy processing blocks - Update non-verbose getblock to avoid deserialization and serialization overhead - Update getblockheader to request the block height directly from chain and only load the header - Update getdifficulty to use the new cached data from chain - Update getmininginfo to use the new cached data from chain - Update non-verbose getrawtransaction to avoid deserialization and serialization overhead - Update gettxout to use the new utxo store versus loading full transactions using the transaction index The following is an overview of the utility changes: - Update addblock to use the new database and chain interfaces - Update findcheckpoint to use the new database and chain interfaces - Remove the dropafter utility which is no longer supported NOTE: The transaction index and address index will be reimplemented in another commit.
2015-08-26 06:03:18 +02:00
txVI.txInIndex)
err := ruleError(ErrMissingTxOut, str)
v.sendResult(err)
break out
}
// Create a new script engine for the script pair.
sigScript := txIn.SignatureScript
witness := txIn.Witness
pkScript := utxo.PkScript()
inputAmount := utxo.Amount()
vm, err := txscript.NewEngine(pkScript, txVI.tx.MsgTx(),
txVI.txInIndex, v.flags, v.sigCache, txVI.sigHashes,
inputAmount)
if err != nil {
str := fmt.Sprintf("failed to parse input "+
"%s:%d which references output %v - "+
"%v (input witness %x, input script "+
"bytes %x, prev output script bytes %x)",
txVI.tx.Hash(), txVI.txInIndex,
txIn.PreviousOutPoint, err, witness,
sigScript, pkScript)
err := ruleError(ErrScriptMalformed, str)
v.sendResult(err)
break out
}
// Execute the script pair.
if err := vm.Execute(); err != nil {
str := fmt.Sprintf("failed to validate input "+
"%s:%d which references output %v - "+
"%v (input witness %x, input script "+
"bytes %x, prev output script bytes %x)",
txVI.tx.Hash(), txVI.txInIndex,
txIn.PreviousOutPoint, err, witness,
sigScript, pkScript)
err := ruleError(ErrScriptValidation, str)
v.sendResult(err)
break out
}
// Validation succeeded.
v.sendResult(nil)
case <-v.quitChan:
break out
}
}
}
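// Each validateHandler is one worker in the pool started by Validate: it
// pulls txValidateItem values off validateChan, reports each result on
// resultChan, and exits when quitChan is closed.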
// Validate validates the scripts for all of the passed transaction inputs using
// multiple goroutines.
func (v *txValidator) Validate(items []*txValidateItem) error {
if len(items) == 0 {
return nil
}
// Limit the number of goroutines to do script validation based on the
// number of processor cores. This helps ensure the system stays
// reasonably responsive under heavy load.
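// Using a small multiple of the core count, rather than the core count
// itself, presumably keeps every core busy even while some of the
// validation goroutines are momentarily blocked on channel operations.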
maxGoRoutines := runtime.NumCPU() * 3
if maxGoRoutines <= 0 {
maxGoRoutines = 1
}
if maxGoRoutines > len(items) {
maxGoRoutines = len(items)
}
// Start up validation handlers that are used to asynchronously
// validate each transaction input.
for i := 0; i < maxGoRoutines; i++ {
go v.validateHandler()
}
// Validate each of the inputs. The quit channel is closed when any
// errors occur so all processing goroutines exit regardless of which
// input had the validation error.
numInputs := len(items)
currentItem := 0
processedItems := 0
for processedItems < numInputs {
// Only send items while there are still items that need to
// be processed. The select statement will never select a nil
// channel.
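// (A send on a nil channel blocks forever and is therefore never ready
// in a select, so leaving validateChan nil once every item has been
// dispatched effectively disables the send case below.)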
var validateChan chan *txValidateItem
var item *txValidateItem
if currentItem < numInputs {
validateChan = v.validateChan
item = items[currentItem]
}
select {
case validateChan <- item:
currentItem++
case err := <-v.resultChan:
processedItems++
if err != nil {
close(v.quitChan)
return err
}
}
}
close(v.quitChan)
return nil
}
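// The dispatch loop in Validate above is an instance of Go's nil-channel
// select idiom: a send case whose channel is nil is never ready, so the
// case is effectively disabled. A minimal sketch of the pattern, using
// hypothetical names (work, workCh, resultCh) inside a dispatch loop:
//
//	var sendCh chan int
//	var next int
//	if sent < len(work) {
//		sendCh = workCh // non-nil enables the send case
//		next = work[sent]
//	}
//	select {
//	case sendCh <- next:
//		sent++
//	case err := <-resultCh:
//		done++
//		_ = err // a real caller would handle err here
//	}
//
// Note that next is assigned only under the guard: select evaluates the
// send value up front, so indexing work[sent] unconditionally would panic
// once sent reached len(work).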
// newTxValidator returns a new instance of txValidator to be used for
// validating transaction scripts asynchronously.
func newTxValidator(utxoView *UtxoViewpoint, flags txscript.ScriptFlags,
sigCache *txscript.SigCache, hashCache *txscript.HashCache) *txValidator {
return &txValidator{
validateChan: make(chan *txValidateItem),
quitChan: make(chan struct{}),
resultChan: make(chan error),
utxoView: utxoView,
sigCache: sigCache,
hashCache: hashCache,
flags: flags,
}
}
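// As an illustrative sketch only (not code from this file), a caller holding
// a slice of txValidateItem values would drive the validator like so,
// assuming utxoView, flags, sigCache, and hashCache are already populated:
//
//	validator := newTxValidator(utxoView, flags, sigCache, hashCache)
//	if err := validator.Validate(txValItems); err != nil {
//		return err // a ruleError identifying the failing script pair
//	}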
// ValidateTransactionScripts validates the scripts for the passed transaction
// using multiple goroutines.
func ValidateTransactionScripts(tx *btcutil.Tx, utxoView *UtxoViewpoint,
flags txscript.ScriptFlags, sigCache *txscript.SigCache,
hashCache *txscript.HashCache) error {
// First determine if segwit is active according to the scriptFlags. If
// it isn't, then we don't need to interact with the HashCache.
segwitActive := flags&txscript.ScriptVerifyWitness == txscript.ScriptVerifyWitness
// If the hashcache doesn't yet have the sighash midstates for this
// transaction, then we'll compute them now so we can re-use them
// amongst all worker validation goroutines.
if segwitActive && tx.MsgTx().HasWitness() &&
!hashCache.ContainsHashes(tx.Hash()) {
hashCache.AddSigHashes(tx.MsgTx())
}
var cachedHashes *txscript.TxSigHashes
if segwitActive && tx.MsgTx().HasWitness() {
// The same pointer to the transaction's sighash midstate will
// be re-used amongst all validation goroutines. By
// pre-computing the sighash here instead of during validation,
// we ensure the sighashes are only computed once.
cachedHashes, _ = hashCache.GetSigHashes(tx.Hash())
}
// Collect all of the transaction inputs and required information for
// validation.
txIns := tx.MsgTx().TxIn
txValItems := make([]*txValidateItem, 0, len(txIns))
for txInIdx, txIn := range txIns {
// Skip coinbases.
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
continue
}
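// Track the input along with the shared sighash midstate so each
// validator worker can verify it independently.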
txVI := &txValidateItem{
txInIndex: txInIdx,
txIn: txIn,
tx: tx,
sigHashes: cachedHashes,
}
txValItems = append(txValItems, txVI)
}
// Validate all of the inputs.
validator := newTxValidator(utxoView, flags, sigCache, hashCache)
return validator.Validate(txValItems)
}
// checkBlockScripts executes and validates the scripts for all transactions in
// the passed block using multiple goroutines.
func checkBlockScripts(block *btcutil.Block, utxoView *UtxoViewpoint,
scriptFlags txscript.ScriptFlags, sigCache *txscript.SigCache,
hashCache *txscript.HashCache) error {
// First determine if segwit is active according to the scriptFlags. If
// it isn't then we don't need to interact with the HashCache.
segwitActive := scriptFlags&txscript.ScriptVerifyWitness == txscript.ScriptVerifyWitness
// Collect all of the transaction inputs and required information for
// validation for all transactions in the block into a single slice.
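// Pre-count the inputs across all transactions so the slice below can
// be allocated in a single step.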
numInputs := 0
for _, tx := range block.Transactions() {
numInputs += len(tx.MsgTx().TxIn)
}
txValItems := make([]*txValidateItem, 0, numInputs)
for _, tx := range block.Transactions() {
hash := tx.Hash()
// If the HashCache is present, and it doesn't yet contain the
// partial sighashes for this transaction, then we add the
// sighashes for the transaction. This allows us to take
// advantage of the potential speed savings due to the new
// digest algorithm (BIP0143).
if segwitActive && tx.HasWitness() && hashCache != nil &&
!hashCache.ContainsHashes(hash) {
hashCache.AddSigHashes(tx.MsgTx())
}
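// Fetch the sighash midstate from the shared cache when one is
// available; otherwise compute it on the fly for this transaction
// alone.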
var cachedHashes *txscript.TxSigHashes
if segwitActive && tx.HasWitness() {
if hashCache != nil {
cachedHashes, _ = hashCache.GetSigHashes(hash)
} else {
cachedHashes = txscript.NewTxSigHashes(tx.MsgTx())
}
}
for txInIdx, txIn := range tx.MsgTx().TxIn {
// Skip coinbases.
if txIn.PreviousOutPoint.Index == math.MaxUint32 {
continue
}
txVI := &txValidateItem{
txInIndex: txInIdx,
txIn: txIn,
tx: tx,
sigHashes: cachedHashes,
}
txValItems = append(txValItems, txVI)
}
}
// Validate all of the inputs.
validator := newTxValidator(utxoView, scriptFlags, sigCache, hashCache)
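// Time the validation run so the elapsed duration can be reported at
// trace level below.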
start := time.Now()
if err := validator.Validate(txValItems); err != nil {
return err
}
elapsed := time.Since(start)
log.Tracef("block %v took %v to verify", block.Hash(), elapsed)
// If the HashCache is present, once we have validated the block, we no
// longer need the cached hashes for these transactions, so we purge
// them from the cache.
if segwitActive && hashCache != nil {
for _, tx := range block.Transactions() {
if tx.MsgTx().HasWitness() {
hashCache.PurgeSigHashes(tx.Hash())
}
}
}
return nil
}
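// A minimal usage sketch (hypothetical caller, not part of this file):
// block validation would typically share a single SigCache and HashCache
// across calls so signature verification and sighash work is amortized.
// The cache sizes below are illustrative assumptions only.
//
//	sigCache := txscript.NewSigCache(100000)
//	hashCache := txscript.NewHashCache(10000)
//	if err := checkBlockScripts(block, utxoView, scriptFlags,
//		sigCache, hashCache); err != nil {
//		// The error identifies the failing transaction input.
//	}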