The previous script validation logic entailed starting up a hard-coded
number of goroutines to process the transaction scripts in parallel. In
particular, one goroutine (up to 8 max) was started per transaction in a
block and another one was started for each input script pair in each
transaction. This resulted in up to 64 goroutines simultaneously running
scripts and verifying cryptographic signatures. This could easily lead to
the overall system feeling sluggish.
Further, the previous design could also result in bursty behavior since the
number of inputs to a transaction as well as its complexity can vary
widely between transactions. For example, starting 2 goroutines (one to
process the transaction and one for actual script pair validation) to
verify a transaction with a single input was not desirable.
Finally, the previous design validated all transactions and inputs
regardless of a failure in one of the other scripts. This really didn't
have a big impact in practice since blocks with invalid scripts are rarely
processed, but it was a potential DoS vector.
This commit changes the logic in a few ways to improve things:
- The max number of validation goroutines is now based on the number of
cores in the system
- All transaction inputs from all transactions in the block are collated
into a single list which is fed through the aforementioned validation
goroutines
- The validation CPU usage is much more consistent due to the collation of
inputs
- A validation error in any goroutine immediately stops validation of all
remaining inputs
- The errors have been improved to include context about what tx script
pair failed as opposed to showing the information as a warning
This closes conformal/btcd#59.
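The following is a minimal sketch of the collated worker-pool approach
described above; the names (txValidateItem, validateScripts) are illustrative
stand-ins rather than the actual btcd identifiers:

    package example

    import (
        "runtime"
        "sync"
    )

    // txValidateItem pairs a single transaction input with the script data
    // needed to validate it. All inputs from all transactions in a block are
    // collated into one slice of these items. (Illustrative type only.)
    type txValidateItem struct {
        txInIndex int
        sigScript []byte
        pkScript  []byte
    }

    // validateScripts feeds the collated inputs through a fixed pool of
    // worker goroutines sized by the number of CPU cores. The first
    // validation error stops the remaining work.
    func validateScripts(items []txValidateItem, check func(txValidateItem) error) error {
        numWorkers := runtime.NumCPU()
        work := make(chan txValidateItem)
        quit := make(chan struct{})
        errChan := make(chan error, numWorkers)

        var wg sync.WaitGroup
        for i := 0; i < numWorkers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for {
                    select {
                    case item, ok := <-work:
                        if !ok {
                            return
                        }
                        if err := check(item); err != nil {
                            errChan <- err
                            return
                        }
                    case <-quit:
                        return
                    }
                }
            }()
        }

        // Feed items to the workers, aborting as soon as any worker
        // reports an error.
        var firstErr error
    feed:
        for _, item := range items {
            select {
            case work <- item:
            case firstErr = <-errChan:
                close(quit)
                break feed
            }
        }
        close(work)
        wg.Wait()

        // Pick up an error that arrived after the feed loop had finished.
        if firstErr == nil {
            select {
            case firstErr = <-errChan:
            default:
            }
        }
        return firstErr
    }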
The RPC server was performing some of the shutdown logic in the wrong
order, that is, logging that the server has shut down, waiting for all
server goroutines to finish, and then closing a channel to notify
server goroutines to stop. These three items have been reversed to
fix a hang where goroutines currently being waited on had not shut
down because they did not receive the notification.
While here, the server waitgroup was incremented for a goroutine that
was running without it, another select statement was added to stop a
duplicate close (which never occurred last commit when I added the
select statements), and the "stopping rescan" logging was moved to
debug to make the ^C shutdown logging nicer.
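For reference, a rough sketch of the corrected shutdown ordering; the type
and field names (rpcServer, quit, wg) are illustrative rather than the actual
btcd RPC server fields:

    package example

    import (
        "log"
        "sync"
    )

    // rpcServer is an illustrative stand-in, not the real btcd server type.
    type rpcServer struct {
        quit chan struct{}
        wg   sync.WaitGroup
    }

    // Stop notifies the server goroutines to shut down, waits for them to
    // finish, and only then logs that the server has shut down. The select
    // guards against a duplicate close if Stop is called more than once.
    func (s *rpcServer) Stop() {
        select {
        case <-s.quit:
            // Already shutting down; avoid closing the channel twice.
            return
        default:
        }

        // 1. Notify all server goroutines to stop.
        close(s.quit)

        // 2. Wait for every goroutine tracked by the waitgroup to finish.
        s.wg.Wait()

        // 3. Only now is it accurate to log that the server has shut down.
        log.Println("RPC server shutdown complete")
    }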
It was pointed out in #76 that if you arrived at the Updating section
of the README without seeing the Installation section, the requirement for
Go 1.2 is easy to miss. This commit bolds the requirement in the
Installation section and adds it to the Updating section as well to
hopefully make it more noticeable.
The addblock utility was originally written as a quick debug tool during
initial development to populate blocks into the database. However, now
that it has been designated as the standard way to import bootstrap.dat
(and indeed block data files in general), it was lacking a few features
such as properly checking against the chain rules and known good
checkpoints.
This commit reworks and improves the utility in several ways:
- Imported blocks are now checked against the chain rules including
checkpoints to ensure they match the known good chain
- The utility now properly shuts down after processing all blocks
- Attempting to import orphan blocks (blocks which build off a block you
don't yet have in the database) returns an error
- Blocks that are already known are now skipped instead of causing an
error which means you can stop and restart the import mid-way through
without issues or start it after you've already downloaded a
portion of the chain
- The block height is no longer assumed to start at 0 which means input
files that start later in the chain will work properly so long as you
already have the chain at least up to the point of the block just before
the first one in the input file
- Improved error handling and reporting
- How often the progress display is shown is now configurable
- Statistics about how many blocks were processed, imported, and already
known are now displayed after the input file has been fully processed
This resolves comments made in #60.
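A simplified sketch of the per-block import behavior described above; the
names here (blockImporter, haveBlock, processByChain) are hypothetical
stand-ins, not the actual addblock code:

    package example

    import "fmt"

    // block is a minimal stand-in for a parsed block.
    type block interface {
        Hash() string
        ParentHash() string
    }

    // blockImporter carries hypothetical hooks for the database lookup and
    // the full chain-rule (including checkpoint) validation.
    type blockImporter struct {
        haveBlock      func(hash string) bool
        processByChain func(b block) error
        imported       int64
        alreadyKnown   int64
    }

    // importBlock mirrors the behavior listed above: already-known blocks
    // are skipped, orphans are rejected with an error, and everything else
    // must pass the chain rules before it counts as imported.
    func (bi *blockImporter) importBlock(b block) (bool, error) {
        // Skipping known blocks lets an interrupted import be restarted.
        if bi.haveBlock(b.Hash()) {
            bi.alreadyKnown++
            return false, nil
        }

        // Orphan blocks (parent not yet in the database) are an error.
        if !bi.haveBlock(b.ParentHash()) {
            return false, fmt.Errorf("block %s is an orphan: parent %s not found",
                b.Hash(), b.ParentHash())
        }

        // Validate against the chain rules, including checkpoints.
        if err := bi.processByChain(b); err != nil {
            return false, err
        }
        bi.imported++
        return true, nil
    }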
The warning about a missing config file should only be shown after all
other configuration has succeeded so it's not shown when there are invalid
options specified. Also add a comment marking where it is intended to be
placed in the future.
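A tiny sketch of that ordering, using hypothetical parseConfigFile and
validateOptions helpers in place of the real config handling:

    package example

    import (
        "errors"
        "log"
    )

    type config struct{ DataDir string }

    // Hypothetical stand-ins for the real parsing and validation steps.
    func parseConfigFile() (*config, error) {
        return &config{}, errors.New("config file not found")
    }

    func validateOptions(cfg *config) error { return nil }

    // loadConfig only emits the missing-config-file warning once all other
    // configuration has succeeded, so the warning never appears alongside
    // errors about invalid options.
    func loadConfig() (*config, error) {
        cfg, configFileErr := parseConfigFile()

        if err := validateOptions(cfg); err != nil {
            // Invalid options: fail without the missing-config warning.
            return nil, err
        }

        // All other configuration succeeded; now it is safe to warn.
        if configFileErr != nil {
            log.Printf("warning: %v", configFileErr)
        }
        return cfg, nil
    }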
This changes the protocol between btcd and btcwallet to follow the
JSON-RPC specification by sending notifications as requests with an
empty ID.
The notification request context handling has been greatly cleaned up
now that IDs no longer need to be saved when sending notifications.
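To illustrate the format, here is a minimal sketch of a request-shaped
notification with an empty ID; the method name and parameters are made up
for the example:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // jsonRPCRequest is a minimal JSON-RPC request envelope. A notification
    // is simply a request whose ID is empty (it marshals as "id":null),
    // which tells the receiver that no response should be sent.
    type jsonRPCRequest struct {
        Jsonrpc string        `json:"jsonrpc"`
        ID      interface{}   `json:"id"`
        Method  string        `json:"method"`
        Params  []interface{} `json:"params"`
    }

    func main() {
        // Example notification (method and params are illustrative only).
        ntfn := jsonRPCRequest{
            Jsonrpc: "1.0",
            ID:      nil,
            Method:  "blockconnected",
            Params:  []interface{}{"<block hash>", 12345},
        }
        b, _ := json.Marshal(ntfn)
        fmt.Println(string(b)) // the "id":null field marks it as a notification
    }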
This commit changes a couple of sections which deal with large lists of
inventory vectors to use the new size hint functions recently added to
btcwire. This allows a bit more efficiency since the size of the list is
known up front and we can therefore avoid dynamically growing the backing
array several times. This also reduces GC churn and helps avoid a Go bug
that leaks memory on appends.
This commit adds two new functions named NewMsgGetDataSizeHint and
NewMsgInvSizeHint. These are intended to allow callers which know in
advance how large the inventory lists will grow to provide that
information when creating the message. This in turn provides a
mechanism to avoid the need to perform several grow operations on the
backing array when adding large numbers of inventory vectors.
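Here is a sketch of the idea behind the size-hint constructors; the internals
shown (including the clamp against the per-message inventory limit) are
assumptions about the approach rather than a copy of the btcwire code:

    package main

    import "fmt"

    // InvVect and MsgInv are stand-ins for the btcwire types.
    type InvVect struct {
        Type uint32
        Hash [32]byte
    }

    type MsgInv struct {
        InvList []*InvVect
    }

    // NewMsgInvSizeHint allocates the backing array once, up front, when the
    // caller already knows roughly how many inventory vectors will be added,
    // avoiding repeated growth by append.
    func NewMsgInvSizeHint(sizeHint uint) *MsgInv {
        // Clamp the hint so a bad value cannot force a huge allocation
        // (50000 matches the protocol's per-message inventory limit).
        const maxInvPerMsg = 50000
        if sizeHint > maxInvPerMsg {
            sizeHint = maxInvPerMsg
        }
        return &MsgInv{InvList: make([]*InvVect, 0, sizeHint)}
    }

    func main() {
        // The caller knows it will queue roughly 500 inventory vectors.
        msg := NewMsgInvSizeHint(500)
        for i := 0; i < 500; i++ {
            msg.InvList = append(msg.InvList, &InvVect{Type: 2})
        }
        fmt.Println(len(msg.InvList), cap(msg.InvList)) // 500 500: no re-growth
    }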
This commit changes all code which deals with extracting addresses from
scripts to use the btcscript API ExtractPkScriptAddrs which in turn makes
use of the new btcutil.Address interface.
This provides much cleaner code for dealing with arbitrary script
destinations which is extensible without having to churn the APIs if new
destination types are added.
This commit significantly changes the address extraction code. The
original code was written before some of the other newer code was written
and as a result essentially duplicated some of the logic for handling
standard scripts which is used elsewhere in the package.
The following is a summary of what has changed:
- CalcPkScriptAddrHashes, ScriptToAddrHash, and ScriptToAddrHashes have
been replaced by ExtractPkScriptAddresses
- The ScriptType type has been removed in favor of the existing
ScriptClass type
- The new function returns a slice of btcutil.Addresses instead of raw
hashes which the caller would then have to figure out how to convert
into proper addresses
- The new function makes use of the existing ScriptClass instead of a
nearly duplicate ScriptType
- The new function hooks into the existing infrastructure for parsing
scripts and identifying scripts of standard forms
- The new function only works with pkscripts to match the behavior of the
reference implementation - do note that the redeeming script from a p2sh
script is still considered a pkscript
- The logic combines extraction for all script types instead of using a
separate function for multi-signature transactions
- The new function ignores addresses which are invalid for some reason
such as invalid public keys
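A usage sketch of the extraction API follows; note it is written against the
present-day btcsuite import paths and signature, which may differ in detail
from the btcscript API as it existed when this change landed:

    package main

    import (
        "encoding/hex"
        "fmt"

        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/txscript"
    )

    func main() {
        // A standard pay-to-pubkey-hash output script.
        pkScript, _ := hex.DecodeString(
            "76a914128004ff2fcaf13b2b91eb654b1dc2b674f7ec6188ac")

        // The function identifies the script class and returns the addresses
        // the script pays to along with the number of required signatures.
        // (Modern txscript call; the original btcscript form may differ.)
        class, addrs, reqSigs, err := txscript.ExtractPkScriptAddrs(
            pkScript, &chaincfg.MainNetParams)
        if err != nil {
            fmt.Println("extract failed:", err)
            return
        }

        fmt.Println("class:", class)     // pubkeyhash
        fmt.Println("reqSigs:", reqSigs) // 1
        for _, addr := range addrs {
            // addr satisfies the btcutil.Address interface.
            fmt.Println("address:", addr.EncodeAddress())
        }
    }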