This modifies the tests for the recently-added NullDataScript function
in several ways in an effort to make them more consistent with the
tests in the rest of the code base and to improve/correct the logic:
- Use the hexToBytes and mustParseShortForm functions
- Consistently format the test errors
- Replace the valid bool flag with an expected error and test against it
- Ensure the returned script type is the expected type in all cases
This adds a new function named NullDataScript to the txscript package
that returns a provably-pruneable OP_RETURN script with the provided
data. The function will return an error if the provided data is larger
than the maximum allowed length for a nulldata script to be considered
standard.
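For illustration, a minimal usage sketch of the new function. The import
path and payload are assumptions here; the data-in, script-and-error-out
behavior follows the description above.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/txscript"
)

func main() {
	// Embed a small payload in a provably-pruneable OP_RETURN output.
	script, err := txscript.NullDataScript([]byte("hello"))
	if err != nil {
		// Returned when the data exceeds the maximum length allowed
		// for the script to be considered a standard nulldata script.
		fmt.Println("data too large:", err)
		return
	}
	fmt.Printf("nulldata script: %x\n", script)
}
```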
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much, and it also
gets rid of the need for internal_test.go to act as a bridge.
Also, do some light cleanup on a few tests while here.
This corrects the isNullData standard transaction type test to work
properly with canonically-encoded data pushes. In particular,
single-byte data pushes that are small integers (0-16) are converted to
the equivalent numeric opcodes when canonically encoded, and the code
failed to detect them properly.
It also adds several tests to ensure that both canonical and
non-canonical nulldata scripts are recognized properly and modifies the
test failure print to include the script that failed.
This does not affect consensus since it is just a standardness check.
This exposes a new function on the ScriptBuilder type named AddOps that
allows multiple opcodes to be added via a single call and adds tests to
exercise the new function.
Finally, it updates a couple of places in the signing code that were
abusing the interface by setting its private script directly so that
they use the new public function instead.
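A short sketch of the new method in use; the pay-to-pubkey-hash fragment
below is only an illustration, but AddOps itself takes a byte slice of
opcode values and appends them in a single call as described above.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/txscript"
)

func main() {
	// Append several opcodes at once instead of chaining AddOp calls.
	builder := txscript.NewScriptBuilder()
	builder.AddOps([]byte{txscript.OP_DUP, txscript.OP_HASH160})

	script, err := builder.Script()
	if err != nil {
		fmt.Println("script build failed:", err)
		return
	}
	fmt.Printf("%x\n", script)
}
```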
This is mostly a backport of some of the same modifications made in
Decred along with a few additional things cleaned up. In particular,
this updates the code to make use of the new chainhash package.
Also, since this required API changes anyways and the hash algorithm is
no longer tied specifically to SHA, all other functions throughout the
code base which had "Sha" in their name have been changed to Hash so
they are not incorrectly implying the hash algorithm.
The following is an overview of the changes:
- Remove the wire.ShaHash type
- Update all references to wire.ShaHash to the new chainhash.Hash type
- Rename the following functions and update all references:
  - wire.BlockHeader.BlockSha -> BlockHash
  - wire.MsgBlock.BlockSha -> BlockHash
  - wire.MsgBlock.TxShas -> TxHashes
  - wire.MsgTx.TxSha -> TxHash
  - blockchain.ShaHashToBig -> HashToBig
  - peer.ShaFunc -> peer.HashFunc
- Rename all variables that included sha in their name to include hash
instead
- Update for function name changes in other dependent packages such as
btcutil
- Update copyright dates on all modified files
- Update glide.lock file to use the required version of btcutil
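For illustration, a small before/after sketch of the renamed API,
assuming the standard chaincfg/chainhash import path; only names from
the list above are used.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
)

func main() {
	var tx wire.MsgTx

	// Previously: hash := tx.TxSha(), returning a wire.ShaHash.
	var hash chainhash.Hash = tx.TxHash()
	fmt.Println(hash.String())
}
```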
This changes the script template parsing function to use a pointer
into the constant global opcode array for parsed opcodes, as opposed to
making a copy of the opcode entries, which causes unnecessary
allocations. Profiling showed that after roughly 48 hours of operation,
this copy was responsible for 207 million unnecessary allocations.
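A simplified, self-contained illustration of the change; the real opcode
and parsedOpcode types in txscript carry more fields, so treat the names
and layout here as a sketch.

```go
package main

import "fmt"

// opcode is a pared-down stand-in for an entry in the constant global
// opcode table.
type opcode struct {
	value  byte
	name   string
	length int
}

// opcodeArray plays the role of the constant global table indexed by
// opcode value (only two entries shown).
var opcodeArray = [256]opcode{
	0x00: {value: 0x00, name: "OP_0", length: 1},
	0x76: {value: 0x76, name: "OP_DUP", length: 1},
}

// parsedOpcode holds a pointer into the table rather than a copy of the
// entry, so parsing a script no longer allocates per opcode.
type parsedOpcode struct {
	opcode *opcode
	data   []byte
}

func main() {
	pop := parsedOpcode{opcode: &opcodeArray[0x76]}
	fmt.Println(pop.opcode.name)
}
```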
Profiling revealed that lookups into the signature cache included an
expensive comparison against the stored `sigInfo` struct. This lookup
had the potential to be more expensive than directly verifying the
signature itself!
In addition, evictions were rather expensive because they involved
reading from /dev/urandom, or its equivalent, for each eviction once
the signature cache was full, as well as potentially iterating over
every item in the cache in the worst case.
To remedy this poor performance, several changes have been made:
* Change the lookup key to the fixed-size 32-byte signature hash
* Perform a full equality check only if there is a cache hit, which
  results in a significant speed up for both insertions and existence
  checks
* Override entries in the case of a colliding hash on insert
* Add an .IsEqual() method to the Signature and PublicKey types in the
  btcec package to facilitate easy equivalence testing
* Allocate the signature cache map with the max number of entries in
  order to avoid unnecessary map re-sizes/allocations
* Optimize evictions from the signature cache:
  * Delete the first entry seen, which is safe from manipulation due to
    the preimage resistance of the hash function
  * With this eviction scheme, removals are effectively O(1)
* Double the default maximum number of entries within the signature
  cache due to the reduction in the size of a cache entry
Fixes #575.
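A condensed sketch of the caching strategy described above (not the
actual btcd SigCache): entries are keyed by the 32-byte signature hash,
the full comparison only runs on a hash hit, the map is pre-sized, and
eviction removes the first entry seen while ranging over the map.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sync"
)

type entry struct {
	sig    []byte // serialized signature (stands in for btcec.Signature)
	pubKey []byte // serialized public key (stands in for btcec.PublicKey)
}

type sigCache struct {
	mtx        sync.RWMutex
	validSigs  map[[sha256.Size]byte]entry
	maxEntries uint
}

func newSigCache(maxEntries uint) *sigCache {
	return &sigCache{
		// Pre-size the map so it never needs to grow.
		validSigs:  make(map[[sha256.Size]byte]entry, maxEntries),
		maxEntries: maxEntries,
	}
}

// exists performs the cheap fixed-size key lookup first and only does
// the full equality check on a hit.
func (c *sigCache) exists(sigHash [sha256.Size]byte, sig, pubKey []byte) bool {
	c.mtx.RLock()
	e, ok := c.validSigs[sigHash]
	c.mtx.RUnlock()
	return ok && bytes.Equal(e.sig, sig) && bytes.Equal(e.pubKey, pubKey)
}

// add stores a verified triplet, overriding any colliding hash and
// evicting an arbitrary ("first seen") entry when full, which is
// effectively O(1).
func (c *sigCache) add(sigHash [sha256.Size]byte, sig, pubKey []byte) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	if c.maxEntries == 0 {
		return
	}
	if uint(len(c.validSigs))+1 > c.maxEntries {
		for k := range c.validSigs {
			delete(c.validSigs, k)
			break
		}
	}
	c.validSigs[sigHash] = entry{sig: sig, pubKey: pubKey}
}

func main() {
	c := newSigCache(2)
	h := sha256.Sum256([]byte("sig data"))
	c.add(h, []byte{0x30}, []byte{0x02})
	fmt.Println(c.exists(h, []byte{0x30}, []byte{0x02})) // true
}
```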
This modifies the conversion of the output index from the JSON-based
test data for valid and invalid transactions as well as the signature
hash type for signature hash tests to first convert to a signed int and
then to an unsigned int. This is necessary because the result of a
direct conversion of a float to an unsigned int is implementation
dependent and doesn't result in the expected value on all platforms.
Also, while here, change the function names in the error prints to match
the actual names.
Fixes #600.
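A tiny example of why the two-step conversion matters; the Go spec
leaves float-to-unsigned conversions implementation-dependent when the
value is out of range (such as a sentinel -1 index, used here purely for
illustration), whereas converting through a signed int first is well
defined.

```go
package main

import "fmt"

func main() {
	// JSON numbers decode as float64. Converting a negative or oversized
	// value straight to uint32 is implementation-dependent, so go through
	// a signed int first.
	f := -1.0
	idx := uint32(int32(f))
	fmt.Printf("%#x\n", idx) // 0xffffffff on every platform
}
```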
This adds 500 tests with various transactions and scripts, verifying
that calcSignatureHash generates the expected hash in each case.
This requires changing SigHashType to uint32; that won't affect the
standard use cases, but it will make calcSignatureHash behave more like
its Bitcoin Core counterpart for non-standard SigHashType settings,
like those in some of these tests.
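For reference, a sketch of the widened type and its usual values; the
constant names and values below mirror the common txscript definitions
but should be treated as an approximation here.

```go
package sighashsketch

// SigHashType is widened to uint32 so that non-standard hash type
// values from the reference tests are handled the same way Bitcoin Core
// handles them.
type SigHashType uint32

// Commonly defined hash type values.
const (
	SigHashAll          SigHashType = 0x1
	SigHashNone         SigHashType = 0x2
	SigHashSingle       SigHashType = 0x3
	SigHashAnyOneCanPay SigHashType = 0x80
)
```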
First, it removes the documentation section from all the README.md files
and instead puts a web-based godoc badge and link at the top with the
other badges. This is being done since the local godoc tool no longer
ships with Go by default, so the instructions no longer work without
first installing godoc. Due to this, pretty much everyone uses the
web-based godoc these days anyways. Anyone who has manually installed
godoc won't need instructions.
Second, it makes sure the ISC license badge is at the top with the other
badges and removes the textual reference in the overview section.
Finally, it renames the Installation section to Installation and
Updating and adds a '-u' to the 'go get' command since it works for
both and thus is simpler.
isMultiSig was not verifying that the number of pubkeys specified
matched the number of pubkeys provided. This caused certain non-standard
scripts to be considered multisig scripts.
However, the script still would have failed during execution.
NOTE: This only affects whether or not the script is considered
standard and does NOT affect consensus.
Also, add a test for this check.
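A standalone, simplified sketch of the added check; the real isMultiSig
operates on parsed opcodes, so the types and names here are illustrative
only.

```go
package main

import "fmt"

// multiSigShape summarizes a candidate script parsed as
// OP_m <pubkey>... OP_n OP_CHECKMULTISIG.
type multiSigShape struct {
	declaredKeys int // n declared in the script
	pubKeyPushes int // pubkey data pushes actually present
}

// hasMatchingPubKeyCount is the check described above: the declared key
// count must match the number of pubkeys actually provided.
func hasMatchingPubKeyCount(s multiSigShape) bool {
	return s.declaredKeys == s.pubKeyPushes
}

func main() {
	fmt.Println(hasMatchingPubKeyCount(multiSigShape{2, 2})) // true
	fmt.Println(hasMatchingPubKeyCount(multiSigShape{2, 3})) // false: mismatch
}
```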
See https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki for
more information.
This commit mimics Bitcoin Core commit bc60b2b4b401f0adff5b8b9678903ff8feb5867b
and includes additional tests from Bitcoin Core commit
cb54d17355864fa08826d6511a0d7692b21ef2c9
We've already been generating lowS sigs for quite a while. This removes
the malleability vector.
This mimics Bitcoin Core commit 49dd5c629df0a08cf3b1ea8085c03312d1a81696
Introduce an ECDSA signature verification cache into btcd in order to
mitigate a certain DoS attack and as a performance optimization.
The benefits of SigCache are twofold. Firstly, usage of SigCache
mitigates a DoS attack wherein an attacker causes a victim's client to
hang due to worst-case behavior triggered while processing attacker
crafted invalid transactions. A detailed description of the mitigated
DoS attack can be found here: https://bitslog.wordpress.com/2013/01/23/fixed-bitcoin-vulnerability-explanation-why-the-signature-cache-is-a-dos-protection/
Secondly, usage of the SigCache introduces a signature verification
optimization which speeds up the validation of transactions within a
block, if they've already been seen and verified within the mempool.
The server itself manages the sigCache instance. The blockManager and
txMempool now each receive a pointer to the created sigCache
instance. All read (sig triplet existence) operations on the sigCache
will not block unless a separate goroutine is adding an entry (writing)
to the sigCache. GetBlockTemplate generation now also utilizes the
sigCache in order to avoid unnecessarily double checking signatures
when generating a template after previously accepting a txn to the
mempool. Consequently, the CPU miner now also employs the same
optimization.
The maximum number of entries for the sigCache has been introduced as a
config parameter in order to allow users to configure the amount of
memory consumed by this new additional caching.
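A minimal sketch of the sharing pattern, not btcd's actual server
wiring: one SigCache is built from the configured maximum and handed to
each component by pointer. NewSigCache is the real txscript constructor;
the struct and field names here are placeholders.

```go
package main

import "github.com/btcsuite/btcd/txscript"

// Placeholders standing in for the real blockManager and mempool types.
type blockManager struct{ sigCache *txscript.SigCache }
type txMemPool struct{ sigCache *txscript.SigCache }

func main() {
	// The maximum entry count would come from the new config parameter.
	const maxSigCacheEntries = 50000

	// One shared instance; every validator reads from (and adds to) it.
	sigCache := txscript.NewSigCache(maxSigCacheEntries)
	_ = &blockManager{sigCache: sigCache}
	_ = &txMemPool{sigCache: sigCache}
}
```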
While existing numeric opcodes are limited to 4 bytes, new opcodes may
need different limits.
This mimics Bitcoin Core commit 99088d60d8a7747c6d1a7fd5d8cd388be1b3e138
This commit modifies the DisasmString function to use a bytes buffer for
constructing the disassembled string instead of naive string
concatenation. This makes a huge difference when disassembling scripts
with large numbers of opcodes.
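A simplified standalone illustration of the technique (not the real
DisasmString): appending to a bytes.Buffer avoids the quadratic
re-copying that naive string concatenation incurs on scripts with many
opcodes.

```go
package main

import (
	"bytes"
	"fmt"
)

// disasm joins already-disassembled opcode strings using a single buffer.
func disasm(words []string) string {
	var buf bytes.Buffer
	for i, w := range words {
		if i > 0 {
			buf.WriteByte(' ')
		}
		buf.WriteString(w)
	}
	return buf.String()
}

func main() {
	fmt.Println(disasm([]string{"OP_DUP", "OP_HASH160", "OP_EQUALVERIFY", "OP_CHECKSIG"}))
}
```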
IsUnspendable takes a public key script and returns whether it is
unspendable, i.e. whether it can never be spent.
Additionally, hook this into the mempool isDust function, since
unspendable outputs can't be spent.
This mimics Bitcoin Core commit 0aad1f13b2430165062bf9436036c1222a8724da
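A quick usage sketch: IsUnspendable reports true for a bare OP_RETURN
script, which is exactly the kind of output the mempool dust check can
now treat as dust.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/txscript"
)

func main() {
	// A bare OP_RETURN output can never be spent.
	pkScript := []byte{txscript.OP_RETURN}
	fmt.Println(txscript.IsUnspendable(pkScript)) // true
}
```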
This change moves IsFinalizedTransaction to txscript and also changes
the first argument to take a wire.MsgTx instead of a btcutil.Tx. This
is needed for an upcoming diff in which txscript will require
IsFinalizedTransaction, and we do not want to import btcd/blockchain.
This commit contains fixes resulting from a thorough audit of txscript
to find any cases of script evaluation which don't match the required
consensus behavior. These conditions are fairly obscure and highly
unlikely to happen in any real scripts, but they could nevertheless
have been used by a clever attacker with malicious intent to cause a
fork.
Test cases which exercise these conditions have been added to the
reference tests and will be contributed upstream to improve the quality
for the entire ecosystem.
Unlike OP_IF and OP_NOTIF, which interpret the top stack item as a
number, OP_IFDUP interprets it as a boolean. This has important
consequences because numbers are limited to int32s while booleans can
be an arbitrary number of bytes.
The offending script was found and reported by Jonas Nick through the
use of fuzzing.
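A self-contained sketch of the boolean interpretation at issue,
simplified from the engine code: any length is accepted, and an item is
false only when every byte is zero, allowing a lone trailing sign bit
(negative zero).

```go
package main

import "fmt"

// asBool interprets a stack item as a boolean: any nonzero byte makes
// it true, except that negative zero (only the 0x80 sign bit set in the
// final byte) is still false. Unlike the numeric interpretation used by
// OP_IF/OP_NOTIF, no length limit applies.
func asBool(v []byte) bool {
	for i, b := range v {
		if b != 0 {
			if i == len(v)-1 && b == 0x80 {
				return false
			}
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(asBool(nil))                                  // false
	fmt.Println(asBool([]byte{0x00, 0x80}))                   // false: negative zero
	fmt.Println(asBool([]byte{0x00, 0x00, 0x00, 0x00, 0x01})) // true, despite >4 bytes
}
```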
- Move reference tests to test package since they are intended to
exercise the engine as callers would
- Improve the short form script parsing to allow additional opcodes:
DATA_#, OP_#, FALSE, TRUE
- Make use of a function to decode hex strings rather than manually
defining byte slices
- Update the tests to make use of the short form script parsing logic
rather than manually defining byte slices
- Consistently replace all []byte{} and [][]byte{} with nil
- Define tests only used in a specific function inside that func
- Move invalid flag combination test to engine_test since that is what
it is testing
- Remove all redundant script tests in favor of the JSON-based tests in
the data directory.
- Move several functions from internal_test.go to the test files
associated with what the tests are checking
This commit moves all code related to standard scripts into a separate
file named standard.go as well as the associated tests into
standard_test.go. Since the code in address.go and address_test.go is
only related to standard scripts, it has been combined into the new
files and the old files deleted.
The intent here is to make it clear that the code in standard.go is not
related to consensus.
This commit implements a new type, named scriptNum, for handling all
numeric values used in scripts and converts the code over to make use
of it. This is being done for a few reasons.
First, the consensus rules for handling numeric values in the scripts
require special handling with subtle semantics. By encapsulating those
details into a type specifically dedicated to that purpose, it
simplifies the code and generally helps prevent improper usage.
Second, the new type is quite a bit more efficient than big.Ints which
are designed to be arbitrarily large and thus involve a lot of heap
allocations and additional multi-precision bookkeeping. Because this
new type is based on an int64, it allows the numbers to be stack
allocated thereby eliminating a lot of GC and also eliminates the extra
multi-precision arithmetic bookkeeping.
The use of an int64 is possible because the consensus rules dictate that
when data is interpreted as a number, it is limited to an int32 even
though results outside of this range are allowed so long as they are not
interpreted as integers again themselves. Thus, the maximum possible
result comes from multiplying a max int32 by itself which safely fits
into an int64 and can then still appropriately provide the serialization
of the larger number as required by consensus.
Finally, it more closely resembles the implementation used by Bitcoin
Core and thus makes it easier to compare the behavior between the two
implementations.
This commit also includes a full suite of tests with 100% coverage of
the semantics of the new type.
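A condensed sketch of the type; the real scriptNum additionally enforces
minimal encoding and a configurable length limit when parsing, so the
two methods below only illustrate the int64-backed design described
above.

```go
package main

import (
	"fmt"
	"math"
)

type scriptNum int64

// Bytes returns the number serialized as a little-endian, sign-magnitude
// byte slice, the format scripts push back onto the stack.
func (n scriptNum) Bytes() []byte {
	if n == 0 {
		return nil
	}
	isNegative := n < 0
	if isNegative {
		n = -n
	}
	result := make([]byte, 0, 9)
	for n > 0 {
		result = append(result, byte(n&0xff))
		n >>= 8
	}
	// If the most significant byte already has the high bit set, append
	// an extra byte to carry the sign; otherwise fold the sign into it.
	if result[len(result)-1]&0x80 != 0 {
		extra := byte(0x00)
		if isNegative {
			extra = 0x80
		}
		result = append(result, extra)
	} else if isNegative {
		result[len(result)-1] |= 0x80
	}
	return result
}

// Int32 clamps to the int32 range, mirroring how out-of-range results
// are treated when they are interpreted as integers again.
func (n scriptNum) Int32() int32 {
	if n > math.MaxInt32 {
		return math.MaxInt32
	}
	if n < math.MinInt32 {
		return math.MinInt32
	}
	return int32(n)
}

func main() {
	// The largest possible numeric result: max int32 squared still fits
	// in an int64 and can still be serialized for the stack.
	product := scriptNum(math.MaxInt32) * scriptNum(math.MaxInt32)
	fmt.Printf("serialized: %x, clamped: %d\n", product.Bytes(), product.Int32())
}
```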
This commit contains a lot of cleanup on the txscript code to make it
more consistent with the code throughout the rest of the project. It
doesn't change any operational logic.
The following is an overview of the changes:
- Add a significant number of comments throughout in order to better
explain what the code is doing
- Fix several comment typos
- Move a couple of constants only used by the engine to engine.go
- Move a variable only used by the engine to engine.go
- Fix a couple of format specifiers in the test prints
- Reorder functions so they're defined before/closer to use
- Make the code lint clean with the exception of the opcode definitions
- Remove all redundant opcode tests in favor of the JSON-based tests
in the data directory.
- Remove duplicate stack nip test
- Add new tests to data/script_invalid.json to exercise additional
negative error paths
- Remove old unneeded pubkey trace code from opcodeCheckSig
- Simplify and improve the disassembly print function
- Add new tests to directly test all individual opcode disassembly
- Add new tests to directly test the opcode disabled function, which
does not get invoked during ordinary execution
- Improve test coverage of opcode.go
This commit moves the opcode execution logic from the opcode type to the
engine type because execution of an opcode modifies the engine state
(primarily the main and alternate data stacks) as opposed to the state
of the opcode. Making the engine the receiver more clearly indicates
this fact.
This commit very slightly optimizes the cryptographic hashing performed
by the script opcodes by calling the hash sum routines directly (for
those that support it) rather than allocating a new generic hash.Hash
hasher instance for them.
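An illustration of the pattern using SHA-256 as an example: the one-shot
sum function avoids constructing a generic hash.Hash just to hash a
single stack item.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func main() {
	data := []byte("top stack item")

	// Before: allocate a generic hasher, write, then sum.
	h := sha256.New()
	h.Write(data)
	before := h.Sum(nil)

	// After: call the sum routine directly and skip the hasher allocation.
	after := sha256.Sum256(data)

	fmt.Println(bytes.Equal(before, after[:])) // true
}
```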