This converts the IsMultisigSigScript function to analyze the raw script
and make use of the new tokenizer instead of the far less efficient
parseScript, thereby significantly optimizing the function.
In order to accomplish this, it first rejects scripts that can't
possibly fit the bill due to the final byte of what would be the redeem
script not being the appropriate opcode or the overall script not having
enough bytes. Then, it uses a new function that is introduced named
finalOpcodeData that uses the tokenizer to return any data associated
with the final opcode in the signature script (which will be nil for
non-push opcodes or if the script fails to parse) and analyzes it as if
it were a redeem script when it is non-nil.
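Roughly, such a helper boils down to a single pass of the tokenizer. The
following is an illustrative sketch only (written as if inside the
txscript package and assuming the tokenizer API introduced later in this
series: MakeScriptTokenizer with Next, Data, and Err); the real code may
differ in detail:

    // finalOpcodeData returns the data associated with the final opcode
    // in the script, or nil if the final opcode is not a data push or the
    // script fails to parse.
    func finalOpcodeData(scriptVersion uint16, script []byte) []byte {
        // Avoid unnecessary work.
        if len(script) == 0 {
            return nil
        }

        var data []byte
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            data = tokenizer.Data()
        }
        if tokenizer.Err() != nil {
            return nil
        }
        return data
    }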
It is also worth noting that this new implementation intentionally has
the same semantic difference from the existing implementation as the
updated IsMultisigScript function with regard to allowing zero pubkeys,
whereas the previous implementation incorrectly required at least one
pubkey.
Finally, the comment is modified to explicitly call out the script
version semantics.
The following is a before and after comparison of analyzing a large
script that is not a multisig script and both a 1-of-2 multisig public
key script (which should be false) and a signature script comprised of a
pay-to-script-hash 1-of-2 multisig redeem script (which should be true):
benchmark old ns/op new ns/op delta
BenchmarkIsMultisigSigScriptLarge-8 69328 2.93 -100.00%
BenchmarkIsMultisigSigScript-8 2375 146 -93.85%
benchmark old allocs new allocs delta
BenchmarkIsMultisigSigScriptLarge-8 5 0 -100.00%
BenchmarkIsMultisigSigScript-8 3 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkIsMultisigSigScriptLarge-8 330035 0 -100.00%
BenchmarkIsMultisigSigScript-8 9472 0 -100.00%
This converts the IsMultisigScript function to make use of the new
tokenizer instead of the far less efficient parseScript, thereby
significantly optimizing the function.
In order to accomplish this, it introduces two new functions. The first
one is named extractMultisigScriptDetails and works with the raw script
bytes to simultaneously determine if the script is a multisignature
script, and in the case it is, extract and return the relevant details.
The second new function is named isMultisigScript and is defined in
terms of the former.
The extract function accepts the script version, raw script bytes, and a
flag to determine whether or not the public keys should also be
extracted. The flag is provided because extracting pubkeys results in
an allocation that the caller might wish to avoid.
The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type. Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.
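To make the pattern concrete, the following is a simplified, illustrative
sketch written as if inside the txscript package. The struct, its fields,
and the inline small-integer helpers are hypothetical; the sketch only
treats 33- and 65-byte pushes as pubkeys and omits the fast-path
rejections the real code performs:

    // multiSigDetails is a hypothetical container for the details
    // extracted from a standard multisig script.
    type multiSigDetails struct {
        requiredSigs int
        numPubKeys   int
        pubKeys      [][]byte
        valid        bool
    }

    func extractMultisigScriptDetails(scriptVersion uint16, script []byte,
        extractPubKeys bool) multiSigDetails {

        // The only currently supported script version is 0.
        if scriptVersion != 0 {
            return multiSigDetails{}
        }

        // Byte-based small integer helpers of the kind described later in
        // this series.
        isSmallInt := func(op byte) bool {
            return op == OP_0 || (op >= OP_1 && op <= OP_16)
        }
        asSmallInt := func(op byte) int {
            if op == OP_0 {
                return 0
            }
            return int(op) - (OP_1 - 1)
        }

        // Expected form: OP_<m> <pubkey>... OP_<n> OP_CHECKMULTISIG.
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        if !tokenizer.Next() || !isSmallInt(tokenizer.Opcode()) {
            return multiSigDetails{}
        }
        details := multiSigDetails{requiredSigs: asSmallInt(tokenizer.Opcode())}

        // Gather the pubkey pushes, only materializing them when requested
        // so callers that merely classify the script avoid the allocation.
        for tokenizer.Next() {
            data := tokenizer.Data()
            if len(data) != 33 && len(data) != 65 {
                break
            }
            details.numPubKeys++
            if extractPubKeys {
                details.pubKeys = append(details.pubKeys, data)
            }
        }

        // The opcode that ended the loop must be a small integer matching
        // the pubkey count, followed by OP_CHECKMULTISIG as the final
        // opcode of the script.
        if tokenizer.Done() || !isSmallInt(tokenizer.Opcode()) ||
            asSmallInt(tokenizer.Opcode()) != details.numPubKeys {
            return multiSigDetails{}
        }
        if !tokenizer.Next() || tokenizer.Opcode() != OP_CHECKMULTISIG {
            return multiSigDetails{}
        }
        if tokenizer.Next() || tokenizer.Err() != nil {
            return multiSigDetails{}
        }

        details.valid = true
        return details
    }

    // isMultisigScript is defined purely in terms of the extraction, so
    // the type check itself performs no allocations.
    func isMultisigScript(scriptVersion uint16, script []byte) bool {
        return extractMultisigScriptDetails(scriptVersion, script, false).valid
    }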
It is important to note that this new implementation intentionally has a
semantic difference from the existing implementation in that it will now
correctly identify a multisig script with zero pubkeys whereas
previously it incorrectly required at least one pubkey. This change is
acceptable because the function only deals with standardness rather than
consensus rules.
Finally, this also deprecates the isMultiSig function that requires
opcodes in favor of the new functions and deprecates the error return on
the exported IsMultisigScript function since it really does not make sense
given the purpose of the function.
The following is a before and after comparison of analyzing both a large
script that is not a multisig script and a 1-of-2 multisig public key
script:
benchmark old ns/op new ns/op delta
BenchmarkIsMultisigScriptLarge-8 64166 5.52 -99.99%
BenchmarkIsMultisigScript-8 630 59.4 -90.57%
benchmark old allocs new allocs delta
BenchmarkIsMultisigScriptLarge-8 1 0 -100.00%
BenchmarkIsMultisigScript-8 1 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkIsMultisigScriptLarge-8 311299 0 -100.00%
BenchmarkIsMultisigScript-8 2304 0 -100.00%
This converts the IsPayToScriptHash function to analyze the raw script
instead of using the far less efficient parseScript, thereby
significantly optimizing the function.
In order to accomplish this, it introduces two new functions. The first
one is named extractScriptHash and works with the raw script bytes to
simultaneously determine if the script is a p2sh script, and in the case
it is, extract and return the hash. The second new function is named
isScriptHashScript and is defined in terms of the former.
The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type. Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.
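Since the p2sh form is rigid, the extraction reduces to a few direct byte
comparisons. The following is an illustrative sketch written as if inside
the txscript package; the real code may differ slightly:

    // extractScriptHash extracts the script hash from the passed script
    // if it is a standard pay-to-script-hash script.  It returns nil
    // otherwise.
    func extractScriptHash(script []byte) []byte {
        // A pay-to-script-hash script is of the form:
        //   OP_HASH160 <20-byte scripthash> OP_EQUAL
        if len(script) == 23 &&
            script[0] == OP_HASH160 &&
            script[1] == OP_DATA_20 &&
            script[22] == OP_EQUAL {

            return script[2:22]
        }
        return nil
    }

    // isScriptHashScript returns whether or not the passed script is a
    // standard pay-to-script-hash script.
    func isScriptHashScript(script []byte) bool {
        return extractScriptHash(script) != nil
    }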
Finally, this also deprecates the isScriptHash function that requires
opcodes in favor of the new functions and modifies the comment on
IsPayToScriptHash to explicitly call out the script version semantics.
The following is a before and after comparison of analyzing a large
script that is not a p2sh script:
benchmark old ns/op new ns/op delta
BenchmarkIsPayToScriptHash-8 62393 0.60 -100.00%
benchmark old allocs new allocs delta
BenchmarkIsPayToScriptHash-8 1 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkIsPayToScriptHash-8 311299 0 -100.00%
This converts the IsPayToPubKeyHash function to analyze the raw script
instead of using the far less efficient parseScript, thereby
significantly optimizing the function.
In order to accomplish this, it introduces two new functions. The first
one is named extractPubKeyHash and works with the raw script bytes
to simultaneously determine if the script is a pay-to-pubkey-hash script,
and in the case it is, extract and return the hash. The second new
function is named isPubKeyHashScript and is defined in terms of the
former.
The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type. Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.
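As with p2sh, the p2pkh form is rigid enough that the extraction is a
handful of byte comparisons. The following is an illustrative sketch
written as if inside the txscript package:

    // extractPubKeyHash extracts the public key hash from the passed
    // script if it is a standard pay-to-pubkey-hash script.  It returns
    // nil otherwise.
    func extractPubKeyHash(script []byte) []byte {
        // A pay-to-pubkey-hash script is of the form:
        //   OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
        if len(script) == 25 &&
            script[0] == OP_DUP &&
            script[1] == OP_HASH160 &&
            script[2] == OP_DATA_20 &&
            script[23] == OP_EQUALVERIFY &&
            script[24] == OP_CHECKSIG {

            return script[3:23]
        }
        return nil
    }

    // isPubKeyHashScript returns whether or not the passed script is a
    // standard pay-to-pubkey-hash script.
    func isPubKeyHashScript(script []byte) bool {
        return extractPubKeyHash(script) != nil
    }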
The following is a before and after comparison of analyzing a large
script:
benchmark old ns/op new ns/op delta
BenchmarkIsPubKeyHashScript-8 62228 0.45 -100.00%
benchmark old allocs new allocs delta
BenchmarkIsPubKeyHashScript-8 1 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkIsPubKeyHashScript-8 311299 0 -100.00%
This converts the detection of pay-to-pubkey scripts to analyze the raw
script instead of using the far less efficient parseScript, thereby
significantly optimizing it.
In order to accomplish this, it introduces four new functions:
extractCompressedPubKey, extractUncompressedPubKey, extractPubKey, and
isPubKeyScript. The extractPubKey function makes use of
extractCompressedPubKey and extractUncompressedPubKey to combine their
functionality as a convenience and isPubKeyScript is defined in terms of
extractPubKey.
The extractCompressedPubKey function works with the raw script bytes to
simultaneously determine if the script is a pay-to-compressed-pubkey
script, and in the case it is, extract and return the raw compressed
pubkey bytes.
Similarly, the extractUncompressedPubKey function works in the same way
except it
determines if the script is a pay-to-uncompressed-pubkey script and
returns the raw uncompressed pubkey bytes in the case it is.
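The following is an illustrative sketch of the four functions, written as
if inside the txscript package; it is not necessarily the exact
implementation:

    // extractCompressedPubKey extracts the serialized compressed public
    // key from the passed script if it is a standard
    // pay-to-compressed-pubkey script.  It returns nil otherwise.
    func extractCompressedPubKey(script []byte) []byte {
        // A pay-to-compressed-pubkey script is of the form:
        //   OP_DATA_33 <33-byte compressed pubkey> OP_CHECKSIG
        //
        // All compressed secp256k1 pubkeys start with 0x02 or 0x03.
        if len(script) == 35 &&
            script[0] == OP_DATA_33 &&
            script[34] == OP_CHECKSIG &&
            (script[1] == 0x02 || script[1] == 0x03) {

            return script[1:34]
        }
        return nil
    }

    // extractUncompressedPubKey does the same for standard
    // pay-to-uncompressed-pubkey scripts of the form:
    //   OP_DATA_65 <65-byte uncompressed pubkey> OP_CHECKSIG
    func extractUncompressedPubKey(script []byte) []byte {
        // All uncompressed secp256k1 pubkeys start with 0x04.
        if len(script) == 67 &&
            script[0] == OP_DATA_65 &&
            script[66] == OP_CHECKSIG &&
            script[1] == 0x04 {

            return script[1:66]
        }
        return nil
    }

    // extractPubKey combines the two as a convenience, and isPubKeyScript
    // is defined in terms of it.
    func extractPubKey(script []byte) []byte {
        if pubKey := extractCompressedPubKey(script); pubKey != nil {
            return pubKey
        }
        return extractUncompressedPubKey(script)
    }

    func isPubKeyScript(script []byte) bool {
        return extractPubKey(script) != nil
    }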
The extract function approach was chosen because it is common for
callers to want to only extract relevant details from a script if the
script is of the specific type. Extracting those details requires
performing the exact same checks to ensure the script is of the correct
type, so it is more efficient to combine the two into one and define the
type determination in terms of the result so long as the extraction does
not require allocations.
The following is a before and after comparison of analyzing a large
script:
benchmark old ns/op new ns/op delta
BenchmarkIsPubKeyScript-8 62323 2.97 -100.00%
benchmark old allocs new allocs delta
BenchmarkIsPubKeyScript-8 1 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkIsPubKeyScript-8 311299 0 -100.00%
This converts the asSmallInt function to accept an opcode as a byte
instead of the internal opcode data struct in order to make it more
flexible for raw script analysis.
It also updates all callers accordingly.
This converts the isSmallInt function to accept an opcode as a byte
instead of the internal opcode data struct in order to make it more
flexible for raw script analysis.
The comment is modified to explicitly call out the script version
semantics.
Finally, it updates all callers accordingly.
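Taken together, the byte-based forms of these two helpers look roughly
like the following (illustrative sketch, written as if inside the
txscript package):

    // isSmallInt returns whether or not the opcode is considered a small
    // integer, which is OP_0 or OP_1 through OP_16.
    //
    // NOTE: This function is only valid for version 0 opcodes.
    func isSmallInt(op byte) bool {
        return op == OP_0 || (op >= OP_1 && op <= OP_16)
    }

    // asSmallInt returns the passed opcode, which must be true according
    // to isSmallInt, as an integer.
    func asSmallInt(op byte) int {
        if op == OP_0 {
            return 0
        }
        return int(op) - (OP_1 - 1)
    }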
This converts the tests for calculating signature hashes to use the
exported function which handles the raw script versus the now deprecated
variant requiring parsed opcodes.
Backport of 06f769ef72e6042e7f2b5ff1c512ef1371d615e5
This modifies the CalcSignatureHash function to make use of the new
signature hash calculation function that accepts raw scripts without
needing to first parse them. Consequently, it also doubles as a slight
optimization to the execution time and a significant reduction in the
number of allocations.
In order to convert the CalcSignatureHash function and keep the same
semantics, a new function named checkScriptParses is introduced which
will quickly determine if a script can be fully parsed without failure
and return the parse failure in the case it can't.
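Such a check is essentially a full pass of the tokenizer. The following is
an illustrative sketch written as if inside the txscript package:

    // checkScriptParses returns an error if the provided script fails to
    // parse under the given script version, and nil otherwise.
    func checkScriptParses(scriptVersion uint16, script []byte) error {
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            // Nothing to do here; the loop simply advances through every
            // opcode so any parse failure is surfaced via Err below.
        }
        return tokenizer.Err()
    }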
The following is a before and after comparison of analyzing a large
multiple input transaction:
benchmark old ns/op new ns/op delta
BenchmarkCalcSigHash-8 3627895 3619477 -0.23%
benchmark old allocs new allocs delta
BenchmarkCalcSigHash-8 1335 801 -40.00%
benchmark old bytes new bytes delta
BenchmarkCalcSigHash-8 1373812 1293354 -5.86%
This introduces a new function named calcSignatureHashRaw which accepts
the raw script bytes to calculate the signature hash versus requiring
parsed opcodes only to unparse them later, in order to make it more
flexible for working with raw scripts.
Since there are several places in the rest of the code that currently
only have access to the parsed opcodes, this modifies the existing
calcSignatureHash to first unparse the script before calling the new
function.
Backport of decred/dcrd:f306a72a16eaabfb7054a26f9d9f850b87b00279
This converts the DisasmString function to make use of the new
zero-allocation script tokenizer instead of the far less efficient
parseScript, thereby significantly optimizing the function.
In order to facilitate this, the opcode disassembly functionality is
split into a separate function called disasmOpcode that accepts the
opcode struct and data independently as opposed to requiring a parsed
opcode. The new function also accepts a pointer to a string builder so
the disassembly can be built more efficiently.
While here, the comment is modified to explicitly call out the script
version semantics.
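The shape of the change can be illustrated with a simplified,
self-contained sketch that uses the tokenizer together with a
strings.Builder (written as if inside the txscript package). It is
intentionally not the real code: data pushes are rendered as hex and
other opcodes as raw byte values, whereas the actual disasmOpcode helper
produces the canonical opcode names:

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // disasmStringSketch demonstrates the strings.Builder plus tokenizer
    // approach of building the disassembly in a single buffer.
    func disasmStringSketch(script []byte) (string, error) {
        const scriptVersion = 0

        var buf strings.Builder
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            if buf.Len() != 0 {
                buf.WriteByte(' ')
            }
            if data := tokenizer.Data(); data != nil {
                buf.WriteString(hex.EncodeToString(data))
                continue
            }
            fmt.Fprintf(&buf, "OP_0x%02x", tokenizer.Opcode())
        }
        return buf.String(), tokenizer.Err()
    }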
The following is a before and after comparison of a large script:
benchmark old ns/op new ns/op delta
BenchmarkDisasmString-8 102902 40124 -61.01%
benchmark old allocs new allocs delta
BenchmarkDisasmString-8 46 51 +10.87%
benchmark old bytes new bytes delta
BenchmarkDisasmString-8 389324 130552 -66.47%
This implements an efficient and zero-allocation script tokenizer that
is exported both to provide external consumers of the API with a new
capability to tokenize scripts and to serve as a base for refactoring the
existing highly inefficient internal code.
It is important to note that this tokenizer is intended to be used in
consensus critical code in the future, so it must exactly follow the
existing semantics.
The current script parsing mechanism used throughout the txscript module
is to fully tokenize the scripts into an array of internal parsed
opcodes which are then examined and passed around in order to implement
virtually everything related to scripts.
While that approach does simplify the analysis of certain scripts and
thus provides some nice properties in that regard, it is extremely
inefficient in many cases, and it makes it impossible for external
consumers of the API to implement any form of custom script analysis
without either manually implementing a bunch of error-prone tokenizing
code or having the script engine expose internal structures.
For example, as shown by profiling the total memory allocations of an
initial sync, the existing script parsing code allocates a total of
around 295.12GB, which equates to around 50% of all allocations
performed. The zero-alloc tokenizer this introduces will allow that to
be reduced to virtually zero.
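As a concrete illustration of the new capability for external consumers,
counting the opcodes in a script (the example added by this change, per
the overview below) reduces to a short loop over the exported tokenizer.
The script literal here is purely hypothetical:

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/txscript"
    )

    func main() {
        // Hypothetical p2pkh script used purely for illustration.
        script := []byte{
            txscript.OP_DUP, txscript.OP_HASH160, txscript.OP_DATA_20,
            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a,
            0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14,
            txscript.OP_EQUALVERIFY, txscript.OP_CHECKSIG,
        }

        // Count the opcodes without allocating anything.
        const scriptVersion = 0
        var numOpcodes int
        tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            numOpcodes++
        }
        if err := tokenizer.Err(); err != nil {
            fmt.Println("script failed to parse:", err)
            return
        }
        fmt.Println("number of opcodes:", numOpcodes)
    }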
The following is a before and after comparison of tokenizing a large
script with a high opcode count using the existing code versus the
tokenizer this introduces for both speed and memory allocations:
benchmark old ns/op new ns/op delta
BenchmarkScriptParsing-8 63464 677 -98.93%
benchmark old allocs new allocs delta
BenchmarkScriptParsing-8 1 0 -100.00%
benchmark old bytes new bytes delta
BenchmarkScriptParsing-8 311299 0 -100.00%
The following is an overview of the changes:
- Introduce new error code ErrUnsupportedScriptVersion
- Implement zero-allocation script tokenizer
- Add a full suite of tests to ensure the tokenizer works as intended
and follows the required consensus semantics
- Add an example of using the new tokenizer to count the number of
opcodes in a script
- Update README.md to include the new example
- Update script parsing benchmark to use the new tokenizer
This resolves the more fundamental flake in the unit tests noted in the
prior commit.
Because multiple unit tests call rand.Seed in parallel, it's possible
they can be executed with the same unix timestamp (in seconds). If the
second call happens between generating the hash cache and checking that
the cache doesn't contain a random txn, the random transaction is in
fact a duplicate of one generated earlier since the RNG state was reset.
To remedy, we initialize rand.Seed once in the init function.
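That is, something along these lines in the test setup:

    import (
        "math/rand"
        "time"
    )

    // init seeds the package-level RNG exactly once so parallel tests do
    // not reset it to the same per-second value mid-run.
    func init() {
        rand.Seed(time.Now().UnixNano())
    }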
TestHashCacheAddContainsHashes flakes fairly regularly when rebasing
PR #1684 with:
txid <txid> wasn't inserted into cache but was found.
With probability 1/10^2, there will be no inputs on the transaction.
This reduces the entropy in the txid, and I believe it is the primary
cause of the flake.
- create benchmarks to measure allocations
- add test for benchmark input
- create a low alloc parseScriptTemplate
- refactor parsing logic for a single opcode
In this commit, we extend the txscript package to support re-deriving
the PkScript of an output by looking at the signature script/witness of
the input attempting to spend it. As of this commit, the only
supported types are P2SH, v0 P2WSH, and v0 P2WPKH.
This will prove useful for detecting when a particular script has been
spent on-chain.
A set of test vectors has also been added for the supported script types
to ensure its correctness.
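Assuming an entry point along the lines of ComputePkScript, which takes
the input's signature script and witness, intended usage would look
roughly like the following sketch (the function and method names here
are assumptions, not confirmed by this message):

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/txscript"
        "github.com/btcsuite/btcd/wire"
    )

    // rederivePkScript recovers the PkScript of the output being spent by
    // the given input, when the script type is one of the supported ones.
    func rederivePkScript(txIn *wire.TxIn) ([]byte, error) {
        pkScript, err := txscript.ComputePkScript(txIn.SignatureScript, txIn.Witness)
        if err != nil {
            // Unsupported script type; only P2SH, v0 P2WSH, and v0 P2WPKH
            // can be re-derived.
            return nil, err
        }
        fmt.Println("script class:", pkScript.Class())
        return pkScript.Script(), nil
    }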
This cleans up the code for handling the checksig and checkmultisig
opcode strict signatures to explicitly call out any semantics that are
likely not obvious and improve readability.
It also introduce new distinct errors for each condition which can
result in a signature being rejected due to not following the strict
encoding requirements and updates reference test adaptor accordingly.
This modifies calcSignatureHash to use a shallow copy of the transaction
versus a deep copy since the actual scripts themselves are not modified
and therefore don't need to be copied.
This is being done because profiling the most overall allocated space
shows that the deep copy performed in calcSignatureHash accounts for
nearly 20% of all allocations on a synced running instance. Copying all
of the additional data also makes the operation more time consuming.
With this change, that figure drops from ~20% to ~5% of all allocations.
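In other words, rather than duplicating every script byte slice, the code
now builds a copy whose inputs and outputs share the original script
bytes. A rough sketch of that shallow copy (written as if inside the
txscript package; the real code additionally reuses contiguous backing
arrays to avoid many small allocations):

    import "github.com/btcsuite/btcd/wire"

    // shallowCopyTx creates a shallow copy of the transaction for use in
    // calculating the signature hash.  The copied inputs and outputs
    // share the original script byte slices, which is safe because the
    // sighash code never mutates them.
    func shallowCopyTx(tx *wire.MsgTx) wire.MsgTx {
        txCopy := wire.MsgTx{
            Version:  tx.Version,
            TxIn:     make([]*wire.TxIn, len(tx.TxIn)),
            TxOut:    make([]*wire.TxOut, len(tx.TxOut)),
            LockTime: tx.LockTime,
        }
        for i, oldTxIn := range tx.TxIn {
            txIn := *oldTxIn
            txCopy.TxIn[i] = &txIn
        }
        for i, oldTxOut := range tx.TxOut {
            txOut := *oldTxOut
            txCopy.TxOut[i] = &txOut
        }
        return txCopy
    }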
The following benchmark shows the relative speedups and allocation
reduction as a result of the optimization on my system. In particular,
the changes result in approximately a 15% speedup and a whopping 99.89%
reduction in allocations when using a large transaction with thousands
of inputs which was the worst case scenario.
benchmark old allocs new allocs delta
--------------------------------------------------------------------
BenchmarkCalcSignatureHash 11151 12 -99.89%
benchmark old ns/op new ns/op delta
--------------------------------------------------------------------
BenchmarkCalcSignatureHash 3599845 3056359 -15.10%
This commit adds verification of the post-segwit standardness
requirement that all pubkeys involved in checksig operations MUST be
serialized as compressed public keys. A new ScriptFlag has been added
to guard this behavior when executing scripts.
This commit modifies the opcode execution for OP_IF and OP_NOTIF to
enforce the additional "minimal if" constraint, which requires the top
stack item, when these opcodes are encountered, to be either an empty
vector or exactly [0x01].
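Conceptually, the added validation of the top stack item amounts to the
following sketch (hedged; the real code lives in the opcode handlers and
is gated on the corresponding script flag):

    import "fmt"

    // checkMinimalIf returns an error when the top stack item provided to
    // OP_IF/OP_NOTIF is neither an empty vector nor exactly [0x01].
    func checkMinimalIf(item []byte) error {
        switch {
        case len(item) == 0:
            return nil
        case len(item) == 1 && item[0] == 0x01:
            return nil
        default:
            return fmt.Errorf("minimal if requires an empty vector or "+
                "[0x01], got %x", item)
        }
    }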