This converts the isSmallInt function to accept an opcode as a byte
instead of the internal opcode data struct in order to make it more
flexible for raw script analysis.
The comment is modified to explicitly call out the script version
semantics.
Finally, it updates all callers accordingly.
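After the change, the function likely reads along these lines (a
minimal sketch within the txscript package; the comment reflects the
version 0 semantics called out above):

    // isSmallInt returns whether or not the opcode is considered a small
    // integer, which is an OP_0, or OP_1 through OP_16.
    //
    // NOTE: This function is only valid for version 0 opcodes.
    func isSmallInt(op byte) bool {
        return op == OP_0 || (op >= OP_1 && op <= OP_16)
    }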
This converts the tests for calculating signature hashes to use the
exported function, which handles raw scripts, instead of the now
deprecated variant that requires parsed opcodes.
Backport of 06f769ef72e6042e7f2b5ff1c512ef1371d615e5
This modifies the CalcSignatureHash function to make use of the new
signature hash calculation function that accepts raw scripts without
needing to first parse them. As a result, it also slightly improves
execution time and significantly reduces the number of allocations.
In order to convert the CalcSignatureHash function and keep the same
semantics, a new function named checkScriptParses is introduced which
will quickly determine if a script can be fully parsed without failure
and return the parse error in the case it can't.
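A minimal sketch of checkScriptParses, assuming it is built on the
zero-allocation tokenizer introduced elsewhere in this series:

    // checkScriptParses returns an error if the provided script fails to
    // parse under the given script version.
    func checkScriptParses(scriptVersion uint16, script []byte) error {
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        for tokenizer.Next() {
            // Nothing to do; Next only returns true while the script
            // remains well formed.
        }
        return tokenizer.Err()
    }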
The following is a before and after comparison of analyzing a large
multiple input transaction:
benchmark                 old ns/op     new ns/op     delta
BenchmarkCalcSigHash-8    3627895       3619477       -0.23%

benchmark                 old allocs    new allocs    delta
BenchmarkCalcSigHash-8    1335          801           -40.00%

benchmark                 old bytes     new bytes     delta
BenchmarkCalcSigHash-8    1373812       1293354       -5.86%
This introduces a new function named calcSignatureHashRaw which accepts
raw script bytes when calculating the signature hash, rather than
requiring parsed opcodes only to unparse them later, in order to make
it more flexible for working with raw scripts.
Since there are several places in the rest of the code that currently
only have access to the parsed opcodes, this modifies the existing
calcSignatureHash to first unparse the script before calling the new
function.
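A sketch of the resulting shim; the exact parameter list in the real
code may differ:

    // calcSignatureHash computes the signature hash from parsed opcodes
    // by unparsing them back to raw script bytes and deferring to the
    // raw variant.
    func calcSignatureHash(prevOutScript []parsedOpcode, hashType SigHashType,
        tx *wire.MsgTx, idx int) []byte {

        script, _ := unparseScript(prevOutScript)
        return calcSignatureHashRaw(script, hashType, tx, idx)
    }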
Backport of decred/dcrd:f306a72a16eaabfb7054a26f9d9f850b87b00279
This converts the DisasmString function to make use of the new
zero-allocation script tokenizer instead of the far less efficient
parseScript, thereby significantly optimizing the function.
In order to facilitate this, the opcode disassembly functionality is
split into a separate function called disasmOpcode that accepts the
opcode struct and data independently as opposed to requiring a parsed
opcode. The new function also accepts a pointer to a string builder so
the disassembly can be built more efficiently.
While here, the comment is modified to explicitly call out the script
version semantics.
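A sketch of the refactored function under those assumptions (the
tokenizer's internal opcode field and the disasmOpcode signature are
inferred from the description above):

    // DisasmString formats a disassembled script for one line printing.
    func DisasmString(script []byte) (string, error) {
        const scriptVersion = 0

        var disbuf strings.Builder
        tokenizer := MakeScriptTokenizer(scriptVersion, script)
        if tokenizer.Next() {
            disasmOpcode(&disbuf, tokenizer.op, tokenizer.Data(), true)
        }
        for tokenizer.Next() {
            disbuf.WriteByte(' ')
            disasmOpcode(&disbuf, tokenizer.op, tokenizer.Data(), true)
        }
        return disbuf.String(), tokenizer.Err()
    }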
The following is a before and after comparison of a large script:
benchmark                  old ns/op     new ns/op     delta
BenchmarkDisasmString-8    102902        40124         -61.01%

benchmark                  old allocs    new allocs    delta
BenchmarkDisasmString-8    46            51            +10.87%

benchmark                  old bytes     new bytes     delta
BenchmarkDisasmString-8    389324        130552        -66.47%
This implements an efficient and zero-allocation script tokenizer that
is exported both to provide external consumers of the API with a new
capability to tokenize scripts and to serve as a base for refactoring
the existing highly inefficient internal code.
It is important to note that this tokenizer is intended to be used in
consensus critical code in the future, so it must exactly follow the
existing semantics.
The current script parsing mechanism used throughout the txscript module
is to fully tokenize the scripts into an array of internal parsed
opcodes which are then examined and passed around in order to implement
virtually everything related to scripts.
While that approach does simplify the analysis of certain scripts and
thus provides some nice properties in that regard, it is extremely
inefficient in many cases and makes it impossible for external
consumers of the API to implement any form of custom script analysis
without manually implementing a bunch of error-prone tokenizing code
or, alternatively, the script engine exposing internal structures.
For example, as shown by profiling the total memory allocations of an
initial sync, the existing script parsing code allocates a total of
around 295.12GB, which equates to around 50% of all allocations
performed. The zero-alloc tokenizer this introduces will allow that to
be reduced to virtually zero.
The following is a before and after comparison of tokenizing a large
script with a high opcode count using the existing code versus the
tokenizer this introduces for both speed and memory allocations:
benchmark                   old ns/op     new ns/op     delta
BenchmarkScriptParsing-8    63464         677           -98.93%

benchmark                   old allocs    new allocs    delta
BenchmarkScriptParsing-8    1             0             -100.00%

benchmark                   old bytes     new bytes     delta
BenchmarkScriptParsing-8    311299        0             -100.00%
The following is an overview of the changes:
- Introduce new error code ErrUnsupportedScriptVersion
- Implement zero-allocation script tokenizer
- Add a full suite of tests to ensure the tokenizer works as intended
and follows the required consensus semantics
- Add an example of using the new tokenizer to count the number of
  opcodes in a script (see the sketch after this list)
- Update README.md to include the new example
- Update script parsing benchmark to use the new tokenizer
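For illustration, counting opcodes with the exported tokenizer looks
roughly like the following (the script variable and error handling are
elided):

    const scriptVersion = 0
    var numOpcodes int
    tokenizer := txscript.MakeScriptTokenizer(scriptVersion, script)
    for tokenizer.Next() {
        numOpcodes++
    }
    if tokenizer.Err() != nil {
        // The script is malformed; numOpcodes is not meaningful.
    }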
Adds behavior similar to the retries of persistent RPC connections
to HTTP requests (a sketch of the loop follows the list below).
* Initial backoff: 500ms
* Linear increase
* Max retries: 10
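A minimal sketch of the retry loop under those parameters (names and
surrounding plumbing hypothetical):

    func doWithRetry(client *http.Client, req *http.Request) (*http.Response, error) {
        backoff := 500 * time.Millisecond
        const maxRetries = 10

        var err error
        for attempt := 0; attempt < maxRetries; attempt++ {
            var resp *http.Response
            resp, err = client.Do(req)
            if err == nil {
                return resp, nil
            }
            time.Sleep(backoff)
            backoff += 500 * time.Millisecond // linear increase
        }
        return nil, err
    }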
Room for future improvement:
* Make configurable
* Add jitter
* Tests for retry behavior
This gives KnownAddress a sync.RWMutex so the exported methods may
safely access the na (*wire.NetAddress) and lastattempt fields.
The AddrManager is updated to lock the new KnownAddress mutex before
assigning to na or lastattempt.
The other KnownAddress fields are only accessed by AddrManager, using
its own Mutex for synchronization.
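A hedged sketch of the resulting layout and accessor pattern (field
names taken from the description; the real struct has more fields):

    type KnownAddress struct {
        mtx         sync.RWMutex // guards na and lastattempt
        na          *wire.NetAddress
        lastattempt time.Time

        // The remaining fields are accessed only by AddrManager under
        // its own mutex.
    }

    // LastAttempt returns the last time the address was attempted in a
    // concurrency-safe manner.
    func (ka *KnownAddress) LastAttempt() time.Time {
        ka.mtx.RLock()
        defer ka.mtx.RUnlock()
        return ka.lastattempt
    }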
In this commit, we add a new config option that allows one to start
`btcd` in an operating mode that disables the stall detection. This can
be useful in simnet/regtest integration test settings where it's
important that `btcd` holds on to its possibly sole connection to the
only other node in the test harness.
A new config flag has been added to gate this behavior, which is off by
default.
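A hedged sketch of wiring the flag through to the peer configuration
(the DisableStallHandler field and the flag name are assumptions):

    peerCfg := &peer.Config{
        // ... other fields elided ...

        // Skip stall detection entirely, e.g. for simnet/regtest test
        // harnesses that hold a single connection.
        DisableStallHandler: cfg.NoStallDetect,
    }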
This commit modifies no behavior and allows other projects to
retrieve the dust limit for a particular output type before the
amount of the output is known. This is particularly useful in the
Lightning Network for channel negotiation.
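A hedged usage sketch, assuming GetDustThreshold is the newly exported
helper:

    // The output value is intentionally left unset; only the script
    // type matters for the dust threshold.
    txOut := &wire.TxOut{PkScript: pkScript}
    dustLimit := mempool.GetDustThreshold(txOut)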
* rpcclient: Export sendCmd and response
This facilitates using custom commands with rpcclient.
See https://github.com/btcsuite/btcd/issues/1083
* rpcclient: Export receiveFuture
This facilitates using custom commands with rpcclient.
See https://github.com/btcsuite/btcd/issues/1083
* rpcclient: Add customcommand example
* rpcclient: remove "Namecoin" from customcommand readme heading
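A hedged sketch of the custom-command flow these exports enable (the
command value and its btcjson registration are elided):

    // Marshal and send the custom command, then block on the response.
    future := client.SendCmd(customCmd)
    rawResult, err := rpcclient.ReceiveFuture(future)
    if err != nil {
        log.Fatal(err)
    }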
In this commit, we add an additional test case for inherited RBF
replacement. This test case asserts that if a parent is marked as being
replaceable, but the child isn't, then the child can still be replaced,
as according to BIP 125 it should _inherit_ the replaceability of its
parent.
The addition of this test case was prompted by the recently discovered
Bitcoin Core "CVE" [1]. It turns out that bitcoind doesn't properly
implement BIP 125. Namely, it fails to allow a child to "inherit"
replaceability if its parent is also replaceable. Our implementation
makes this trait rather explicit due to its recursive implementation.
Kudos to the original implementer @wpaulino for getting this correct.
[1]: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/018893.html
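A hedged sketch of the recursive signaling check (names hypothetical):
a transaction is replaceable if it signals RBF explicitly or if any of
its unconfirmed ancestors does.

    func signalsReplacement(mp *TxPool, tx *btcutil.Tx) bool {
        for _, txIn := range tx.MsgTx().TxIn {
            // Explicit BIP 125 signal.
            if txIn.Sequence <= math.MaxUint32-2 {
                return true
            }

            // Inherited signal: recurse into unconfirmed parents.
            if parent, ok := mp.pool[txIn.PreviousOutPoint.Hash]; ok {
                if signalsReplacement(mp, parent.Tx) {
                    return true
                }
            }
        }
        return false
    }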
On signet all previous soft forks and also taproot are always activated,
meaning the version is always 0x20000000 for all blocks. To make sure
they activate properly in `btcd`, we therefore need to use the correct
bit to mask the version.
This means that on any custom signet there would need to be 2016 blocks
mined before SegWit or Taproot can be used.
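A hedged sketch of the bit masking this implies (helper name
hypothetical):

    // deploymentBitSet reports whether the deployment's version bit is
    // set, rather than comparing against the full 0x20000000-style
    // version value.
    func deploymentBitSet(version int32, bitNumber uint8) bool {
        return uint32(version)&(uint32(1)<<bitNumber) != 0
    }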