The previous naming suggested that the value ((P+1)/4+1)/4 was being
returned, when in fact the returned value is simply (P+1)/4. The old
method is superseded by Q().
This commit optimizes the decompressPoint subroutine, used in extracting
compressed pubkeys and performing pubkey recovery. We do so by replacing
the use of big.Int.Exp with square-and-multiply exponentiation of
btcec's more optimized fieldVals, reducing the overall latency and
memory requirements of decompressPoint.
Instead of operating on bits of Q = (P+1)/4, the exponentiation applies
the square-and-multiply operations to full bytes of Q. Compared to the
bit-wise version, the improvement is roughly 10%.
A new pair of fieldVal methods, Sqrt and SqrtVal, is added; they apply
the square-and-multiply exponentiation using a precomputed byte slice of
the value Q.
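As an illustration of the technique, here is a minimal sketch of the
byte-wise square-and-multiply loop, using big.Int stand-ins so the
snippet stays self-contained; the real Sqrt/SqrtVal run the same loop
over fieldVals:

    package main

    import (
        "fmt"
        "math/big"
    )

    // expBytes computes base^exp mod m by square-and-multiply over the
    // big-endian bytes of exp, mirroring the byte-wise loop structure.
    func expBytes(base, m *big.Int, exp []byte) *big.Int {
        result := big.NewInt(1)
        for _, b := range exp {
            // Process the 8 bits of each byte, most significant first.
            for mask := byte(0x80); mask != 0; mask >>= 1 {
                result.Mul(result, result).Mod(result, m)
                if b&mask != 0 {
                    result.Mul(result, base).Mod(result, m)
                }
            }
        }
        return result
    }

    func main() {
        // secp256k1 field prime P and Q = (P+1)/4.
        p, _ := new(big.Int).SetString(
            "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f", 16)
        q := new(big.Int).Add(p, big.NewInt(1))
        q.Rsh(q, 2)

        x := big.NewInt(9) // 3^2, so a known quadratic residue mod P
        root := expBytes(x, p, q.Bytes())
        sq := new(big.Int).Mul(root, root)
        fmt.Println("root^2 == x:", sq.Mod(sq, p).Cmp(x) == 0) // true
    }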
Comparison of the big.Int sqrt against the square-and-multiply (SAM)
sqrt over the bytes of Q:
benchmark                          old ns/op    new ns/op    delta
BenchmarkParseCompressedPubKey-8   35545        23119        -34.96%

benchmark                          old allocs   new allocs   delta
BenchmarkParseCompressedPubKey-8   35           6            -82.86%

benchmark                          old bytes    new bytes    delta
BenchmarkParseCompressedPubKey-8   2777         256          -90.78%
As of https://github.com/btcsuite/btcd/pull/1193, decompressPoint now
validates that the point is on the curve. The x and y coordinates are
also implicitly <= P, since the modular reduction is applied to both
before the method returns. The checks are moved so that they are still
applied when parsing an uncompressed pubkey, as the checks are not
redundant in that path.
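For reference, the on-curve check amounts to verifying the secp256k1
curve equation; a big.Int sketch of the test (the real code performs it
on fieldVals):

    import "math/big"

    // onCurve reports whether (x, y) satisfies y^2 = x^3 + 7 (mod p),
    // where p is the secp256k1 field prime.
    func onCurve(x, y, p *big.Int) bool {
        lhs := new(big.Int).Mul(y, y)
        lhs.Mod(lhs, p)
        rhs := new(big.Int).Exp(x, big.NewInt(3), p)
        rhs.Add(rhs, big.NewInt(7))
        rhs.Mod(rhs, p)
        return lhs.Cmp(rhs) == 0
    }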
This regenerates the precomputed secp256k1 byte points used to optimize
scalar multiplication. This should have been done as part of the
normalization correction.
This commit adds a new function to btcec: IsCompressedPubKey. This
function returns true iff the passed serialized public key is encoded
in compressed format.
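The check itself is small; a sketch of what it tests (a compressed key
is exactly 33 bytes: a parity prefix followed by the 32-byte X
coordinate):

    // isCompressedPubKey reports whether key is serialized in compressed
    // format: 33 bytes with a 0x02 (even Y) or 0x03 (odd Y) prefix.
    func isCompressedPubKey(key []byte) bool {
        return len(key) == 33 && (key[0] == 0x02 || key[0] == 0x03)
    }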
This slightly optimizes the NAF function by not returning the unused
bit when there is no carry.
It also adds a bunch of additional unit tests which I made while
debugging.
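For context, a textbook sketch of how the NAF digits are chosen;
btcec's actual function works directly on big-endian byte slices and
returns separate positive and negative magnitude slices, but the digit
rule is the same:

    // toNAF returns the non-adjacent form of k as little-endian digits in
    // {-1, 0, 1}; no two adjacent digits are nonzero, which minimizes the
    // point additions needed during scalar multiplication.
    func toNAF(k uint64) []int8 {
        var digits []int8
        for k > 0 {
            var d int8
            if k&1 == 1 {
                if k&3 == 1 {
                    d = 1 // k ≡ 1 (mod 4): emit 1 and subtract it
                    k--
                } else {
                    d = -1 // k ≡ 3 (mod 4): emit -1, creating a zero run
                    k++
                }
            }
            digits = append(digits, d)
            k >>= 1
        }
        return digits
    }

For example, toNAF(7) yields [-1 0 0 1], i.e. 7 = 8 - 1, so a scalar
multiply performs one addition and one subtraction instead of three
additions.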
This modifies the normalize function of the internal field value to
both optimize it and address an issue where the reduction could
lead to an incorrect result with a small range of values. It also adds
tests to ensure the behavior is correct.
The following benchmark shows the relative speedups as a result of the
optimization on my system. In particular, the changes result in
approximately a 14% speedup in Normalize, which ultimately translates to
a 2% speedup in signature verification.
benchmark                      old ns/op    new ns/op    delta
--------------------------------------------------------------------
BenchmarkAddJacobian           1364         1289         -5.50%
BenchmarkAddJacobianNotZOne    3150         3091         -1.87%
BenchmarkScalarBaseMult        134117       132816       -0.97%
BenchmarkScalarBaseMultLarge   135067       132966       -1.56%
BenchmarkScalarMult            411218       402217       -2.19%
BenchmarkSigVerify             671585       657833       -2.05%
BenchmarkFieldNormalize        36.0         31.0         -13.89%
The GitHub markdown interpreter has been changed such that it no longer
allows spaces between the brackets and parentheses of links and now
requires a newline between anchors and other formatting. This updates
all of the markdown files accordingly.
While here, it also corrects a couple of inconsistencies in some of the
README.md files.
This simplifies the code based on the recommendations of the gosimple
lint tool.
Also, it increases the deadline for the linters to run to 10 minutes and
reduces the number of threads that it uses. This is being done because
the Travis environment has become increasingly slower and it also seems
to be hampered by too many threads running concurrently.
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much and it also
gets rid of the need for internal_test.go to bridge.
Also, remove the exception from the lint checks about returning the
unexported type since it is no longer required.
This adds new tests to the TestNormalize, TestMul, and TestAdd2
functions which trigger an issue with modular reduction that was fixed
in the previous commit, to prevent regressions.
As noted in issue #706, the existing code had an issue where the
normalized result was > P when both the first and second words of the
field representation being normalized were greater than or equal to
the first and second words of P. Although this condition is rare in
practice, it needs to be handled properly.
This resolves the issue by comparing the low words in the final
reduction step against the normalized low order prime bits to ensure the
final subtraction occurs correctly any time they're > P. This approach
retains the constant time property as well.
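The structure of that final reduction step, shrunk to a single-limb toy
for illustration (Go makes no strict constant-time guarantees, and the
real code masks across its ten 26-bit words, but the branch-free shape
is the same):

    // finalReduce returns v mod p for any v < 2*p without branching on
    // the comparison: build an all-ones mask exactly when v >= p, then
    // subtract p & mask, so the same instructions run for every input.
    func finalReduce(v, p uint64) uint64 {
        d := v - p
        // borrow is 1 iff the subtraction underflowed, i.e. v < p.
        borrow := ((^v & p) | (^(v ^ p) & d)) >> 63
        mask := borrow - 1 // all ones when v >= p, zero when v < p
        return v - (p & mask)
    }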
This is mostly a backport of some of the same modifications made in
Decred along with a few additional things cleaned up. In particular,
this updates the code to make use of the new chainhash package.
Also, since this required API changes anyways and the hash algorithm is
no longer tied specifically to SHA, all other functions throughout the
code base which had "Sha" in their name have been changed to Hash so
they are not incorrectly implying the hash algorithm.
The following is an overview of the changes:
- Remove the wire.ShaHash type
- Update all references to wire.ShaHash to the new chainhash.Hash type
- Rename the following functions and update all references:
  - wire.BlockHeader.BlockSha -> BlockHash
  - wire.MsgBlock.BlockSha -> BlockHash
  - wire.MsgBlock.TxShas -> TxHashes
  - wire.MsgTx.TxSha -> TxHash
  - blockchain.ShaHashToBig -> HashToBig
  - peer.ShaFunc -> peer.HashFunc
- Rename all variables that included sha in their name to include hash
  instead
- Update for function name changes in other dependent packages such as
  btcutil
- Update copyright dates on all modified files
- Update glide.lock file to use the required version of btcutil
Profiling revealed that lookups into the signature cache included an
expensive comparison against the stored `sigInfo` struct. This lookup
had the potential to be more expensive than directly verifying the
signature itself!
In addition, evictions were rather expensive because they involved
reading from /dev/urandom, or equivalent, for each eviction once the
signature cache was full as well as potentially iterating over every
item in the cache in the worst case.
To remedy this poor performance, several changes have been made (a
sketch of the resulting cache shape appears after this list):
* Change the lookup key to the fixed-size 32-byte signature hash
* Perform a full equality check only if there is a cache hit, which
  results in a significant speed up for both insertions and existence
  checks
* Override entries in the case of a colliding hash on insert
* Add an IsEqual() method to the Signature and PublicKey types in the
  btcec package to facilitate easy equivalence testing
* Allocate the signature cache map with the max number of entries in
  order to avoid unnecessary map re-sizes/allocations
* Optimize evictions from the signature cache: delete the first entry
  seen, which is safe from manipulation due to the preimage resistance
  of the hash function. With this eviction scheme, removals are
  effectively O(1)
* Double the default maximum number of entries within the signature
  cache due to the reduction in the size of a cache entry
Fixes #575.
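A sketch of the resulting cache shape (locking omitted; the names
follow the txscript types, but this is illustrative rather than the
full implementation):

    package sketch

    import (
        "github.com/btcsuite/btcd/btcec"
        "github.com/btcsuite/btcd/chaincfg/chainhash"
    )

    // sigCacheEntry holds what must be rechecked on a hash collision.
    type sigCacheEntry struct {
        sig    *btcec.Signature
        pubKey *btcec.PublicKey
    }

    // SigCache keys entries by the fixed-size 32-byte signature hash.
    type SigCache struct {
        validSigs  map[chainhash.Hash]sigCacheEntry
        maxEntries uint
    }

    // Exists pays for the full equality comparison only on a cache hit.
    func (s *SigCache) Exists(hash chainhash.Hash, sig *btcec.Signature,
        pubKey *btcec.PublicKey) bool {

        entry, ok := s.validSigs[hash]
        return ok && entry.pubKey.IsEqual(pubKey) && entry.sig.IsEqual(sig)
    }

    // Add inserts an entry, evicting an arbitrary existing one when full.
    func (s *SigCache) Add(hash chainhash.Hash, sig *btcec.Signature,
        pubKey *btcec.PublicKey) {

        if s.maxEntries == 0 {
            return
        }
        if uint(len(s.validSigs))+1 > s.maxEntries {
            // Delete the first entry range produces: effectively O(1),
            // and preimage resistance keeps an attacker from choosing
            // which signature hash gets removed.
            for k := range s.validSigs {
                delete(s.validSigs, k)
                break
            }
        }
        s.validSigs[hash] = sigCacheEntry{sig, pubKey}
    }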
First, it removes the documentation section from all the README.md files
and instead puts a web-based godoc badge and link at the top with the
other badges. This is being done since the local godoc tool no longer
ships with Go by default, so the instructions no longer work without
first installing godoc. Due to this, pretty much everyone uses the
web-based godoc these days anyways. Anyone who has manually installed
godoc won't need instructions.
Second, it makes sure the ISC license badge is at the top with the other
badges and removes the textual reference in the overview section.
Finally, it modifies the Installation section to Installation and
Updating and adds a '-u' to the 'go get' command since it works for
both and thus is simpler.
This commit adds two new verification flags to txscript named
ScriptVerifyStrictEncoding and ScriptVerifyDerSignatures.
The ScriptVerifyStrictEncoding flag enforces signature scripts
and public keys to follow the strict encoding requirements.
The ScriptVerifyDerSignatures flag enforces signatures to follow the
strict DER encoding requirements.
These flags mimic Bitcoin Core's SCRIPT_VERIFY_STRICTENC and
SCRIPT_VERIFY_DERSIG flags. The commit also brings the Bitcoin Core
test scripts up to date.
This commit modifies the pre-computed table used to optimize the
secp256k1 scalar multiplication to be a string instead of a byte slice.
This change makes compilation more efficient since the Go compiler
internally represents byte slices inefficiently.
This reduces the memory needed to compile btcec to 3MB from the
previous 40MB.
In addition, it modifies the code which loads the pre-computed table to
deserialize directly into the table instead of into locals that are then
copied.
Fixes #297.
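The idea in miniature (the constant's contents here are hypothetical
stand-ins for the generated table bytes):

    // precomputed stands in for the generated secp256k1 byte points,
    // stored as a string constant instead of a []byte composite literal,
    // which the Go compiler represents far more compactly.
    const precomputed = "\x0e\x33\x5f\x92" // generated bytes would go here

    // loadTable deserializes directly into dst; copy accepts a string
    // source, so no intermediate locals or conversions are needed.
    func loadTable(dst []byte) {
        copy(dst, precomputed)
    }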
This commit contains various modifications for code and comment
consistency in the btcec package:
- Call out references at the top and reference them by their identifier in
the other comments
- Remove a TODO that no longer applies
- Add comments to the fields in the KoblitzCurve struct and reorder them
slightly
- Make comments wrap to 80
- Clean up code that was far exceeding col 80 (only function declarations
typically do this)
- Extend block comments to use as much of the 80 cols as available
- Add a bit more explanation in a couple of places
- Update copyright year on secp256k1.go
- Fix a couple of typos in the comments
The benchmarks are still only compiled when running 'go test' so this has
no effect on regular usage. This is being done because benchmarks often
need access to internal state.
Normal tests are kept in a separate package since they also serve to
exercise the public API, and by intentionally making it more difficult to
reach into the internals, it helps ensure the public API is sanely usable.
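The resulting layout, sketched with hypothetical file names:

    // bench_test.go -- internal test package: benchmarks are still only
    // compiled under 'go test' but can reach unexported identifiers.
    package btcec

    // pubkey_test.go -- external test package: ordinary tests exercise
    // only the exported API.
    package btcec_test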
Since the benchmarks can now access the internals directly, this commit
also removes the functions which exposed the internals to the test package
from internal_test.go which were only used by the benchmarks.
Also, it removes a duplicate benchmark.
This commit converts the initialization of the constants to use a function
which panics on error instead of just ignoring the error. This is
acceptable since they are hard-coded constants and should never fail.
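A minimal sketch of the pattern (a helper of this shape; panicking is
fine because the inputs are compile-time constants):

    import "math/big"

    // fromHex converts a hard-coded hex constant and panics on malformed
    // input rather than returning an error, which is acceptable since the
    // constants are fixed at compile time and must never fail.
    func fromHex(s string) *big.Int {
        r, ok := new(big.Int).SetString(s, 16)
        if !ok {
            panic("invalid hex in source file: " + s)
        }
        return r
    }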
This commit adds code which generates the linearly independent vectors
used by the secp256k1 endomorphism code. These values are hard-coded into
the curve already, but having the code used to generate them is handy
should any future curves be added which can also make use of the same
class of endomorphism.
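A sketch of the standard construction via the extended Euclidean
algorithm, stopping once the remainder drops below sqrt(N); the
textbook version also examines one further remainder pair and keeps the
shorter vector, which is omitted here for brevity:

    package main

    import (
        "fmt"
        "math/big"
    )

    // glvBasis returns two linearly independent vectors (a1,b1), (a2,b2)
    // with a + b*lambda ≡ 0 (mod n). Every Euclidean step maintains
    // t[i]*lambda ≡ r[i] (mod n), so (r, -t) always satisfies the
    // congruence; the vectors become short once r passes sqrt(n).
    func glvBasis(n, lambda *big.Int) (a1, b1, a2, b2 *big.Int) {
        sqrtN := new(big.Int).Sqrt(n)
        rPrev, r := new(big.Int).Set(n), new(big.Int).Set(lambda)
        tPrev, t := big.NewInt(0), big.NewInt(1)
        for r.Cmp(sqrtN) >= 0 {
            q := new(big.Int).Div(rPrev, r)
            rNext := new(big.Int).Sub(rPrev, new(big.Int).Mul(q, r))
            tNext := new(big.Int).Sub(tPrev, new(big.Int).Mul(q, t))
            rPrev, r = r, rNext
            tPrev, t = t, tNext
        }
        a1, b1 = r, new(big.Int).Neg(t)
        a2, b2 = rPrev, new(big.Int).Neg(tPrev)
        return
    }

    func main() {
        // secp256k1 group order N and endomorphism scalar lambda.
        n, _ := new(big.Int).SetString(
            "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16)
        lambda, _ := new(big.Int).SetString(
            "5363ad4cc05c30e0a5261c028812645a122e22ea20816678df02967c1b23bd72", 16)

        a1, b1, _, _ := glvBasis(n, lambda)
        // Verify the defining congruence a1 + b1*lambda ≡ 0 (mod n).
        chk := new(big.Int).Mul(b1, lambda)
        chk.Add(chk, a1).Mod(chk, n)
        fmt.Println("a1 + b1*lambda ≡ 0 (mod n):", chk.Sign() == 0)
    }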