This commit converts the initialization of the constants to use a function
which panics on error instead of just ignoring the error. This is
acceptable since they are hard-coded constants and should never fail.
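A minimal sketch of the pattern, with hypothetical names (not the package's actual identifiers): a bad constant now halts the program at init instead of being silently ignored.

```go
package curveparams

import "encoding/hex"

// mustDecodeHex decodes a hard-coded hex constant and panics on error.
// Since the input is a compile-time constant, a failure can only be a
// programming error, and any test run will catch it immediately.
func mustDecodeHex(s string) []byte {
	b, err := hex.DecodeString(s)
	if err != nil {
		panic("invalid hard-coded constant: " + err.Error())
	}
	return b
}

// Constants are then initialized at package scope, for example:
var exampleConstant = mustDecodeHex("deadbeef")
```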
This commit adds code which generates the linearly independent vectors
used by the secp256k1 endomorphism code. These values are hard-coded into
the curve already, but having the code used to generate them is handy
should any future curves be added which can also make use of the same
class of endomorphism.
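For reference, a sketch of the underlying approach, assuming the standard GLV construction: run the extended Euclidean algorithm on the group order N and λ and take the remainders that bracket √N, yielding two short vectors (a1, b1) and (a2, b2) with ai + bi·λ ≡ 0 (mod N). Function names are illustrative, not those of the real generator, and the code assumes well-formed inputs (λ ≥ √N, as holds for secp256k1).

```go
package genvectors

import "math/big"

// glvVectors derives two linearly independent vectors (a1, b1) and
// (a2, b2) with ai + bi*lambda ≡ 0 (mod n) for splitting scalars.
func glvVectors(n, lambda *big.Int) (a1, b1, a2, b2 *big.Int) {
	// Invariant of extended Euclid: s[i]*n + t[i]*lambda = r[i], so
	// (r[i], -t[i]) always satisfies r[i] - t[i]*lambda ≡ 0 (mod n).
	r := []*big.Int{new(big.Int).Set(n), new(big.Int).Set(lambda)}
	t := []*big.Int{big.NewInt(0), big.NewInt(1)}
	sqrtN := new(big.Int).Sqrt(n)

	for i := 1; ; i++ {
		q := new(big.Int).Div(r[i-1], r[i])
		r = append(r, new(big.Int).Sub(r[i-1], new(big.Int).Mul(q, r[i])))
		t = append(t, new(big.Int).Sub(t[i-1], new(big.Int).Mul(q, t[i])))
		if r[i+1].Cmp(sqrtN) < 0 {
			// i is the largest index l with r[l] >= sqrt(n).
			l := i
			a1, b1 = r[l+1], new(big.Int).Neg(t[l+1])

			// One more division step yields r[l+2], t[l+2]; the
			// second vector is the shorter of (r[l], -t[l]) and
			// (r[l+2], -t[l+2]).
			q = new(big.Int).Div(r[l], r[l+1])
			rNext := new(big.Int).Sub(r[l], new(big.Int).Mul(q, r[l+1]))
			tNext := new(big.Int).Sub(t[l], new(big.Int).Mul(q, t[l+1]))
			if sqLen(r[l], t[l]).Cmp(sqLen(rNext, tNext)) <= 0 {
				a2, b2 = r[l], new(big.Int).Neg(t[l])
			} else {
				a2, b2 = rNext, new(big.Int).Neg(tNext)
			}
			return
		}
	}
}

// sqLen returns a^2 + b^2, used to compare vector lengths without sqrt.
func sqLen(a, b *big.Int) *big.Int {
	sq := new(big.Int).Mul(a, a)
	return sq.Add(sq, new(big.Int).Mul(b, b))
}
```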
This commit contains the entire btcnet repository along with several
changes needed to move all of the files into the chaincfg directory in
order to prepare it for merging. This does NOT update btcd or any of the
other packages to use the new location as that will be done separately.
- All import paths in the old btcnet test files have been changed to the
new location
- All references to btcnet as the package name have been changed to
chaincfg
- The coveralls badge has been removed since it unfortunately doesn't
support coverage of sub-packages
This is ongoing work toward #214.
* Address index is built up concurrently with the `--addrindex` flag.
* Entire index can be deleted with `--dropaddrindex`.
* New RPC call: `searchrawtransaction` (see the sketch after this list)
* Returns all transactions related to a particular address
* Includes mempool transactions
* Requires `--addrindex` to be activated and fully caught up.
* New `blockLogger` struct has been added to factor out common logging
code
* Wiki and docs updated with new features.
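A hypothetical invocation of the new call over the JSON-RPC interface. The method and flag names come from the notes above; the endpoint, credentials, and plain-HTTP transport are illustrative assumptions (btcd normally requires TLS and configured RPC credentials).

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// searchRawTransaction issues the new RPC for a given address. The
// node must be running with --addrindex and the index fully caught up.
func searchRawTransaction(addr string) (string, error) {
	payload := fmt.Sprintf(
		`{"jsonrpc":"1.0","id":1,"method":"searchrawtransaction","params":[%q]}`,
		addr)
	req, err := http.NewRequest("POST", "http://127.0.0.1:8334",
		strings.NewReader(payload))
	if err != nil {
		return "", err
	}
	req.SetBasicAuth("rpcuser", "rpcpass") // assumed credentials
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```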
Use Non-Adjacent Form (NAF) of large numbers to reduce ScalarMult computation times.
Preliminary results indicate around an 8-9% speed improvement according to BenchmarkScalarMult.
The algorithm used is Algorithm 3.77 from Guide to Elliptic Curve Cryptography by Hankerson et al.
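For reference, a minimal sketch of the NAF recoding itself, using the textbook recurrence rather than the library's byte-oriented implementation:

```go
package naf

import "math/big"

// toNAF returns the non-adjacent form of k as digits in {-1, 0, 1},
// least significant first. No two consecutive digits are nonzero, so
// on average only 1/3 of the digits are nonzero (versus 1/2 of the
// bits in plain binary), reducing point additions in double-and-add.
func toNAF(k *big.Int) []int8 {
	k = new(big.Int).Set(k) // work on a copy
	four := big.NewInt(4)
	var digits []int8
	for k.Sign() > 0 {
		var d int8
		if k.Bit(0) == 1 {
			// k odd: choose d = 2 - (k mod 4), i.e. 1 or -1, so that
			// k - d ≡ 0 (mod 4), forcing the next digit to be 0.
			m := new(big.Int).Mod(k, four)
			d = int8(2 - m.Int64())
			k.Sub(k, big.NewInt(int64(d)))
		}
		digits = append(digits, d)
		k.Rsh(k, 1)
	}
	return digits
}
```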
This closes #3
This implements a speedup to ScalarMult using the endomorphism available to secp256k1.
Note the constants lambda, beta, a1, b1, a2 and b2 are from here:
https://bitcointalk.org/index.php?topic=3238.0
Preliminary tests indicate a speedup of 17%-20% (BenchmarkScalarMult).
More speedup can probably be achieved once splitK uses something more like what fieldVal uses. Unfortunately, the prime for this math is the order of G (N), not P.
Note the NAF optimization was specifically not done as that's the purview of another issue.
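A rough big.Int sketch of the scalar decomposition behind the speedup, assuming the standard GLV rounding step (the real splitK works with the constants above): k is rewritten as k1 + k2·λ (mod N) with both halves near √N in size, so k·P becomes k1·P + k2·φ(P), where φ(x, y) = (β·x, y).

```go
package endosplit

import "math/big"

// splitK returns k1, k2 with k ≡ k1 + k2·lambda (mod n) and both
// halves roughly half the bit length of k, using the precomputed
// vectors (a1, b1) and (a2, b2).
func splitK(k, n, a1, b1, a2, b2 *big.Int) (k1, k2 *big.Int) {
	// c1 = round(b2·k / n), c2 = round(-b1·k / n)
	c1 := roundDiv(new(big.Int).Mul(b2, k), n)
	c2 := roundDiv(new(big.Int).Neg(new(big.Int).Mul(b1, k)), n)

	// k1 = k - c1·a1 - c2·a2
	k1 = new(big.Int).Sub(k, new(big.Int).Mul(c1, a1))
	k1.Sub(k1, new(big.Int).Mul(c2, a2))

	// k2 = -c1·b1 - c2·b2
	k2 = new(big.Int).Neg(new(big.Int).Mul(c1, b1))
	k2.Sub(k2, new(big.Int).Mul(c2, b2))
	return k1, k2
}

// roundDiv returns x/y rounded to the nearest integer (half up),
// assuming y > 0, via floor((2x + y) / (2y)).
func roundDiv(x, y *big.Int) *big.Int {
	num := new(big.Int).Lsh(x, 1)
	num.Add(num, y)
	return num.Div(num, new(big.Int).Lsh(y, 1))
}
```

Interleaving the two half-length multiplications (Shamir's trick) roughly halves the number of point doublings, which is consistent with the observed improvement.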
Changed both ScalarMult and ScalarBaseMult to take advantage of curve.N to reduce k.
This results in an 80% speedup for large values of k in ScalarBaseMult.
Note the new BenchmarkScalarBaseMultLarge benchmark is how that speedup
number can be checked.
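The reduction is sound because N·G is the point at infinity, so adding any multiple of N to k cannot change k·G. A minimal sketch with a hypothetical helper name:

```go
package scalar

import "math/big"

// reduceScalar returns k mod n. Since n·G is the point at infinity,
// (k mod n)·G == k·G, so the multiplication loop never has to process
// more bits than the group order, regardless of how large k is.
func reduceScalar(k, n *big.Int) *big.Int {
	if k.Cmp(n) < 0 {
		return k // already in range
	}
	return new(big.Int).Mod(k, n)
}
```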
This closes #1
This commit reworks the way that the pre-computed table which is used to
accelerate scalar base multiplication is generated and loaded to make use of the
go generate infrastructure and greatly reduce the memory needed to compile
as well as speed up the compile.
Previously, the table was generated by writing the in-memory
representation directly into the file. Since the table has a very
large number of entries, the Go compiler needed nearly 1GB of memory
to compile it, and compilation took a comparatively long time.
Instead, this commit modifies the generated table to be a serialized,
compressed, and base64-encoded byte slice. At init time, this process is
reversed to create the in-memory representation. This approach provides
fast compile times with much lower memory needed to compile (16MB versus
1GB). In addition, the init time cost is extremely low, especially as
compared to computing the entire table.
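A sketch of the load path described above, with illustrative names; the exact serialization and compression formats are whatever the generator chooses, assumed here to be zlib over the serialized table:

```go
package precomp

import (
	"bytes"
	"compress/zlib"
	"encoding/base64"
	"io"
)

// loadTable reverses the generation steps at init time: base64 decode,
// then decompress, yielding the serialized table bytes that are
// finally deserialized into the in-memory representation.
func loadTable(encoded string) ([]byte, error) {
	compressed, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	zr, err := zlib.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```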
Finally, the automatic generation wasn't really automatic. It is now
fully automatic with 'go generate'.
This commit contains the entire btcwire repository along with several
changes needed to move all of the files into the wire directory in
order to prepare it for merging. This does NOT update btcd or any of the
other packages to use the new location as that will be done separately.
- All import paths in the old btcwire test files have been changed to the
new location
- All references to btcwire as the package name have been changed to
wire
- The coveralls badge has been removed since it unfortunately doesn't
support coverage of sub-packages
This is ongoing work toward #214.
This commit causes TravisCI to run several tools on each pull request and
commit to help ensure the code quality remains high. This includes gofmt,
goimports, golint, go vet, the race detector, and coverage stats.
Also, it instructs TravisCI to use nicer container-based builds.
This change converts the leveldb database's ExistsSha() and
ExistsTxSha() to use the goleveldb API. Has() only reports whether
the key exists and does not need to read the entire value into
memory, resulting in less disk I/O and much less garbage collection.
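A sketch of the difference, using goleveldb's (*DB).Has signature:

```go
package db

import "github.com/syndtr/goleveldb/leveldb"

// existsSha reports whether the key is present. Has answers the
// membership question without reading the value into memory, whereas
// the old Get-based approach allocated and copied the full value (and
// read its disk blocks) only to throw it away.
func existsSha(db *leveldb.DB, shaKey []byte) (bool, error) {
	return db.Has(shaKey, nil)
}
```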
This commit contains the entire btcchain repository along with several
changes needed to move all of the files into the blockchain directory in
order to prepare it for merging. This does NOT update btcd or any of the
other packages to use the new location as that will be done separately.
- All import paths in the old btcchain test files have been changed to
the new location
- All references to btcchain as the package name have been changed to
blockchain
* When an inv is to be sent to the server for relaying, the sender
  already has access to the underlying data, so instead of requiring
  the relay to look up the data by hash, the data is now coupled with
  the inv in the request message, as sketched below.
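A sketch of the coupled message; the field names and import path are illustrative, not btcd's exact struct:

```go
package server

import "github.com/btcsuite/btcd/wire"

// relayMsg packages an inventory vector together with the data it
// refers to, so the relay can announce the item without performing a
// hash lookup to recover what is being relayed.
type relayMsg struct {
	invVect *wire.InvVect // what is being announced
	data    interface{}   // the underlying block or transaction
}
```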
This commit contains the entire btcscript repository along with several
changes needed to move all of the files into the txscript directory in
order to prepare it for merging. This does NOT update btcd or any of the
other packages to use the new location as that will be done separately.
- All import paths in the old btcscript test files have been changed to the
new location
- All references to btcscript as the package name have been changed to
txscript
This is ongoing work toward #214.
The ScriptVerifySigPushOnly flag requires signature scripts to
contain only pushed data. This is rule 2 of BIP0062.
Mimics Bitcoin Core commit d752ba86c1872f64a4641cf77008826d32bde65f
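A sketch of what enforcement looks like at the engine boundary, assuming the btcsuite import path and txscript's IsPushOnlyScript helper; the engine's actual internal wiring differs:

```go
package validate

import (
	"errors"

	"github.com/btcsuite/btcd/txscript"
)

// checkSigScript rejects a signature script containing anything other
// than data pushes when the ScriptVerifySigPushOnly flag is set.
func checkSigScript(flags txscript.ScriptFlags, sigScript []byte) error {
	if flags&txscript.ScriptVerifySigPushOnly != 0 &&
		!txscript.IsPushOnlyScript(sigScript) {
		return errors.New("signature script is not push only")
	}
	return nil
}
```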
This commit makes use of the new ScriptDiscourageUpgradableNops flag to
reject execution of NOP1 through NOP10 for transactions that are
considered standard.
This mirrors the behavior added to Bitcoin Core via pull request 5000.
This commit modifies various regression tests to make them more consistent
with other tests throughout the code base.
Also, it allows all of the tests to run in parallel.
NOP1 through NOP10 are reserved for future soft-fork upgrades. When
such an upgrade occurs, the NOP argument will then require verification.
Rejecting transactions that contain these NOPs from the mempool will
discourage them from being mined elsewhere and ensure btcd itself
never mines them. This keeps out scripts that are now invalid
(according to the majority of hashing power) even if the client has
not yet upgraded.
Non-executed upgradable NOPs are still allowed as they will still be
valid post-upgrade.
Mimics Bitcoin Core commit 03914234b3c9c35d66b51d580fe727a0707394ca
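A self-contained sketch of how such a rejection can be wired into opcode execution; the opcode byte values are the standard ones (OP_NOP1 = 0xb0 through OP_NOP10 = 0xb9), but the identifier names and flag value are hypothetical:

```go
package script

import "fmt"

// Illustrative stand-ins for the real opcode values and the new flag.
const (
	opNop1                         = 0xb0
	opNop10                        = 0xb9
	scriptDiscourageUpgradableNops = 1 << 0
)

// execNop sketches the handler behavior: when the flag is set,
// executing any of NOP1 through NOP10 fails validation; a NOP in a
// branch that is never executed stays valid, matching the note above.
func execNop(op byte, flags uint32) error {
	if op >= opNop1 && op <= opNop10 &&
		flags&scriptDiscourageUpgradableNops != 0 {
		return fmt.Errorf("OP_NOP%d is discouraged pending a soft-fork upgrade",
			op-opNop1+1)
	}
	return nil // otherwise the opcode remains a no-op
}
```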
BIP0062 defines specific rules and canonical encodings for data pushes.
The existing script builder code already conformed to all but one of
the canonical data push rules; the remaining rule, added after the
builder was originally implemented, requires a push of the single
byte 0x81 to be converted to OP_1NEGATE. This commit implements that
case and expands the existing tests to explicitly cover all cases
mentioned in BIP0062.
In addition, as a part of this change, the AddData function has been
modified so that any attempt to push more than the maximum script
element size (520 bytes) in one push, or any push that would cause
the script to exceed the maximum script size allowed by the script
engine (10000 bytes), will result in the final call to the Script
function returning the script only up to the point of the first
error, along with the error. This change should have little effect
on existing callers since they are almost certainly not creating
scripts which violate these rules, as such scripts could never be
executed; however, it does mean they need to check the new error
return.
Since the regression tests intentionally need to be able to exceed that
limit, a new function named AddFullData has been added which does not
enforce the limits, but still provides canonical encoding of the pushed
data.
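A usage sketch of the updated builder: pushes go through AddData, and the final Script call must now be checked for an error. The import path is assumed:

```go
package builder

import "github.com/btcsuite/btcd/txscript"

// payToPubKeyHashScript builds a standard pay-to-pubkey-hash script.
// AddData applies the canonical-encoding rules; if a push exceeds the
// limits, Script returns the script up to the first error along with
// that error, so the result must be checked by the caller.
func payToPubKeyHashScript(pubKeyHash []byte) ([]byte, error) {
	return txscript.NewScriptBuilder().
		AddOp(txscript.OP_DUP).
		AddOp(txscript.OP_HASH160).
		AddData(pubKeyHash).
		AddOp(txscript.OP_EQUALVERIFY).
		AddOp(txscript.OP_CHECKSIG).
		Script()
}
```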
Note that this commit does not affect consensus rules nor modify the
script engine.
Also, the tests have been marked so they can run in parallel.