This updates the GetNetworkInfoResult structure to include the latest
fields added to Core for compatibility purposes.
While here, also move the definitions of the subtypes for the result
before their use for consistency.
This changes the nonce generated to detect self connections over to use
a pseudorandom number instead of a cryptographically random nonce.
There is really no good reason for it to be cryptographically strong:
the PRNG is much faster and does not burn entropy.
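For illustration, a minimal sketch of the idea, assuming nothing about the actual peer package internals: the nonce only needs to be unlikely to collide, so math/rand is sufficient.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// newNonce returns a 64-bit nonce for self-connection detection. The
// value only needs to be unlikely to collide, not unpredictable to an
// attacker, so a PRNG is sufficient.
func newNonce(r *rand.Rand) uint64 {
	return r.Uint64()
}

func main() {
	// Seed once at startup; drawing nonces afterwards is cheap and does
	// not consume OS entropy on every connection attempt.
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	fmt.Printf("nonce: %016x\n", newNonce(r))
}
```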
This removes the field that tracks whether the version has been sent
since it is no longer used after the version negotiation was separated
from the main read and write code.
This commit modifies the existing block validation logic to examine the
current version bits state of the CSV soft-fork, enforcing the new
validation rules (BIPs 68, 112, and 113) based on the current
`ThresholdState`.
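Roughly, the validation path only turns on the new rules once the deployment is active. The sketch below uses stand-in names (the ThresholdState constants and enforceCSV helper) to illustrate the gating pattern rather than quoting the package's actual code.

```go
package main

import "fmt"

// ThresholdState mirrors the BIP 9 state machine states a deployment can
// be in (names here follow the BIP, not necessarily the package).
type ThresholdState int

const (
	ThresholdDefined ThresholdState = iota
	ThresholdStarted
	ThresholdLockedIn
	ThresholdActive
	ThresholdFailed
)

// enforceCSV reports whether the BIP 68/112/113 rules should apply to a
// block whose parent has the given deployment state.
func enforceCSV(state ThresholdState) bool {
	return state == ThresholdActive
}

func main() {
	for _, s := range []ThresholdState{ThresholdStarted, ThresholdActive} {
		fmt.Printf("state %d -> enforce CSV rules: %v\n", s, enforceCSV(s))
	}
}
```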
This commit publicly exports the CreateBlock function as it can be very
useful for generating blocks for tests. Additionally, the behavior of
the function has been modified slightly to build off of the genesis
block for the specified chain if the `prevBlock` parameter is nil.
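The nil-handling can be pictured with the hypothetical helper below; pickPrevHash is not part of rpctest and only illustrates the fallback to the genesis block of the supplied chain parameters.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcd/chaincfg/chainhash"
)

// pickPrevHash mirrors the behavior described above: when the caller
// passes a nil previous block hash, build on top of the genesis block of
// the requested network instead.
func pickPrevHash(prev *chainhash.Hash, params *chaincfg.Params) *chainhash.Hash {
	if prev == nil {
		return params.GenesisHash
	}
	return prev
}

func main() {
	// With no previous block supplied, the result is the simnet genesis hash.
	fmt.Println(pickPrevHash(nil, &chaincfg.SimNetParams))
}
```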
This commit adds BIP-9 deployment parameters for all registered
networks for the CSV soft-fork package.
The mainnet and testnet parameters have been set in accordance with the
finalized BIPs.
For simnet and the regression net, the activation date is back-dated in
order to allow signaling for the soft-fork at any time. Additionally,
the expiration time for simnet and the regression net has been set to
math.MaxInt64, meaning the deployments never expire.
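For reference, a deployment entry along these lines (illustrative values only; the real parameters live in the chaincfg package):

```go
package main

import (
	"fmt"
	"math"

	"github.com/btcsuite/btcd/chaincfg"
)

func main() {
	// Illustrative values only: a back-dated start time and a
	// math.MaxInt64 expiry mean the deployment can be signaled at any
	// time and never expires, which is the simnet/regtest behavior
	// described above.
	dep := chaincfg.ConsensusDeployment{
		BitNumber:  0,
		StartTime:  0,             // always available for signaling
		ExpireTime: math.MaxInt64, // never expires
	}
	fmt.Printf("bit=%d start=%d expire=%d\n", dep.BitNumber, dep.StartTime, dep.ExpireTime)
}
```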
Now that glide is used for version management and a specific commit of
the upstream repository can be locked it is no longer necessary to
maintain a fork of the package specifically to keep a stable dependency.
While here, also update the glide dependency for btcutil since it was
switched to use the upstream path as well.
Invalid tokens: GitHub has made some changes over the past months and
the old style no longer renders at all, so this fixes the affected
tokens.
Other contributors can do the same for the rest of the project's
documents.
Thanks.
This modifies the goclean.sh script that is executed on Travis to
only run the tests once.
While it is nice to see coverage reports in the log, unfortunately the
-race and -cover flags can't be used together, and the tests have grown
complex enough that they are starting to approach the TravisCI time
limits.
This simplifies the code based on the recommendations of the gosimple
lint tool.
Also, it increases the deadline for the linters to 10 minutes and
reduces the number of threads they use. This is being done because the
Travis environment has become increasingly slow and also seems to be
hampered by too many threads running concurrently.
This modifies the blockNode and BestState structs in the blockchain
package to store hashes directly instead of pointers to them and updates
callers to deal with the API change in the exported BestState struct.
In general, the preferred approach for hashes moving forward is to store
hash values in complex data structures, particularly those that will be
used for cache entries, and accept pointers to hashes in arguments to
functions.
Some of the reasoning behind making this change is:
- It is generally preferred to avoid storing pointers to data in cache
objects since doing so can easily lead to storing interior pointers
into other structs that then can't be GC'd
- Keeping the hash values directly in the block node provides better
cache locality
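A schematic contrast of the two layouts (not the actual structs) is shown below.

```go
package main

import "github.com/btcsuite/btcd/chaincfg/chainhash"

// Before: the node keeps a pointer, which can pin whatever struct the
// hash was taken from and prevent it from being garbage collected.
type nodeWithPointer struct {
	hash *chainhash.Hash
}

// After: the node embeds the 32-byte hash value directly, giving better
// cache locality and no interior pointer for the GC to track.
type nodeWithValue struct {
	hash chainhash.Hash
}

// Functions still accept pointers to avoid copying the array at every
// call site.
func lookup(hash *chainhash.Hash) { _ = hash }

func main() {
	var n nodeWithValue
	lookup(&n.hash)
}
```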
Since the code base is currently in the process of being changed over to
decouple the download and connection logic, but not all of the necessary
parts are updated yet, ensure that blocks which are in the database but
do not have an associated main chain block index entry are treated as if
they do not exist for the purposes of the chain connection and selection
logic.
This refactors the block index logic into a separate struct and
introduces an individual lock for it so it can be queried independent of
the chain lock.
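A rough sketch of the shape this takes, with illustrative names rather than the package's real fields and methods:

```go
package main

import (
	"sync"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
)

// blockIndex is a schematic version of the idea: the index gets its own
// mutex so lookups do not need to take the chain lock.
type blockIndex struct {
	sync.RWMutex
	index map[chainhash.Hash]*blockNode
}

type blockNode struct {
	height int32
	hash   chainhash.Hash
}

// LookupNode can be called concurrently with chain processing since it
// only needs the index lock.
func (bi *blockIndex) LookupNode(hash *chainhash.Hash) *blockNode {
	bi.RLock()
	defer bi.RUnlock()
	return bi.index[*hash]
}

func main() {
	bi := &blockIndex{index: make(map[chainhash.Hash]*blockNode)}
	_ = bi.LookupNode(&chainhash.Hash{})
}
```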
This modifies the block node structure to include a couple of extra
fields needed to be able to reconstruct the block header from a node,
and exposes a new function from chain to fetch the block headers which
takes advantage of the new functionality to reconstruct the headers from
memory when possible. Finally, it updates both the p2p and RPC servers
to make use of the new function.
This is useful since many of the block header fields need to be kept in
order to form the block index anyway, and storing the extra fields means
the database does not have to be consulted when headers are requested if
the associated node is still in memory.
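A simplified sketch of the reconstruction, with a made-up headerNode standing in for the real block node:

```go
package main

import (
	"fmt"
	"time"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
)

// headerNode holds the handful of fields needed to rebuild a header
// without touching the database; the real block node layout differs.
type headerNode struct {
	version    int32
	parentHash chainhash.Hash
	merkleRoot chainhash.Hash
	timestamp  int64
	bits       uint32
	nonce      uint32
}

// header reassembles a wire.BlockHeader purely from in-memory fields.
func (n *headerNode) header() wire.BlockHeader {
	return wire.BlockHeader{
		Version:    n.version,
		PrevBlock:  n.parentHash,
		MerkleRoot: n.merkleRoot,
		Timestamp:  time.Unix(n.timestamp, 0),
		Bits:       n.bits,
		Nonce:      n.nonce,
	}
}

func main() {
	n := &headerNode{version: 4, timestamp: time.Now().Unix()}
	fmt.Println(n.header().BlockHash())
}
```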
The following timings show representative performance gains as measured
from one system:
new: Time to fetch 100000 headers: 59ms
old: Time to fetch 100000 headers: 4783ms
This removes the CalcPastMedianTime function since the past median time
is now exposed much more efficiently via the MedianTime field of the
BestState snapshot returned from the BestSnapshot function.
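Callers migrate to something like the following (a minimal sketch; the chain is assumed to be an initialized *blockchain.BlockChain):

```go
package main

import (
	"time"

	"github.com/btcsuite/btcd/blockchain"
)

// pastMedianTime shows the replacement pattern: read the median time
// from the cached best-state snapshot instead of calling the removed
// CalcPastMedianTime.
func pastMedianTime(chain *blockchain.BlockChain) time.Time {
	return chain.BestSnapshot().MedianTime
}

func main() {}
```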
This modifies the block nodes used in the blockchain package for keeping
track of the block index to use int64 for the timestamps instead of
time.Time.
This is being done because a time.Time takes 24 bytes while an int64
only takes 8, and the plan is to eventually move the entire block index
into memory instead of the current dynamically-loaded version, so
cutting the number of bytes used for the timestamp to a third is highly
desirable.
Also, the consensus code requires working with unix-style timestamps
anyway, so switching over to them in the block node does not seem
unreasonable.
Finally, this does not go so far as to change all of the time.Time
references, particularly those that are in the public API, so it is
purely an internal change.
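The pattern, in miniature (the node struct here is purely illustrative): store the unix value internally and only convert back to time.Time at the public API boundary.

```go
package main

import (
	"fmt"
	"time"
)

// node stores the timestamp as a plain unix value (8 bytes) rather than
// a time.Time (24 bytes).
type node struct {
	timestamp int64
}

func main() {
	n := node{timestamp: time.Now().Unix()}

	// Consensus code compares unix timestamps directly, so no conversion
	// is needed internally. Only at the public API boundary is the value
	// converted back to a time.Time.
	fmt.Println(time.Unix(n.timestamp, 0))
}
```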
This modifies the rpctest harness and its associated memwallet to make
use of the new filter-based notifications since the old notifications
are now deprecated.
It also updates the glide.lock file to require the necessary
btcrpcclient version.
This modifies the blockchain code to store all blocks that have passed
proof-of-work and contextual validity tests in the database even if they
may ultimately fail to connect.
This eliminates the need to store those blocks in memory, allows them to
be available as orphans later even if they were never part of the main
chain, and helps pave the way toward being able to separate the download
logic from the connection logic.
This modifies the .travis.yml file to install a release version of glide
instead of using its latest master branch. This will prevent upstream
changes from inadvertently breaking the CI builds.
This contains a bit of cleanup and additional logic to improve the
recently-added ability to specify additional checkpoints via the
--addcheckpoint option.
In particular:
- Improve error messages in the checkpoint parsing
- Correct the mergeCheckpoints function to weed out duplicate height
checkpoints while using the most-recently provided one as described by
its comment
- Add an assertion to blockchain.New that the provided checkpoints are
sorted as required (see the sketch after this list)
- Keep comments to 80 columns and use two spaces after periods in them to
be consistent with the rest of the code base
- Make the entry in doc.go match the actual btcd -h output
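As an illustration of the sorted-checkpoint assertion mentioned above, a hypothetical check might look like this (assertSorted is not the actual code):

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg"
)

// assertSorted is a hypothetical stand-in for the kind of assertion
// described above: the checkpoints handed to blockchain.New must be
// sorted by height.
func assertSorted(checkpoints []chaincfg.Checkpoint) error {
	for i := 1; i < len(checkpoints); i++ {
		if checkpoints[i].Height <= checkpoints[i-1].Height {
			return fmt.Errorf("checkpoints are not sorted by height: "+
				"%d before %d", checkpoints[i-1].Height, checkpoints[i].Height)
		}
	}
	return nil
}

func main() {
	cps := []chaincfg.Checkpoint{{Height: 100}, {Height: 50}}
	fmt.Println(assertSorted(cps))
}
```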
The most recent version of glide creates a working directory named
.glide that includes .go files and thus must be excluded from the find
operation that generates merged test coverage of all packages.
A DNS lookup was being attempted on onion addresses causing
connections to fail. This has been fixed by introducing the type
onionAddr (which implements the net.Addr interface) and passing
it to btcdDial.
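Schematically, the type looks something like the following (details are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"net"
)

// onionAddr carries the .onion host name untouched so no DNS lookup is
// ever attempted on it.
type onionAddr struct {
	addr string
}

// Network and String satisfy the net.Addr interface.
func (o *onionAddr) Network() string { return "onion" }
func (o *onionAddr) String() string  { return o.addr }

// Compile-time check that onionAddr implements net.Addr.
var _ net.Addr = (*onionAddr)(nil)

func main() {
	a := &onionAddr{addr: "abcdefghijklmnop.onion:8333"}
	fmt.Println(a.Network(), a.String())
}
```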
Also, the following onion related fixes have been made:
* getaddednodeinfo - updated to handle onion addrs.
* TorLookupIP - fixed err being shadowed.
* newServer - rename tcpAddr to netAddr
* addrStringToNetAddr - skip if host is already an IP addr.
* addrStringToNetAddr - err if tor is disabled
* getaddednodeinfo - check if host is already an IP addr.
ScriptVerifyNullFail defines that signatures must be empty if a
CHECKSIG or CHECKMULTISIG operation fails.
This commit also enables ScriptVerifyNullFail at the mempool policy
level.
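Enabling the flag amounts to OR-ing it into the verification flags passed to the script engine; the particular combination below is illustrative rather than the mempool's actual policy set.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/txscript"
)

func main() {
	// Combine NULLFAIL with other verification flags; the set chosen
	// here is only an example.
	flags := txscript.ScriptVerifyDERSignatures |
		txscript.ScriptVerifyNullFail

	fmt.Printf("script flags: %b\n", flags)
}
```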
This updates the data driven transaction script tests to use the most
recent format and test data as implemented by Core so the test data can
more easily be updated and help prove cross-compatibility correctness.
In particular, the new format combines the previously separate valid and
invalid test data files into a single file and adds a field for the
expected result. This is a nice improvement since it means tests can
now ensure script failures are due to a specific expected reason as
opposed to only generically detecting failure as the previous format
required.
The btcd script engine typically returns more fine grained errors than
the test data expects, so the test adapter handles this by allowing
expected errors in the test data to be mapped to multiple txscript
errors.
It should also be noted that the tests related to segwit have been
stripped from the data since the segwit PR has not landed in master yet.
However, the test adapter does recognize the new ability for optional
segwit data to be supplied, though it will need to properly construct
the transaction using that data when the time comes.
This converts the majority of script errors from generic errors created
via errors.New and fmt.Errorf to use a concrete type that implements the
error interface with an error code and description.
This allows callers to programmatically detect the type of error via
type assertions and an error code while still allowing the errors to
provide more context.
For example, instead of just having an error that reads "disabled
opcode" as would happen prior to these changes when a disabled opcode is
encountered, the error will now read "attempt to execute disabled opcode
OP_FOO".
While it was previously possible to programmatically detect many errors
due to them being exported, they provided no additional context and
there were also various instances that were just returning errors
created on the spot which callers could not reliably detect without
resorting to looking at the actual error message, which is nearly always
bad practice.
Also, while here, export the MaxStackSize and MaxScriptSize constants
since they can be useful for consumers of the package and perform some
minor cleanup of some of the tests.