This adds new tests to the TestNormalize, TestMul, and TestAdd2 functions
which trigger an issue with modular reduction that was fixed in the
previous commit in order to prevent regressions.
As noted in issue #706, the existing code had a bug where the normalized
result was > P when both the first and second words of the field
representation being normalized were greater than or equal to the first
and second words of P. Although this condition is rare in practice, it
needs to be handled properly.
This resolves the issue by comparing the low words in the final
reduction step against the normalized low-order prime bits to ensure the
final subtraction occurs correctly any time they're > P. This approach
retains the constant-time property as well.
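This is not the actual field code, which operates across multiple words,
but the following standalone sketch illustrates the branchless
select-and-subtract pattern involved on a single word:

    package main

    import (
        "fmt"
        "math/bits"
    )

    // reduce returns x mod p without branching on the value, assuming
    // x < 2*p. Single-word toy version for illustration only.
    func reduce(x, p uint64) uint64 {
        diff, borrow := bits.Sub64(x, p, 0) // borrow is 1 iff x < p
        mask := borrow - 1                  // all one bits iff x >= p
        return (diff & mask) | (x &^ mask)  // select diff or x branchlessly
    }

    func main() {
        fmt.Println(reduce(13, 10)) // 3
        fmt.Println(reduce(7, 10))  // 7
    }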
This adds a full-blown testing infrastructure in order to test consensus
validation rules. It is built around the idea of dynamically generating
full blocks that target specific rules and are linked together to form a
block chain. In order to properly test the rules, each test instance
starts with a valid block that is then modified in the specific way
needed to test a specific rule.
Blocks which exercise the following rules have been added for this
initial version. These tests were largely ported from the original
Java-based 'official' block acceptance tests as well as some additional
tests available in the Core Python port. It is expected that further
tests can be added over time as consensus rules change.
* Enough valid blocks to have a stable base of mature coinbases to spend
for further tests
* Basic forking and chain reorganization
* Double spends on forks
* Too much proof-of-work coinbase (extending main chain, in block that
forces a reorg, and in a valid fork)
* Max and too many signature operations via various combinations of
OP_CHECKSIG, OP_CHECKMULTISIG, OP_CHECKSIGVERIFY, and OP_CHECKMULTISIGVERIFY
* Too many and max signature operations with offending sigop after
invalid data push
* Max and too many signature operations via pay-to-script-hash redeem
scripts
* Attempt to spend tx created on a different fork
* Attempt to spend immature coinbase (on main chain and fork)
* Max size block and block that exceeds the max size
* Children of rejected blocks are either orphans or rejected
* Coinbase script too small and too large
* Max length coinbase script
* Attempt to spend tx in blocks that failed to connect
* Valid non-coinbase tx in place of coinbase
* Block with no transactions
* Invalid proof-of-work
* Block with a timestamp too far in the future
* Invalid merkle root
* Invalid proof-of-work limit (bits header field)
* Negative proof-of-work limit (bits header field)
* Two coinbase transactions
* Duplicate transactions
* Spend from transaction that does not exist
* Timestamp exactly at and one second after the median time
* Blocks with same hash via merkle root tricks
* Spend from transaction index that is out of range
* Transaction that spends more than its inputs provide
* Transaction with same hash as an existing tx that has not been
fully spent (BIP0030)
* Non-final coinbase and non-coinbase txns
* Max size block with canonical encoding which exceeds max size with
non-canonical encoding
* Spend from transaction earlier in same block
* Spend from transaction later in same block
* Double spend transaction from earlier in same block
* Coinbase that pays more than subsidy + fees
* Coinbase that includes subsidy + fees
* Invalid opcode in dead execution path
* Reorganization of txns with OP_RETURN outputs
* Spend of an OP_RETURN output
* Transaction with multiple OP_RETURN outputs
* Large max-sized block reorganization test (disabled by default since
it takes a long time and a lot of memory to run)
Finally, the README.md files in the main and docs directories have been
updated to reflect the use of the new testing framework.
This removes the exported CalcPastTimeMedian function from the
blockchain package as it is no longer needed since the information is
now available via the BestState snapshot.
Also, update the only known caller of this, which is the chain state in
the block manager, to use the snapshot instead. In reality, now that
everything the block manager chain state provides is available via the
blockchain BestState snapshot, the entire thing can be removed; however,
that will be done in a separate commit to keep the changes targeted.
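For example, a caller migrating away from the removed function would do
something like this (a sketch; it assumes the snapshot exposes the value
via a MedianTime field):

    // Previously:
    //   medianTime, err := chain.CalcPastTimeMedian()
    //
    // Now the precalculated value is read from the snapshot instead:
    snapshot := chain.BestSnapshot()
    medianTime := snapshot.MedianTime
    _ = medianTime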
This makes the enforcement of the bloom filter service bit much more
strict. In particular, it does the following:
- Moves the enforcement of the bloom filter service bit out of the peer
package and into the server so the server can ban as necessary
- Disconnect peers that send filter commands when the server is
configured to disable them regardless of the protocol version
- Bans peers whose protocol version is high enough that they are
supposed to observe that the service bit is disabled, but who ignore it
and send filter commands regardless
As an added bonus, this fixes the old logic which had a bug in that it
was examining the *remote* peer's supported services in order to choose
whether or not to disconnect instead of the *local* server's supported
services.
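A rough sketch of the server-side check (the function and field names
are assumptions based on the description above, not the exact code):

    // enforceNodeBloomFlag disconnects, and possibly bans, peers that
    // send filter commands when bloom filtering is disabled locally.
    func (sp *serverPeer) enforceNodeBloomFlag(cmd string) bool {
        // Check the *local* server's services, not the remote peer's.
        if sp.server.services&wire.SFNodeBloom != wire.SFNodeBloom {
            // Peers new enough to know about the service bit should
            // have observed it is disabled, so ban them for ignoring it.
            if sp.ProtocolVersion() >= wire.BIP0111Version {
                sp.server.BanPeer(sp)
            }
            sp.Disconnect()
            return false
        }
        return true
    }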
This modifies the blockchain.ProcessBlock function to return an
additional boolean as the first parameter which indicates whether or not
the block ended up on the main chain.
This is primarily useful for upcoming test code that needs to be able to
tell the difference between a block accepted to a side chain and a block
that either extends the main chain or causes a reorganize that causes it
to become the main chain. However, it is also useful for the addblock
utility since it allows a better error to be returned in the case a
file with out-of-order blocks is provided.
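A usage sketch of the updated signature (the flags value is merely
illustrative):

    isMainChain, isOrphan, err := chain.ProcessBlock(block, blockchain.BFNone)
    if err != nil {
        return err
    }
    if !isMainChain && !isOrphan {
        // The block was accepted, but only onto a side chain.
    }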
allAddr was being allocated with counters instead of the actual size
of the address map. This led to the possibility of including nils
in the returned slice, which resulted in a panic.
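A sketch of the corrected allocation (the addrIndex map name is an
assumption):

    // Size the slice by the map's actual length and append, so no nil
    // entries can end up in the result.
    allAddr := make([]*KnownAddress, 0, len(a.addrIndex))
    for _, ka := range a.addrIndex {
        allAddr = append(allAddr, ka)
    }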
This adds a new build tag named rpctest which must be set in order for
rpctest-based tests to be executed. The new build tag is also added to
the goclean.sh script which is executed by Travis during continuous
integration builds.
This change is being made because the rpctest framework requires
additional careful user configuration to ensure the version of btcd
under test can be programmatically launched from the system path with
all of the necessary ports open whereas all of the other tests are
self-contained within the test binary itself.
Since said additional configuration is typically not done, it leads to a
lot of false positives. Putting the tests behind a build tag allows them
to remain available and run during continuous integration without
imposing the additional configuration requirements on users.
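For instance, each rpctest-based test file carries the build constraint
and is only compiled when the tag is supplied:

    // +build rpctest

    package rpctest

    // Tests in this file only build and execute when invoked with:
    //
    //   go test -tags rpctest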
Use os.Getpid() to get the process ID, not os.Getppid(), which returns
the parent process ID. This resulted in multiple calls to
generateListeningAddresses() getting the same listening ports.
This modifies the ports that are selected for use for the p2p and rpc
ports to start with a port that is based on the process ID instead of a
hard-coded value. The chosen ports are incremented for each running
instance similar to the previous code, except the p2p and rpc ports are
now split into ranges instead of being 2 apart.
This is being done because the previous code only worked for a single
process which means it prevented the ability to run tests in parallel.
The new approach will work with multiple processes, however it must be
stated that there is still a very small probability that the stars could
align resulting in the same ports being selected.
Finally, this also reverts the recent change to run tests serially since
this fixes the underlying cause for that change.
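A sketch of the scheme (the constants and names are illustrative, not
the actual code):

    import (
        "net"
        "os"
        "strconv"
        "sync/atomic"
    )

    const (
        minPeerPort = 10000 // p2p ports: [10000, 20000)
        minRPCPort  = 20000 // rpc ports: [20000, 30000)
    )

    var numTestInstances uint32

    // generateListeningAddresses derives per-instance ports from the
    // process ID so concurrent test processes rarely collide.
    func generateListeningAddresses() (p2p, rpc string) {
        inst := atomic.AddUint32(&numTestInstances, 1)
        // Use Getpid, not Getppid; children of the same parent would
        // otherwise all derive identical ports.
        offset := int((uint32(os.Getpid()) + inst) % 10000)
        p2p = net.JoinHostPort("127.0.0.1", strconv.Itoa(minPeerPort+offset))
        rpc = net.JoinHostPort("127.0.0.1", strconv.Itoa(minRPCPort+offset))
        return p2p, rpc
    }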
This modifies goclean.sh to execute all of the tests amongst
the packages serially. The default behavior of the `go test` command is
to execute all tests in parallel amongst the listed packages. This
behavior can at times cause tests which use the `rpctest` package to
fail due to multiple `btcd` nodes attempting to bind to the same port
simultaneously. As only one node can successfully bind to the port, the
btcd processes for the other concurrent harness instances exit silently
causing the RPC clients to fail with connection timeouts as their
target process no longer exists. Executing all tests serially
eliminates such a race condition which can cause non-deterministic test
failures.
This modifies the code which handles failed server responses to attempt
to unmarshal the response bytes as a regular JSON-RPC response
regardless of the HTTP status code and, if that fails, return an error
that includes the status code as well as the raw response bytes.
This is being done because some JSON-RPC servers, such as Bitcoin Core,
don't follow the intended HTTP status code meanings. For example, if
you request a block that doesn't exist, it returns a status code of 500
which is supposed to mean internal server error instead of a standard
200 status code (since the HTTP request itself was successful) along
with the JSON-RPC error response.
The result is that errors from Core will now show up as an actual
RPCError instead of an error with the raw JSON bytes.
This also has the benefit of returning the HTTP status code in the
error for real HTTP failure cases such as 401 authentication failures,
which previously would just be an empty error when used against Core
since it doesn't return the actual response along with the status code
as it should.
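A sketch of the handling (the rawResponse type and its result helper
are assumptions based on the description):

    respBytes, err := ioutil.ReadAll(httpResponse.Body)
    if err != nil {
        return nil, err
    }

    // Try to unmarshal the body as a JSON-RPC response regardless of
    // the HTTP status code since some servers, such as Core, return
    // errors with non-200 codes.
    var resp rawResponse
    if err := json.Unmarshal(respBytes, &resp); err != nil {
        // The body is not JSON-RPC at all, so surface both the status
        // code and the raw response bytes.
        return nil, fmt.Errorf("status code: %d, response: %q",
            httpResponse.StatusCode, string(respBytes))
    }

    // result() converts a server-reported error into a *btcjson.RPCError.
    return resp.result()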
This changes the GetBlockVerbose API to not send a third parameter.
The third parameter is a boolean that expands the transaction data
structures as well. However, Bitcoin Core does not recognize this
feature.
GetBlockVerbose now only takes a hash value.
Users of GetBlockVerbose(hash, true) must now use
GetBlockVerboseTx(hash).
This modifies the return value of the GetBlock and GetBlockAsync
functions to a raw *wire.MsgBlock instead of *btcutil.Block.
This is being done because a btcutil.Block assumes there is a height
associated with the block and the block height is not part of a raw
serialized block as returned by the underlying non-verbose RPC API
getblock call and thus is not available to populate a btcutil.Block
correctly.
A raw wire.MsgBlock correctly matches what the underlying RPC call is
actually returning and thus avoids the confusion that could occur.
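Callers that still want a btcutil.Block can wrap the result themselves
(a usage sketch):

    msgBlock, err := client.GetBlock(blockHash) // now a *wire.MsgBlock
    if err != nil {
        return err
    }

    // Wrap it manually if a btcutil.Block is required; note the height
    // must be obtained separately.
    block := btcutil.NewBlock(msgBlock)
    _ = block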
GetBlockHeader returns the block header of the specified block hash.
GetBlockHeaderVerbose returns a data structure with information about
the block header of the specified block hash.
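Usage sketch (the return types shown are assumptions based on the
descriptions above):

    header, err := client.GetBlockHeader(blockHash) // *wire.BlockHeader
    if err != nil {
        return err
    }
    _ = header

    headerInfo, err := client.GetBlockHeaderVerbose(blockHash)
    if err != nil {
        return err
    }
    _ = headerInfo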
Fixes #89
This modifies the rpctest harness to call the TearDown function in the
SetUp error path. This ensures any resources, such as temp directories,
that were created during setup before whatever the failure that caused
the error are properly cleaned up.
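A sketch of the pattern inside SetUp (the node.start step is a
hypothetical placeholder for whatever fails):

    func (h *Harness) SetUp(createTestChain bool, numMatureOutputs uint32) error {
        // ... temp directories and config are created above this point ...
        if err := h.node.start(); err != nil {
            // Clean up anything that was created before the failure.
            h.TearDown()
            return err
        }
        // ...
        return nil
    }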
This corrects the case where any errors that might have happened when
creating the config file were not being displayed due to the logging
system not being initialized yet.
While here, consistently use fmt.Fprint{f,ln} throughout the loadConfig
function even after the logging system is initialized since it will
help prevent copy/paste errors such as the one that induced this change
to begin with.
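For example, before the logging subsystem exists the error is written
directly to stderr (a sketch; the function and variable names are
assumptions based on the description):

    if err := createDefaultConfigFile(preCfg.ConfigFile); err != nil {
        // The logging system is not initialized at this point, so write
        // the error directly to stderr rather than losing it.
        fmt.Fprintf(os.Stderr, "Error creating a default config file: %v\n", err)
    }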
This changes the transaction input standardness checks as follows:
- Allow any script in a pay-to-script-hash transaction to be relayed and
mined so long as it has no more than 15 signature operations
- Remove the obsolete checks which naively calculated the number of
expected inputs in favor of the more accurate ScriptVerifyCleanStack
and ScriptVerifySigPushOnly functionality of the script engine that was
added after these checks.
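A sketch of the pay-to-script-hash portion of the check (the constant
name is an assumption):

    // maxStandardP2SHSigOps is the maximum number of signature operations
    // allowed in the redeem script of a standard pay-to-script-hash input.
    const maxStandardP2SHSigOps = 15

    sigOps := txscript.GetPreciseSigOpCount(txIn.SignatureScript, prevPkScript, true)
    if sigOps > maxStandardP2SHSigOps {
        return fmt.Errorf("transaction input has %d signature operations "+
            "which is more than the allowed max of %d", sigOps,
            maxStandardP2SHSigOps)
    }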
This commit modifies the current set of integration tests to ensure
that the main harness always rejects non-standard transactions. With
this in place, even though we're using simnet parameters, we ensure
that transaction acceptance/validation is identical to that of main
net.
This commit adds two new CLI flags: one for accepting non-standard
transactions, and the other for rejecting them. The two flags are
rejected when used concurrently. Config parsing is set up such that the
desired policy expressed via the config always overrides the policy set
by default for a particular chain.
The doc.go files and the sample-btcd.conf file have been updated to
document the new flags exposing further policy control.
This adds a basic test harness infrastructure for the mempool package
which aims to make writing tests for it much easier.
The harness provides functionality for creating and signing transactions
as well as a fake chain that provides utxos for use in generating valid
transactions and allows an arbitrary chain height to be set. In order
to simplify transaction creation, a single signing key and payment
address is used throughout and a convenience type for spendable outputs
is provided.
The harness is initialized with a spendable coinbase output by default
and the fake chain height set to the maturity height needed to ensure
the provided output is in fact spendable as well as a policy that is
suitable for testing.
Since tests are in the same package and each harness provides a unique
pool and fake chain instance, the tests can safely reach into the pool
policy, or any other state, and change it for a given harness without
affecting the others.
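A sketch of what a test using the harness might look like (names such
as newPoolHarness, CreateSignedTx, and the ProcessTransaction signature
follow the description above but are assumptions):

    harness, spendableOuts, err := newPoolHarness(&chaincfg.MainNetParams)
    if err != nil {
        t.Fatalf("unable to create test pool: %v", err)
    }

    // Create a transaction spending the harness-provided mature coinbase
    // output and run it through the pool.
    tx, err := harness.CreateSignedTx(spendableOuts[:1], 1)
    if err != nil {
        t.Fatalf("unable to create signed tx: %v", err)
    }
    if _, err := harness.txPool.ProcessTransaction(tx, true, false); err != nil {
        t.Fatalf("ProcessTransaction: %v", err)
    }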
In order to be able to make use of the existing
blockchain.UtxoViewpoint type, a Clone method has been added to the
UtxoEntry type which allows the fake chain instance to keep a single
view with the actual available unspent utxos while the mempool ends up
fetching a subset of the view with the specifically requested entries
cloned.
To demo the harness, this also contains a couple of tests which make use
of it:
- TestSimpleOrphanChain -- Ensures an entire chain of orphans is
properly accepted and connects up when the missing parent transaction
is added
- TestOrphanRejects -- Ensures orphans are actually rejected when the
flag on ProcessTransaction is set to reject them
This modifies the config for the new mempool package such that it takes
a callback function to obtain the best chain height instead of requiring
a fully initialized blockchain.BlockChain instance.
This will make it much easier to test the mempool since the tests will
be able to provide their own height function to test various
functionality without having to create and manipulate full blocks and
chain instances.
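A configuration sketch (the BestHeight field name is an assumption):

    cfg := &mempool.Config{
        ChainParams: &chaincfg.SimNetParams,
        // Tests supply a simple closure instead of wiring up a full
        // blockchain.BlockChain instance.
        BestHeight: func() int32 { return fakeChain.BestHeight() },
    }
    txPool := mempool.New(cfg)
    _ = txPool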
This adds a new field to the best chain state snapshot for the
calculated past median time as returned by the calcPastMedianTime
function. This is useful since it provides fast access to it without
having to acquire the chain lock which is needed to recalculate it.
This will ultimately allow the associated exported function to be
removed since it only exists to be able to calculate this exact value,
however this commit only introduces the new field in order to keep the
changes minimal.
This commit adds a `defer` statement at the top of `TestRpcServer`
that attempts a `recover` and tears down all active harnesses in the
event that one of the tests causes a panic in the main goroutine.
Before this commit, if a buggy test caused a panic while all integration
tests were being executed, then any active harnesses would fail to be
properly torn down. This would cause the running btcd processes to be
leaked, possibly interfering with future test runs until the process was
manually killed. This commit fixes such behavior.
In order to aide in debugging, when a test panics, the test number is
printed out along with a full stack-trace from the start of the test to
the panic point.
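A sketch of the deferred handler (currentTestNum and activeHarnesses
are hypothetical names):

    defer func() {
        if r := recover(); r != nil {
            // Print the test number and a stack trace to aid debugging,
            // then tear down every active harness so that no btcd
            // processes are leaked.
            fmt.Printf("panic in test %d: %v\n%s", currentTestNum, r,
                debug.Stack())
            for _, h := range activeHarnesses {
                h.TearDown()
            }
            os.Exit(1)
        }
    }()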
This corrects several issues with the new rpctest package.
The following is an overview of the issues that have been corrected:
- The JoinNodes code was incorrect for both mempool and blocks.
- For mempool it was only checking the first node against itself
- For blocks it was only testing the height instead of hash and height
which means the chains could be different and it would claim they're
synced
- The test for mempool joins was inaccurate in a few ways:
- It was generating separate chains such that the generated
transaction was invalid (an orphan) on one chain, but not
the other
- Mempools are not automatically synced when nodes are connected and
there is a large random delay before any transaction rebroadcast
happens, so it can't be relied on for the purposes of this test
- The test for block joins was generating two independent chains of the
same height with the same difficulty and was only passing due to the
aforementioned bug in JoinNodes
- All of the ConnectNode calls were connecting the main harness outbound
to the local test harness instances
- This is not correct because ConnectNode makes the outbound
connection persistent, which means once the local test harness is
gone, it would keep trying to connect for the remainder of the tests
to a node that is never coming back and could instead end up
connecting to an independent test harness.
This commit modifies the existing Travis configuration to also install
the btcd binary within the container’s PATH.
This change is necessary in order for tests written against the new
rpctest package to work properly within the Travis testing environment.
This commit introduces a new file: rpcserver_test.go dedicated for
including integration tests for btcd using the new rpctest package.
The tests are created using a TestMain instance which first creates a
single main harness that is intended to be re-used across test
instances.
Afterwards all registered RPC tests are executed, with proper clean up
being executed regardless of the passing state of the tests.
The following RPC calls are exercised by the initial set of tests added:
* getbestblock
* getblockcount
* getblockhash
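A sketch of the TestMain pattern described above (the rpctest.New and
SetUp arguments follow the rpctest description later in this document
and are assumptions):

    func TestMain(m *testing.M) {
        // A single main harness re-used across all of the RPC tests.
        harness, err := rpctest.New(&chaincfg.SimNetParams, nil, nil)
        if err != nil {
            fmt.Println("unable to create main harness:", err)
            os.Exit(1)
        }
        if err := harness.SetUp(true, 25); err != nil {
            fmt.Println("unable to set up main harness:", err)
            os.Exit(1)
        }

        exitCode := m.Run()

        // Clean up regardless of whether the tests passed or failed.
        harness.TearDown()
        os.Exit(exitCode)
    }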
This commit adds a new package (rpctest) which provides functionality
for writing automated black box tests to exercise the RPC interface.
An instance of an rpctest consists of an active btcd process running in
(typically) --simnet mode, a btcrpcclient instance connected to said
node, and finally an embedded in-memory wallet instance (the memWallet)
which manages any coinbase outputs created by the mining btcd node.
As part of the SetUp process for an RPC test, a test author can
optionally opt to have a test blockchain created. The second argument
to SetUp dictates the number of mature coinbase outputs desired. The
btcd process will then be directed to generate a test chain of length:
100 + numMatureOutputs.
The embedded memWallet instance acts as a minimal, simple wallet for
each Harness instance. The memWallet itself is a BIP 32 HD wallet
capable of creating new addresses, creating fully signed transactions,
creating+broadcasting a transaction paying to an arbitrary set of
outputs, and querying the currently confirmed balance.
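A sketch of typical wallet interactions (method names such as
NewAddress, SendOutputs, and ConfirmedBalance are assumptions based on
the description):

    // Derive a fresh address from the harness's HD wallet and build a
    // script paying to it.
    addr, err := harness.NewAddress()
    if err != nil {
        t.Fatalf("unable to generate address: %v", err)
    }
    pkScript, err := txscript.PayToAddrScript(addr)
    if err != nil {
        t.Fatalf("unable to create script: %v", err)
    }

    // Create and broadcast a transaction paying 0.5 BTC to the address.
    txid, err := harness.SendOutputs([]*wire.TxOut{
        wire.NewTxOut(50000000, pkScript),
    }, 10)
    if err != nil {
        t.Fatalf("unable to send outputs: %v", err)
    }
    t.Logf("sent tx %v, confirmed balance: %v", txid, harness.ConfirmedBalance())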
In order to test various scenarios of blocks containing arbitrary
transactions, one can use the Generate rpc call via the exposed
btcrpcclient connected to the active btcd node. Additionally, the
Harness also exposes a secondary block generation API allowing callers
to create blocks with a set of hand-selected transactions, and an
arbitrary BlockVersion or Timestamp.
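For example, the secondary API might be used like this
(GenerateAndSubmitBlock is an assumed name matching the description):

    // Mine a block containing hand-selected transactions with a specific
    // block version and timestamp.
    block, err := harness.GenerateAndSubmitBlock(txns, 4,
        time.Now().Add(time.Minute))
    if err != nil {
        t.Fatalf("unable to generate block: %v", err)
    }
    t.Logf("mined block %v", block.Hash())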
After execution of the test logic, TearDown should be called, allowing
the test instance to clean up created temporary directories and shut
down the running processes.
Running multiple concurrent rpctest.Harness instances is supported in
order to allow for test authors to exercise complex scenarios. As a
result, the primary interface to create, and initialize an
rpctest.Harness instance is concurrent safe, with shared package level
private global variables protected by a sync.Mutex.
Fixes #116.
The protocol was silently upgraded in Core some time ago to enforce a
limit of 256 for the user agent in version messages. This updates wire
to coincide with that change and consequently includes a wire version
bump to 0.4.1.
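A usage sketch (the MaxUserAgentLen name and the error behavior are
assumptions; me, you, nonce, and lastBlock are placeholders):

    // AddUserAgent fails when the resulting user agent would exceed
    // the new limit (wire.MaxUserAgentLen).
    msgVersion := wire.NewMsgVersion(me, you, nonce, lastBlock)
    if err := msgVersion.AddUserAgent("agentname", "1.0"); err != nil {
        return err
    }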
This does the minimum work necessary to refactor the mempool code into
its own package. The idea is that separating this code into its own
package will greatly improve its testability, allow independent
benchmarking and profiling, and open up some interesting opportunities
for future development related to the memory pool.
There are likely some areas related to policy that could be further
refactored, however it is better to do that in future commits in order
to keep the changeset as small as possible during this refactor.
Overview of the major changes:
- Create the new package
- Move several files into the new package:
- mempool.go -> mempool/mempool.go
- mempoolerror.go -> mempool/error.go
- policy.go -> mempool/policy.go
- policy_test.go -> mempool/policy_test.go
- Update mempool logging to use the new mempool package logger
- Rename mempoolPolicy to Policy (so it's now mempool.Policy)
- Rename mempoolConfig to Config (so it's now mempool.Config)
- Rename mempoolTxDesc to TxDesc (so it's now mempool.TxDesc)
- Rename txMemPool to TxPool (so it's now mempool.TxPool)
- Move defaultBlockPrioritySize to the new package and export it
- Export DefaultMinRelayTxFee from the mempool package
- Export the CalcPriority function from the mempool package
- Introduce a new RawMempoolVerbose function on the TxPool and update
the RPC server to use it
- Update all references to the mempool to use the package.
- Add a skeleton README.md
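After the refactor, callers use the exported names (a sketch showing
only the renames listed above; other Config fields are omitted):

    import "github.com/btcsuite/btcd/mempool"

    // txMemPool/mempoolConfig/mempoolPolicy are now exported types.
    txPool := mempool.New(&mempool.Config{
        Policy: mempool.Policy{
            MinRelayTxFee: mempool.DefaultMinRelayTxFee,
        },
    })

    // The RPC server now delegates getrawmempool verbose handling.
    verbose := txPool.RawMempoolVerbose()
    _ = verbose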
This reduces the mempool lock contention by removing an unnecessary
check when responding to a "mempool" request.
In particular, the code first gets a list of all transactions from the
mempool and then iterates them in order to construct the inventory
vectors and apply bloom filtering if it is enabled. Since it is
possible that the transaction was removed from the mempool by another
thread while that list is being iterated, the code was checking if each
transaction was still in the mempool. This is a pointless check because
the transaction might still be removed at any point after the check
anyways. For example, it might be removed after the mempool response
has been sent to the remote peer or even while the loop is still
iterating.
This refactors the notification state mutex out of the state itself to
the client. This is being done since the code makes a copy of the
notification state and accesses that copy immutably, and therefore there
is no need for it to have its own mutex.
This makes vet happy by manually copying the notification state fields
during the deep copy instead of copying the entire struct which contains
an embedded mutex.
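A sketch of the vet-friendly deep copy (the field names are
hypothetical):

    // Copy duplicates each field explicitly instead of assigning the
    // whole struct, which would also copy an embedded mutex and trip
    // `go vet`.
    func (s *notificationState) Copy() *notificationState {
        stateCopy := &notificationState{
            notifyBlocks:   s.notifyBlocks,
            notifyReceived: make(map[string]struct{}, len(s.notifyReceived)),
        }
        for addr := range s.notifyReceived {
            stateCopy.notifyReceived[addr] = struct{}{}
        }
        return stateCopy
    }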
This exposes a new function on the ScriptBuilder type named AddOps that
allows multiple opcodes to be added via a single call and adds tests to
exercise the new function.
Finally, it updates a couple of places in the signing code that were
abusing the interface by setting its private script directly to use the
new public function instead.
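Usage sketch of the new method:

    builder := txscript.NewScriptBuilder()

    // Add several opcodes in a single call rather than one AddOp per op.
    builder.AddOps([]byte{txscript.OP_DUP, txscript.OP_HASH160})
    script, err := builder.Script()
    if err != nil {
        return err
    }
    _ = script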