This introduces a new pool membership test function to the mempool
testing infrastructure and refactors the tests to make use of it.
It is useful since it is common logic that is not only needed in the
existing tests, but will be needed by most mempool-related tests.
Avoid compatibility issues with software that relies on the behavior of
bitcoind's JSON-RPC implementation.
The JSON-RPC 1.0 spec defines that notifications must have their "id"
set to null and states that notifications do not have a response.
A JSON-RPC 2.0 notification is a request with "jsonrpc":"2.0", and
without an "id" member. The specification states that notifications
must not be responded to. JSON-RPC 2.0 permits the null value as a
valid request id, therefore such requests are not notifications.
Bitcoin Core serves requests with "id":null or even an absent "id", and
responds to such requests with "id":null in the response.
Btcd does not respond to any request without an "id" or with "id":null,
regardless of the indicated JSON-RPC protocol version.
In order to avoid compatibility issues with software relying on
Core's behavior, this commit implements "quirks mode" as follows:
- quirks mode can be enabled via configuration (disabled by default)
- If no JSON-RPC version is indicated in the request, accept and
respond to requests with "id":null
- If no JSON-RPC version is indicated in the request, accept and
respond to requests without an "id" member
- In both cases above, use "id":null in the response
- Do not respond to requests without an "id" or with "id":null when
JSON-RPC version is indicated in the request (process as notification)
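A hypothetical sketch of this decision logic in Go (the helper and its
parameters are illustrative, not the actual implementation):

    // shouldRespond reports whether a response must be sent according
    // to the rules above.
    func shouldRespond(quirksMode, versionIndicated, idPresent, idNull bool) bool {
        if idPresent && !idNull {
            // An ordinary request: always respond.
            return true
        }
        // Absent or null "id": treat the request as a notification
        // unless quirks mode is enabled and no JSON-RPC version was
        // indicated, in which case respond with "id":null like Core.
        return quirksMode && !versionIndicated
    }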
This optimizes the way in which the mempool orphan map is limited in the
same way the server block manager maps were previously optimized.
Previously the code would read a cryptographically random value large
enough to construct a hash, find the first entry larger than that value,
and evict it.
That approach is quite inefficient and could easily become a
bottleneck when processing transactions due to the need to read from a
source such as /dev/urandom and all of the subsequent hash comparisons.
Luckily, strong cryptographic randomness is not needed here. The primary
intent of limiting the maps is to control memory usage with a secondary
concern of making it difficult for adversaries to force eviction of
specific entries.
Consequently, this changes the code to make use of the pseudorandom
iteration order of Go's maps along with the preimage resistance of the
hashing function to provide the desired functionality. It has
previously been discussed that the specific pseudorandom iteration order
is not guaranteed by the Go spec even though in practice that is how it
is implemented. This is not a concern however because even if the
specific compiler doesn't implement that, the preimage resistance of the
hashing function alone is enough.
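A minimal sketch of the technique, using a toy string-keyed map in place
of the actual orphan pool:

    package main

    // limitNumOrphans evicts entries until the map is within the limit.
    // Go's map iteration order is pseudorandom in practice, so deleting
    // the first key seen removes an effectively unpredictable entry
    // with no allocations and no reads from a randomness source.
    func limitNumOrphans(orphans map[string]struct{}, maxOrphans int) {
        for len(orphans) > maxOrphans {
            for hash := range orphans {
                delete(orphans, hash)
                break
            }
        }
    }

    func main() {
        orphans := map[string]struct{}{"a": {}, "b": {}, "c": {}}
        limitNumOrphans(orphans, 1)
        println(len(orphans)) // 1
    }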
The following is a before and after comparison of the function for both
speed and memory allocations:
benchmark                  old ns/op   new ns/op   delta
BenchmarkLimitNumOrphans   3727        243         -93.48%

benchmark                  old allocs  new allocs  delta
BenchmarkLimitNumOrphans   4           0           -100.00%
This renames the mempool.Config.RelayNonStd option to AcceptNonStd which
more accurately describes its behavior since the mempool was refactored
into a separate package.
The reasoning for this change is that the mempool is not responsible for
relaying transactions (nor should it be). Its job is to maintain a pool
of unmined transactions that are validated according to consensus and
policy configuration options which are then used to provide a source of
transactions that need to be mined.
Instead, it is the server that is responsible for relaying transactions.
While it is true that the server code currently only relays txns
that were accepted to the mempool, this does not necessarily have to
be the case. It would be entirely possible (and perhaps even a good
idea as something to do in the future) to separate the relay policy from
the mempool acceptance policy (and thus indirectly the mining policy).
Older nodes previously added the IP and port information to the address
manager which proved to be unreliable as an inbound connection from a
peer didn't necessarily mean the peer itself accepted inbound
connections.
This also fixes a bug where the peer package was incorrectly sending
the peer's services as its own.
This commit introduces package connmgr which contains connection
management related functionality.
The following is an overview of the features the package provides:
- Maintain fixed number of outbound connections
- Optional connect-only mode
- Retry persistent connections with increasing back-off (see the sketch below)
- Source peers from DNS seeds
- Use Tor to resolve DNS
- Dynamic ban scores
- Test coverage
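A self-contained sketch of the increasing back-off behavior (the names
here are illustrative, not connmgr's actual API):

    package main

    import (
        "errors"
        "time"
    )

    // retryWithBackoff redials until the connect function succeeds,
    // doubling the wait after each failure up to a fixed cap.
    func retryWithBackoff(connect func() error, maxBackoff time.Duration) {
        backoff := time.Second
        for connect() != nil {
            time.Sleep(backoff)
            if backoff *= 2; backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }

    func main() {
        attempts := 0
        retryWithBackoff(func() error {
            if attempts++; attempts < 3 {
                return errors.New("dial failed")
            }
            return nil
        }, 4*time.Second)
    }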
In addition, btcd has been refactored to make use of the new package by
extending the connection manager to work with the server to source and
maintain peer connections. The following is a broad overview of the
changes to integrate the package:
- Simplify peer state by removing pending and retry peers
- Refactor to remove retries which are now handled by connmgr
- Use callback to add addresses sourced from the DNS seed
Finally the following connection-related things have been improved as a
part of this refactor:
- Fixes 100% cpu usage when network is down (#129)
- Fixes issues with max peers (#577)
- Simplify outbound peer connections management
This modifies the tests for the recently-added NullDataScript function
in several ways in an effort to make them more consistent with the tests
in the rest of the code base and improve/correct the logic:
- Use the hexToBytes and mustParseShortForm functions
- Consistently format the test errors
- Replace the valid bool flag with an expected error and test against it
- Ensure the returned script type is the expected type in all cases
This adds a new function named NullDataScript to the txscript package
that returns a provably-pruneable OP_RETURN script with the provided
data. The function will return an error if the provided data is larger
than the maximum allowed length for a nulldata script to be considered
standard.
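A minimal usage sketch, assuming btcd's github.com/btcsuite/btcd/txscript
import path:

    package main

    import (
        "encoding/hex"
        "fmt"

        "github.com/btcsuite/btcd/txscript"
    )

    func main() {
        // Build a provably-pruneable OP_RETURN output script; an error
        // is returned when the data exceeds the standard limit.
        script, err := txscript.NullDataScript([]byte("hello"))
        if err != nil {
            panic(err)
        }
        fmt.Println(hex.EncodeToString(script))
    }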
This modifies the rpctest framework to start btcd with the appropriate
network flags depending on the provided parameters.
Previously, it always started btcd with --simnet even if other
parameters, such as those for the regression test network, were
provided.
This modifies the ExtractCoinbaseHeight function to recognize small
canonically serialized block heights in coinbase scripts of blocks of
version 2 or higher.
This allows regression test chains in which blocks encode the serialized
height in the coinbase starting from block 1.
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much and it also
gets rid of the need for internal_test.go to bridge.
Also, do some light cleanup on a few tests while here.
This corrects the isNullData standard transaction type test to work
properly with canonically-encoded data pushes. In particular, single
byte data pushes that are small integers (0-16) are converted to the
equivalent numeric opcodes when canonically encoded and the code failed
to detect them properly.
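For example, the following two scripts push the same data, but only the
second is the canonical encoding (illustrative bytes, not taken from the
test code):

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/txscript"
    )

    func main() {
        // A single-byte data push of 0x05 versus the canonical form,
        // which uses the equivalent numeric opcode OP_5 instead.
        nonCanonical := []byte{txscript.OP_RETURN, txscript.OP_DATA_1, 0x05}
        canonical := []byte{txscript.OP_RETURN, txscript.OP_5}
        fmt.Printf("%x\n%x\n", nonCanonical, canonical)
    }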
It also adds several tests to ensure that both canonical and
non-canonical nulldata scripts are recognized properly and modifies the
test failure print to include the script that failed.
This does not affect consensus since it is just a standardness check.
This coincides with the mempool-only policy change which enforces
transaction finality according to the median-time-past rather than
block header timestamps. The behavior is a precursor to the full-blown
BIP 113 consensus deployment and subsequent activation.
As a result, the TimeSource field in the mempoolConfig is no longer
needed so it has been removed. Additionally, checkTransactionStandard
has been modified to instead take a time.Time as the mempool is no
longer explicitly dependent on a Chain instance.
This commit adds an additional closure function to the mempool’s config
which computes the median time past from the point of view of the best
node in the chain. The mempool test harness has also been updated to allow
setting a mock median time past for testing purposes.
In addition to increasing the testability of the mempool, this commit
should also speed up transaction and block validation for BIP 113 as
the MTP no longer needs to be re-calculated each time from scratch.
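A sketch of the idea using a hypothetical stand-in for the config type
(the real types live in the mempool and blockchain packages):

    package main

    import "time"

    // config stands in for the mempool config; only the new closure
    // field is shown.
    type config struct {
        MedianTimePast func() time.Time
    }

    func main() {
        // Tests can capture a fixed mock value in the closure; the
        // real server would instead consult the chain's best node.
        mockMTP := time.Unix(1500000000, 0)
        cfg := config{
            MedianTimePast: func() time.Time { return mockMTP },
        }
        _ = cfg.MedianTimePast()
    }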
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much and it also
gets rid of the need for internal_test.go to bridge.
Also, remove the exception from the lint checks about returning the
unexported type since it is no longer required.
This adds new tests to the TestNormalize, TestMul, and TestAdd2
functions which trigger an issue with modular reduction that was fixed
in the previous commit to prevent regressions.
As noted in issue #706, the existing code had an issue where the
normalized result was > P when both the first and second words of the
field representation being normalized were greater than or equal to the
first and second words of P. Although this condition is rare in
practice, it needs to be handled properly.
This resolves the issue by comparing the low words in the final
reduction step against the normalized low order prime bits to ensure the
final subtraction occurs correctly any time they're > P. This approach
retains the constant time property as well.
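An illustrative constant-time conditional subtraction over a toy
single-word field (the real code operates on the multi-word secp256k1
field representation):

    package main

    // subModP returns v mod p for v < 2*p, selecting the reduced value
    // with a borrow-derived mask rather than a branch so the timing
    // does not depend on whether the subtraction of p occurs.
    func subModP(v, p uint32) uint32 {
        d := uint64(v) - uint64(p)
        // The sign bit of d is set only when v < p (the subtraction
        // borrowed); spread it into an all-ones or all-zeros mask.
        borrowMask := uint32(int64(d) >> 63)
        return (v & borrowMask) | (uint32(d) &^ borrowMask)
    }

    func main() {
        println(subModP(7, 5)) // 2
        println(subModP(3, 5)) // 3
    }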
This adds a full-blown testing infrastructure in order to test consensus
validation rules. It is built around the idea of dynamically generating
full blocks that target specific rules linked together to form a block
chain. In order to properly test the rules, each test instance starts
with a valid block that is then modified in the specific way needed to
test a specific rule.
Blocks which exercise the following rules have been added for this
initial version. These tests were largely ported from the original
Java-based
'official' block acceptance tests as well as some additional tests
available in the Core python port. It is expected that further tests
can be added over time as consensus rules change.
* Enough valid blocks to have a stable base of mature coinbases to spend
for further tests
* Basic forking and chain reorganization
* Double spends on forks
* Too much proof-of-work coinbase (extending main chain, in block that
forces a reorg, and in a valid fork)
* Max and too many signature operations via various combinations of
OP_CHECKSIG, OP_CHECKMULTISIG, OP_CHECKSIGVERIFY, and
OP_CHECKMULTISIGVERIFY
* Too many and max signature operations with offending sigop after
invalid data push
* Max and too many signature operations via pay-to-script-hash redeem
scripts
* Attempt to spend tx created on a different fork
* Attempt to spend immature coinbase (on main chain and fork)
* Max size block and block that exceeds the max size
* Children of rejected blocks are either orphans or rejected
* Coinbase script too small and too large
* Max length coinbase script
* Attempt to spend tx in blocks that failed to connect
* Valid non-coinbase tx in place of coinbase
* Block with no transactions
* Invalid proof-of-work
* Block with a timestamp too far in the future
* Invalid merkle root
* Invalid proof-of-work limit (bits header field)
* Negative proof-of-work limit (bits header field)
* Two coinbase transactions
* Duplicate transactions
* Spend from transaction that does not exist
* Timestamp exactly at and one second after the median time
* Blocks with same hash via merkle root tricks
* Spend from transaction index that is out of range
* Transaction that spends more than its inputs provide
* Transaction with same hash as an existing tx that has not been
fully spent (BIP0030)
* Non-final coinbase and non-coinbase txns
* Max size block with canonical encoding which exceeds max size with
non-canonical encoding
* Spend from transaction earlier in same block
* Spend from transaction later in same block
* Double spend transaction from earlier in same block
* Coinbase that pays more than subsidy + fees
* Coinbase that includes subsidy + fees
* Invalid opcode in dead execution path
* Reorganization of txns with OP_RETURN outputs
* Spend of an OP_RETURN output
* Transaction with multiple OP_RETURN outputs
* Large max-sized block reorganization test (disabled by default since
it takes a long time and a lot of memory to run)
Finally, the README.md files in the main and docs directories have been
updated to reflect the use of the new testing framework.
This removes the exported CalcPastTimeMedian function from the
blockchain package as it is no longer needed since the information is
now available via the BestState snapshot.
Also, update the only known caller of this, which is the chain state in
block manager, to use the snapshot instead. In reality, now that
everything the block manager chain state provides is available via the
blockchain BestState snapshot, the entire thing can be removed; however,
that will be done in a separate commit to keep the changes targeted.
This makes the enforcement of the bloom filter service bit much more
strict. In particular, it does the following:
- Moves the enforcement of the bloom filter service bit out of the peer
package and into the server so the server can ban as necessary
- Disconnects peers that send filter commands when the server is
configured to disable them regardless of the protocol version
- Bans peers that have a high enough protocol version that they are
supposed to observe that the service bit is disabled, but ignore it and
send filter commands regardless.
As an added bonus, this fixes the old logic which had a bug in that it
was examining the *remote* peer's supported services in order to choose
whether or not to disconnect instead of the *local* server's supported
services.
This modifies the blockchain.ProcessBlock function to return an
additional boolean as the first parameter which indicates whether or not
the block ended up on the main chain.
This is primarily useful for upcoming test code that needs to be able to
tell the difference between a block accepted to a side chain and a block
that either extends the main chain or causes a reorganize that causes it
to become the main chain. However, it is also useful for the addblock
utility since it allows a better error in the case where a file with
out-of-order blocks is provided.
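A usage sketch, assuming existing chain (*blockchain.BlockChain) and
block (*btcutil.Block) values:

    isMainChain, isOrphan, err := chain.ProcessBlock(block, blockchain.BFNone)
    if err != nil {
        // The block was rejected outright.
    }
    if err == nil && !isMainChain && !isOrphan {
        // The block was accepted, but only onto a side chain.
    }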
allAddr was being allocated with counters instead of the actual size
of the address map. This led to the possibility of including nils
in the returned slice, which resulted in a panic.
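The fix pattern, with illustrative names:

    // Size the result by the map's actual length and append, so the
    // returned slice can never contain nil placeholder entries.
    allAddr := make([]*KnownAddress, 0, len(addrIndex))
    for _, ka := range addrIndex {
        allAddr = append(allAddr, ka)
    }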
This adds a new build tag named rpctest which must be set in order for
rpctest-based tests to be executed. The new build tag is also added to
the goclean.sh script which is executed by Travis during continuous
integration builds.
This change is being made because the rpctest framework requires
additional careful user configuration to ensure the version of btcd
under test can be programmatically launched from the system path with
all of the necessary ports open whereas all of the other tests are
self-contained within the test binary itself.
Since said additional configuration is typically not done, it leads to a
lot of false positives. Putting the tests behind a build tag allows
them to remain available and run during continuous integration without
imposing the additional configuration requirements on users.
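For reference, a test file guarded by the tag carries the build
constraint of the Go toolchain of that era and only runs when the tag
is supplied (e.g. go test -tags rpctest):

    // +build rpctest

    package rpctest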
Use os.Getpid() to get the process ID, not os.Getppid(), which returns
the parent process ID. This resulted in multiple calls to
generateListeningAddresses() getting the same listening ports.
This modifies the ports that are selected for use for the p2p and rpc
ports to start with a port that is based on the process id instead of a
hard-coded value. The chosen ports are incremented for each running
instance similar to the previous code, except the p2p and rpc ports are
now split into ranges instead of being 2 apart.
This is being done because the previous code only worked for a single
process which means it prevented the ability to run tests in parallel.
The new approach will work with multiple processes, however it must be
stated that there is still a very small probability that the stars could
align resulting in the same ports being selected.
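A sketch of pid-based port selection (the constants and helper are
illustrative, not the actual values used by the harness):

    package main

    import "os"

    const (
        p2pPortStart = 18000 // illustrative base of the p2p port range
        rpcPortStart = 19000 // illustrative base of the rpc port range
        portRange    = 1000  // pid-derived offsets wrap at this size
    )

    // harnessPorts picks the p2p and rpc ports from separate ranges,
    // offset by the process id so concurrent test processes rarely
    // collide, and incremented per harness instance within a process.
    func harnessPorts(harnessNum int) (p2p, rpc int) {
        offset := os.Getpid()%portRange + harnessNum
        return p2pPortStart + offset, rpcPortStart + offset
    }

    func main() {
        p2p, rpc := harnessPorts(0)
        println(p2p, rpc)
    }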
Finally, this also reverts the recent change to run tests serially since
this fixes the underlying cause for that change.
This modifies the goclean.sh to execute all the tests amongst
the packages serially. The default behavior of the `go test` command is
to execute all tests in parallel amongst the listed packages. This
behavior can at times cause tests which use the `rpctest` package to
fail due to multiple `btcd` nodes attempting to bind to the same port
simultaneously. As only one node can successfully bind to the port, the
btcd processes for the other concurrent harness instances exit silently
causing the RPC clients to fail with connection timeouts as their
target process no longer exists. Executing all tests serially
eliminates such a race condition which can cause non-deterministic test
failures.
This modifies the code which handles failed server responses to attempt
to unmarshal the response bytes as a regular JSON-RPC response
regardless of the HTTP status code and, if that fails, return an error
that includes the status code as well as the raw response bytes.
This is being done because some JSON-RPC servers, such as Bitcoin Core,
don't follow the intended HTTP status code meanings. For example, if
you request a block that doesn't exist, it returns a 500 status code,
which is supposed to mean an internal server error, instead of the
standard 200 status code (since the HTTP request itself was successful)
along with the JSON-RPC error response.
The result is that errors from Core will now show up as an actual
RPCError instead of an error with the raw JSON bytes.
This also has the benefit of returning the HTTP status code in the
error for real HTTP failure cases such as 401 authentication failures,
which previously would just be an empty error when used against Core
since it doesn't return the actual response along with the status code
as it should.
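A hypothetical sketch of that fallback handling (rawResponse and
parseResponse are stand-ins, not the client's actual identifiers):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // rawResponse is a minimal stand-in for a JSON-RPC response envelope.
    type rawResponse struct {
        Result json.RawMessage `json:"result"`
        Error  json.RawMessage `json:"error"`
    }

    // parseResponse tries to unmarshal the body as a JSON-RPC response
    // regardless of the HTTP status code and only falls back to an
    // error carrying the status code and raw bytes when that fails.
    func parseResponse(httpStatus int, body []byte) (*rawResponse, error) {
        var resp rawResponse
        if err := json.Unmarshal(body, &resp); err != nil {
            return nil, fmt.Errorf("status code %d, response: %q",
                httpStatus, string(body))
        }
        return &resp, nil
    }

    func main() {
        _, err := parseResponse(500, []byte("not json"))
        fmt.Println(err)
    }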