The inability of a peer to negotiate is not something that should be a
warning, which implies something is wrong. On the contrary, it is quite
expected that various peers will connect (or be connected to) that are
unable to properly negotiate for a variety of reasons. One example would
be a peer that is too old.
Also, while here, take care of a few style nits.
This corrects a few issues introduced with the connection manager where
the server was not notifying the connection manager when a connection
request is available again.
The cases resolved are:
- Unable to initialize a server peer instance in response to the connection
- Failure to associate the connection with the server peer instance
- Disconnection of a non-persistent outbound peer
It also changes the log message to debug level in the first case because
it's not something that should be shown to the user as an error, given
it's not due to anything the user has misconfigured, nor is it even
unexpected if an invalid address is provided.
This adds a new field to the peer struct which stores the protocol
version advertised by the remote peer and updates the StatsSnapshot to
return the advertised version instead of the negotiated version.
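As a rough sketch of the shape of the change (field and snapshot names
here are illustrative rather than the exact ones in the code):

    // Peer tracks both the version the remote peer advertised in its version
    // message and the version that was actually negotiated for the connection.
    type Peer struct {
        advertisedProtoVer uint32 // version claimed by the remote peer
        protocolVersion    uint32 // negotiated version used on the wire
    }

    // StatsSnap is a snapshot of peer statistics, trimmed to the relevant field.
    type StatsSnap struct {
        Version uint32
    }

    // StatsSnapshot now reports the advertised version rather than the
    // negotiated one.
    func (p *Peer) StatsSnapshot() *StatsSnap {
        return &StatsSnap{Version: p.advertisedProtoVer}
    }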
Now that glide is used for version management the deps.txt file is no
longer needed. It will still be available in the git history if it is
ever needed.
This modifies the NewMsgTx function to accept the transaction version as
a parameter and updates all callers.
The reason for this change is so the transaction version can be bumped
in wire without breaking existing tests and to provide the caller with
the flexibility to create the specific transaction version they desire.
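For example, callers that previously relied on the implicit default now
pass the desired version explicitly (a minimal sketch):

    import "github.com/btcsuite/btcd/wire"

    func buildTx() *wire.MsgTx {
        // Previously: wire.NewMsgTx()
        // The caller now chooses the transaction version to create.
        return wire.NewMsgTx(wire.TxVersion)
    }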
This does the minimum work necessary to refactor the block template
generation code into the mining package. The idea is that separating
this code into the mining package will greatly improve its testability,
allow independent benchmarking and profiling, and open up some
interesting opportunities for future development related to mining.
There are some areas related to policy and other configuration that
could be further refactored, however it is better to do that in future
commits in order to keep the changeset as small as possible during this
refactor.
Overview of the major changes:
- Move mining.go -> mining/mining.go
- Move mining_test.go -> mining/mining_test.go
- Add logger to mining package
- Update the MINR subsystem to use the new mining package logger
- Export CoinbaseFlags from the mining package
- BlkTmplGenerator is now mining.BlkTmplGenerator
- Update all references to the mining code to use the package
This moves the export of MinHighPriority from the mempool package to the
mining package. This should have been done when the priority
calculation code was moved to the mining package.
This modifies the block template generation code for mining such that
it takes a chain instance and params instead of requiring a fully
initialized blockManager instance.
Also, in preparation for being able to more easily separate the code, it
exposes and makes use of two new functions:
- BestSnapshot which returns the state snapshot from the underlying
chain instance
- TxSource which returns the underlying transaction source
This is a step towards being able to separate the mining code into its
own package. No functional change.
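As a rough sketch of what this enables, code that only needs read access
can go through the generator itself rather than the block manager (the
helper below is hypothetical; BestSnapshot and TxSource are the accessors
described above):

    import (
        "fmt"

        "github.com/btcsuite/btcd/mining"
    )

    // describeTemplateState is a hypothetical helper illustrating the newly
    // exposed accessors.
    func describeTemplateState(g *mining.BlkTmplGenerator) {
        best := g.BestSnapshot()            // state snapshot of the underlying chain
        descs := g.TxSource().MiningDescs() // transactions eligible for inclusion
        fmt.Printf("best height %d, %d mining candidates\n", best.Height, len(descs))
    }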
This introduces a cpuminerConfig type which contains the necessary
information to break the direct dependency on the main server instance.
This change is a step towards being able to separate the cpu miner into
its own subpackage. No functional change.
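A rough sketch of the shape such a config might take (the field names are
assumptions based on the description rather than the actual struct):

    import (
        "github.com/btcsuite/btcd/blockchain"
        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/mining"
        "github.com/btcsuite/btcutil"
    )

    // cpuminerConfig supplies the CPU miner with everything it previously
    // pulled directly from the server instance.
    type cpuminerConfig struct {
        // ChainParams identifies the network being mined.
        ChainParams *chaincfg.Params

        // BlockTemplateGenerator produces the block templates to solve.
        BlockTemplateGenerator *mining.BlkTmplGenerator

        // MiningAddrs are payout addresses for generated blocks.
        MiningAddrs []btcutil.Address

        // ProcessBlock and ConnectedCount are callbacks into the server so
        // the miner no longer needs a direct server reference.
        ProcessBlock   func(*btcutil.Block, blockchain.BehaviorFlags) (bool, error)
        ConnectedCount func() int32
    }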
This commit adds a new option to the mempool’s policy configuration
which determines which transaction versions should be accepted as
standard.
The default version set by the policy within the server is 2; this
allows accepting version 2 transactions in order to utilize the new
sequence locks feature.
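For illustration, the standardness check this option drives might look
roughly like the following, where maxTxVersion would come from the new
policy field (a sketch, not the exact implementation):

    import (
        "fmt"

        "github.com/btcsuite/btcd/wire"
    )

    // checkTransactionVersion rejects transactions whose version is outside
    // the range permitted by the configured policy.
    func checkTransactionVersion(tx *wire.MsgTx, maxTxVersion int32) error {
        if tx.Version > maxTxVersion || tx.Version < 1 {
            return fmt.Errorf("transaction version %d is not in the valid "+
                "range of 1-%d", tx.Version, maxTxVersion)
        }
        return nil
    }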
This commit adds a new function to the package: `LockTimeToSequence`.
The function is a simple utility function which aids the caller in
mapping a targeted time-based or block-based relative lock-time to the
appropriate sequence number.
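A minimal sketch of the mapping, following the BIP 68 encoding (the real
function may differ in its exact constants and signature):

    const (
        // sequenceLockTimeIsSeconds flags a relative lock-time as time-based.
        sequenceLockTimeIsSeconds = 1 << 22

        // sequenceLockTimeGranularity is log2(512): time-based relative
        // lock-times have a granularity of 512 seconds.
        sequenceLockTimeGranularity = 9
    )

    // LockTimeToSequence converts a relative lock-time to the corresponding
    // sequence number.  Block-based lock-times map directly, while time-based
    // lock-times are reduced to 512-second units and flagged as time-based.
    func LockTimeToSequence(isSeconds bool, locktime uint32) uint32 {
        if !isSeconds {
            return locktime
        }
        return sequenceLockTimeIsSeconds | locktime>>sequenceLockTimeGranularity
    }

For example, LockTimeToSequence(true, 1024) yields a sequence number that
requires the referenced output to be at least 1024 seconds (two 512-second
intervals) old, while LockTimeToSequence(false, 10) requires it to be at
least 10 blocks old.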
This commit introduces the concept of “sequence locks” borrowed from
Bitcoin Core for converting an input’s relative time-locks to an
absolute value based on a particular block for input maturity
evaluation.
A sequence lock is computed as the most distant maturity height/time
amongst all the referenced outputs within a particular transaction.
A transaction with sequence locks activated within any of its inputs
can *only* be included within a block if, from the point-of-view of that
block, the time-based or height-based maturity (whichever applies) for
all referenced inputs has been met.
A transaction with sequence locks can only be accepted to the mempool
if, from the point-of-view of the *next* block (yet to be found), all
referenced inputs within the transaction are mature.
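Roughly, the inclusion check reduces to the following sketch (the concrete
types in the code may differ):

    import "time"

    // SequenceLock is the computed maturity point for a transaction: the most
    // distant height and median time past among all of its referenced inputs.
    // A component that no input constrains would be set to a value that is
    // always satisfied.
    type SequenceLock struct {
        Seconds     int64 // earliest median time past at which all inputs mature
        BlockHeight int32 // earliest height at which all inputs mature
    }

    // sequenceLockActive reports whether a transaction with the given sequence
    // lock may be included in a block with the given height and median time
    // past.
    func sequenceLockActive(lock *SequenceLock, blockHeight int32, medianTimePast time.Time) bool {
        return lock.Seconds < medianTimePast.Unix() && lock.BlockHeight < blockHeight
    }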
This introduces a new type named BlkTmplGenerator which encapsulates the
various state needed to generate block templates.
This is useful since it means code that needs to generate block
templates can simply accept the generator rather than needing access to
all of the additional state which in turn will ultimately make it easier
to split the mining code into its own package.
This moves the priority-related code from the mempool package to the
mining package and also exports a new constant named UnminedHeight which
takes the place of the old unexported mempoolHeight.
Even though the mempool makes use of the priority code to make decisions
about what it will accept, priority really has to do with mining since
it influences which transactions will end up in a block. This change
also has the side effect of being a step towards enabling separation of
the mining code into its own package which, as previously mentioned,
needs access to the priority calculation code as well.
Finally, the mempoolHeight variable was poorly named since what it
really represents is the height used for a transaction that has not been
mined into a block yet. Renaming the variable to more accurately reflect
its purpose makes
it clear that it belongs in the mining package which also needs the
definition now as well since the priority calculation code relies on it.
This will also benefit an outstanding PR which needs access to the same
value.
This fixes a bug where a transaction would lose reference to other
transactions dependent on it when being considered for inclusion in a
new block template. The issue only occurs when the transaction being
considered triggers a change of priority queue ordering to ordering by
fee. It results in none of the dependent transactions being considered
for inclusion in the new block template.
This implements orphan expiration in the mempool such that any orphans
that have not had their ancestors materialize within 15 minutes of their
initial arrival time will be evicted, which in turn removes any other
orphans that attempted to redeem them.
In order to perform the evictions with reasonable efficiency, an
opportunistic scan interval of 5 minutes is used. That is to say that
there is not a hard deadline on the scan interval and instead it runs
when a new orphan is added to the pool if enough time has passed.
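Sketching the mechanism (this assumes the pool records each orphan's
expiration time and the time of the next allowed scan; names are
illustrative):

    import (
        "time"

        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcutil"
    )

    const (
        // orphanTTL is the maximum time an orphan waits for its ancestors;
        // an orphan's expiration is its arrival time plus orphanTTL.
        orphanTTL = time.Minute * 15

        // orphanExpireScanInterval is the minimum time between scans.
        orphanExpireScanInterval = time.Minute * 5
    )

    // orphanTx pairs an orphan with the time at which it should be expired.
    type orphanTx struct {
        tx         *btcutil.Tx
        expiration time.Time
    }

    // expireOrphans evicts every orphan whose ancestors have not materialized
    // in time.  It is invoked when a new orphan is added and only runs if the
    // scan interval has elapsed, so there is no timer and no hard deadline.
    // Eviction of the orphans that redeem an expired orphan is omitted here
    // for brevity.
    func expireOrphans(orphans map[chainhash.Hash]*orphanTx, nextScan *time.Time) int {
        now := time.Now()
        if now.Before(*nextScan) {
            return 0
        }
        numExpired := 0
        for hash, otx := range orphans {
            if now.After(otx.expiration) {
                delete(orphans, hash)
                numExpired++
            }
        }
        *nextScan = now.Add(orphanExpireScanInterval)
        return numExpired
    }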
The following is an example of running this code against the main
network for around 30 minutes:
23:05:34 2016-10-24 [DBG] TXMP: Expired 3 orphans (remaining: 254)
23:10:38 2016-10-24 [DBG] TXMP: Expired 112 orphans (remaining: 231)
23:15:43 2016-10-24 [DBG] TXMP: Expired 95 orphans (remaining: 206)
23:20:44 2016-10-24 [DBG] TXMP: Expired 90 orphans (remaining: 191)
23:25:51 2016-10-24 [DBG] TXMP: Expired 71 orphans (remaining: 191)
23:30:55 2016-10-24 [DBG] TXMP: Expired 70 orphans (remaining: 105)
23:36:19 2016-10-24 [DBG] TXMP: Expired 55 orphans (remaining: 107)
As can be seen from the above, without orphan expiration on this data
set, the orphan pool would have grown an additional 496 entries.
This introduces a new package to house tests which use the rpctest
package to programmatically drive end-to-end integration testing and
moves the existing RPC integration tests into the new package.
This modifies the way orphan removal and processing is done to more
aggressively remove orphans that can no longer be valid due to other
transactions being added or removed from the primary transaction pool.
The net effect of these changes is that the orphan pool will typically be
much smaller which greatly improves its effectiveness. Previously, it
would typically quickly reach the max allowed worst-case usage and
effectively stay there forever.
The following is a summary of the changes:
- Modify the map that tracks which orphans redeem a given transaction to
instead track by the specific outpoints that are redeemed
- Modify the various orphan removal and processing functions to accept
the full transaction rather than just its hash
- Introduce a new flag on removeOrphans which specifies whether or not
to remove the transactions that redeem the orphan being removed as
well, which is necessary since only some paths require it
- Add a new function named removeOrphanDoubleSpends that is invoked
whenever a transaction is added to the main pool and thus the outputs
it spends become concrete spends (see the sketch after this list)
- Introduce a new flag on maybeAcceptTransaction which specifies whether
or not duplicate orphans should be rejected since only some paths
require it
- Modify processOrphans as follows:
- Make use of the modified map
- Use the newly available flags and logic to work more strictly with tx
chains
- Recursively remove any orphans that also redeem any outputs redeemed
by the accepted transactions
- Several new tests to ensure proper functionality
- Removing an orphan that doesn't exist behaves properly both when there
is another orphan that redeems it and when there is not
- Removing orphans works properly with orphan chains per the new
remove redeemers flag
- Removal of multi-input orphans that double spend an output when a
concrete redeemer enters the transaction pool
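As a sketch of the new double-spend hook (written standalone here so as
not to assume the pool's internal field names; removeOrphan stands in for
the removal routine with its new remove-redeemers flag):

    import (
        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcd/wire"
        "github.com/btcsuite/btcutil"
    )

    // removeOrphanDoubleSpends evicts any orphan spending an outpoint that has
    // just become a concrete spend because acceptedTx entered the main pool.
    // Such orphans can never become valid, so they are removed along with the
    // orphans that redeem them.
    func removeOrphanDoubleSpends(acceptedTx *btcutil.Tx,
        orphansByPrev map[wire.OutPoint]map[chainhash.Hash]*btcutil.Tx,
        removeOrphan func(tx *btcutil.Tx, removeRedeemers bool)) {

        for _, txIn := range acceptedTx.MsgTx().TxIn {
            for _, orphan := range orphansByPrev[txIn.PreviousOutPoint] {
                removeOrphan(orphan, true)
            }
        }
    }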
This introduces a new pool membership test function to the mempool
testing infrastructure and refactors the tests to make use of it.
It is useful since it is common logic that is not only needed in the
existing tests, but will be needed by most mempool-related tests.
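A sketch of such a helper, assuming the pool's exported membership queries
(IsOrphanInPool, IsTransactionInPool, and HaveTransaction); the exact names
in the test harness may differ:

    import (
        "testing"

        "github.com/btcsuite/btcd/mempool"
        "github.com/btcsuite/btcutil"
    )

    // testPoolMembership asserts whether tx is tracked by the orphan pool, the
    // main transaction pool, and the combined have-transaction check.
    func testPoolMembership(t *testing.T, pool *mempool.TxPool, tx *btcutil.Tx,
        inOrphanPool, inTxPool bool) {

        txHash := tx.Hash()
        if got := pool.IsOrphanInPool(txHash); got != inOrphanPool {
            t.Fatalf("IsOrphanInPool: got %v, want %v", got, inOrphanPool)
        }
        if got := pool.IsTransactionInPool(txHash); got != inTxPool {
            t.Fatalf("IsTransactionInPool: got %v, want %v", got, inTxPool)
        }
        wantHave := inOrphanPool || inTxPool
        if got := pool.HaveTransaction(txHash); got != wantHave {
            t.Fatalf("HaveTransaction: got %v, want %v", got, wantHave)
        }
    }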
Avoid compatibility issues with software that relies on the behavior of
bitcoind's JSON-RPC implementation.
The JSON-RPC 1.0 spec defines that notifications must have their "id"
set to null and states that notifications do not have a response.
A JSON-RPC 2.0 notification is a request with "jsonrpc":"2.0" and
without an "id" member. The specification states that notifications
must not be responded to. JSON-RPC 2.0 permits the null value as a
valid request id, therefore such requests are not notifications.
Bitcoin Core serves requests with "id":null or even an absent "id", and
responds to such requests with "id":null in the response.
Btcd does not respond to any request without an "id" or with "id":null,
regardless of the indicated JSON-RPC protocol version.
In order to avoid compatibility issues with software relying on
Core's behavior, this commit implements "quirks mode" as follows:
- Quirks mode can be enabled via configuration (disabled by default)
- If no JSON-RPC version is indicated in the request, accept and
respond to requests with "id":null
- If no JSON-RPC version is indicated in the request, accept and
respond to requests without an "id" member
- In both cases above, use "id":null in the response
- Do not respond to requests without an "id" or with "id":null when a
JSON-RPC version is indicated in the request (process as notifications)
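Condensed into a predicate, the decision of whether to respond looks
roughly like the following sketch (not the actual implementation):

    // shouldRespond reports whether a response should be sent for a request
    // whose JSON-RPC version member is ver (empty when absent) and whose "id"
    // member is id (nil when absent or JSON null).
    func shouldRespond(quirksEnabled bool, ver string, id interface{}) bool {
        // Requests with a concrete id always receive a response.
        if id != nil {
            return true
        }
        // Without an id (or with a null id), only respond -- using "id":null
        // in the response -- when quirks mode is enabled and the request did
        // not indicate a JSON-RPC version.  Otherwise treat the request as a
        // notification.
        return quirksEnabled && ver == ""
    }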
This optimizes the way in which the mempool orphan map is limited in the
same way the server block manager maps were previously optimized.
Previously the code would read a cryptographically random value large
enough to construct a hash, find the first entry larger than that value,
and evict it.
That approach is quite inefficient and could easily become a
bottleneck when processing transactions due to the need to read from a
source such as /dev/urandom and all of the subsequent hash comparisons.
Luckily, strong cryptographic randomness is not needed here. The primary
intent of limiting the maps is to control memory usage with a secondary
concern of making it difficult for adversaries to force eviction of
specific entries.
Consequently, this changes the code to make use of the pseudorandom
iteration order of Go's maps along with the preimage resistance of the
hashing function to provide the desired functionality. It has
previously been discussed that the specific pseudorandom iteration order
is not guaranteed by the Go spec even though in practice that is how it
is implemented. This is not a concern however because even if the
specific compiler doesn't implement that, the preimage resistance of the
hashing function alone is enough.
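The eviction itself then reduces to something like the following sketch:

    import (
        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "github.com/btcsuite/btcutil"
    )

    // evictRandomOrphan removes a single entry from the orphan map by relying
    // on the pseudorandom iteration order of Go maps; no random value needs to
    // be read and no hash comparisons are performed.
    func evictRandomOrphan(orphans map[chainhash.Hash]*btcutil.Tx) {
        for hash := range orphans {
            delete(orphans, hash)
            break // remove one effectively random entry and stop
        }
    }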
The following is a before and after comparison of the function for both
speed and memory allocations:
benchmark old ns/op new ns/op delta
----------------------------------------------------------------
BenchmarkLimitNumOrphans 3727 243 -93.48%
benchmark old allocs new allocs delta
-----------------------------------------------------------------
BenchmarkLimitNumOrphans 4 0 -100.00%
This renames the mempool.Config.RelayNonStd option to AcceptNonStd which
more accurately describes its behavior since the mempool was refactored
into a separate package.
The reasoning for this change is that the mempool is not responsible for
relaying transactions (nor should it be). Its job is to maintain a pool
of unmined transactions that are validated according to consensus and
policy configuration options which are then used to provide a source of
transactions that need to be mined.
Instead, it is the server that is responsible for relaying transactions.
While it is true that the server code currently only relays transactions
that were accepted to the mempool, this does not necessarily have to
be the case. It would be entirely possible (and perhaps even a good
idea as something to do in the future) to separate the relay policy from
the mempool acceptance policy (and thus indirectly the mining policy).
Older nodes previously added the IP and port information to the address
manager which proved to be unreliable as an inbound connection from a
peer didn't necessarily mean the peer itself accepted inbound
connections.
This also fixes a bug where the peer package was incorrectly sending
the peer's services as its own.
This commit introduces package connmgr which contains connection
management related functionality.
The following is an overview of the features the package provides:
- Maintain fixed number of outbound connections
- Optional connect-only mode
- Retry persistent connections with increasing back-off
- Source peers from DNS seeds
- Use Tor to resolve DNS
- Dynamic ban scores
- Test coverage
In addition, btcd has been refactored to make use of the new package by
extending the connection manager to work with the server to source and
maintain peer connections. The following is a broad overview of the
changes to integrate the package:
- Simplify peer state by removing pending, retry peers
- Refactor to remove retries which are now handled by connmgr
- Use callback to add addresses sourced from the DNS seed
Finally, the following connection-related things have been improved as a
part of this refactor:
- Fixes 100% cpu usage when network is down (#129)
- Fixes issues with max peers (#577)
- Simplify outbound peer connections management
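A rough sketch of how a caller might drive the package; the configuration
fields shown reflect the feature list above and may not match the exported
API exactly:

    import (
        "net"
        "time"

        "github.com/btcsuite/btcd/connmgr"
    )

    func startConnMgr() (*connmgr.ConnManager, error) {
        cm, err := connmgr.New(&connmgr.Config{
            TargetOutbound: 8,               // maintain a fixed number of outbound connections
            RetryDuration:  time.Second * 5, // base duration for the increasing back-off
            GetNewAddress: func() (net.Addr, error) {
                // Normally sourced from the address manager / DNS seeds.
                return net.ResolveTCPAddr("tcp", "127.0.0.1:8333")
            },
            Dial: func(addr net.Addr) (net.Conn, error) {
                return net.Dial(addr.Network(), addr.String())
            },
            OnConnection: func(req *connmgr.ConnReq, conn net.Conn) {
                // Hand the connection to the server to create and negotiate
                // a peer.
            },
        })
        if err != nil {
            return nil, err
        }
        cm.Start()
        return cm, nil
    }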
This modifies the tests for the recently-added NullDataScript function in
several ways in an effort to make them more consistent with the tests in
the rest of the code base and to improve/correct the logic:
- Use the hexToBytes and mustParseShortForm functions
- Consistently format the test errors
- Replace the valid bool flag with an expected error and test against it
- Ensure the returned script type is the expected type in all cases
This adds a new function named NullDataScript to the txscript package that
returns a provably-pruneable OP_RETURN script with the provided data. The
function will return an error if the provided data is larger than the
maximum allowed length for a nulldata script to be considered standard.
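For example (a minimal sketch):

    import (
        "fmt"

        "github.com/btcsuite/btcd/txscript"
    )

    func buildNullDataOutput() {
        // Build a provably-pruneable OP_RETURN output script carrying the data.
        script, err := txscript.NullDataScript([]byte("some arbitrary data"))
        if err != nil {
            // Returned when the data exceeds the maximum length allowed for a
            // standard nulldata script.
            fmt.Println(err)
            return
        }
        fmt.Printf("nulldata script: %x\n", script)
    }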