When an inv is to be sent to the server for relaying, the sender
already has access to the underlying data. So, instead of requiring
the relay to look up the data by hash, the data is now included
directly in the relay request message.
Only two operations are performed with this data structure: adding to
the back and removing from the front. Because middle inserts and
deletions are never needed, a linked list offers no benefit here and
results in overall worse performance due to an extra allocation for
each element's node, worse cache locality, and the runtime cost of
boxing/unboxing each item during accesses.
On top of the performance gains, a slice is more type safe since it is
a true generic data structure, making it impossible to insert or
access an element with the wrong type.
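As a rough illustration (not the actual btcd code), the access pattern maps onto a plain slice like this:

```go
package main

import "fmt"

func main() {
	// Items are only ever appended to the back of the slice...
	queue := make([]string, 0, 16)
	queue = append(queue, "inv1", "inv2", "inv3")

	// ...and removed from the front, so middle inserts and deletions
	// never happen and the contiguous backing array stays cache friendly.
	next := queue[0]
	queue = queue[1:]

	fmt.Println("dequeued:", next, "remaining:", len(queue))
}
```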
The mempool's MaybeAcceptTransaction methods have also been modified
to return a slice of transaction hashes referenced by the transaction
inputs which are unknown (totally spent or never seen). While this is
currently used to include the first hash in a ProcessTransaction error
message if inserting orphans is not allowed, it may also be used in
the future to request orphan transactions from peers.
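A hedged sketch of how a caller might use the returned hashes; the stand-in pool type and hash strings below are purely illustrative, not the real mempool API:

```go
package main

import (
	"errors"
	"fmt"
)

// txPool is a hypothetical stand-in for the real mempool type.
type txPool struct {
	known map[string]bool // transactions already known, keyed by hash
}

// maybeAcceptTransaction mirrors the described behavior: it returns the
// hashes referenced by the transaction inputs which are unknown.
func (p *txPool) maybeAcceptTransaction(inputHashes []string) ([]string, error) {
	var missing []string
	for _, hash := range inputHashes {
		if !p.known[hash] {
			missing = append(missing, hash)
		}
	}
	return missing, nil
}

func main() {
	pool := &txPool{known: map[string]bool{"aaaa": true}}
	missing, err := pool.maybeAcceptTransaction([]string{"aaaa", "bbbb"})
	if err != nil {
		fmt.Println(err)
		return
	}
	if len(missing) > 0 {
		// The transaction is an orphan: the first unknown parent can be
		// referenced in an error, and the full slice could later be used
		// to request the missing transactions from peers.
		fmt.Println(errors.New("references unknown transaction " + missing[0]))
	}
}
```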
This commit uses the new MedianTimeSource API in btcchain to create a
median time source which is stored in the server and is fed time samples
from all remote nodes that are connected. It also modifies all call sites,
which now require the time source to be passed in.
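A minimal sketch of the described flow; the import path and method names reflect my reading of the btcchain API and should be treated as illustrative:

```go
package main

import (
	"fmt"
	"time"

	"github.com/conformal/btcchain"
)

func main() {
	// One median time source is created and stored in the server...
	timeSource := btcchain.NewMedianTime()

	// ...and is fed a time sample from each connected remote node.  The
	// source ID prevents a single peer from adding multiple samples.
	timeSource.AddTimeSample("peer 192.0.2.1:8333", time.Now())

	// Call sites that previously used the system time now take the time
	// source and consult its network-adjusted time instead.
	fmt.Println(timeSource.AdjustedTime())
}
```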
This change removes transactions contained in a newly connected block
from the orphan pool if they exist there. Additionally, any orphan
transactions that are no longer orphans as a result are moved
to the mempool and inv'd to the currently connected peers.
This commit implements reject handling as defined by BIP0061 and bumps the
maximum supported protocol version to 70002 accordingly.
As a part of supporting this, a new error type named RuleError has been
introduced which encapsulates an underlying error that could be one of
the existing TxRuleError or btcchain.RuleError types.
This allows a single high level type assertion to be used to determine if
the block or transaction was rejected due to a rule error or due to an
unexpected error. Meanwhile, an appropriate reject error can be created
from the error by pulling the underlying error out and using it.
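A minimal sketch of the assertion pattern, with hypothetical shapes for the error types:

```go
package main

import (
	"errors"
	"fmt"
)

// RuleError encapsulates an underlying rule violation, which could be one of
// the existing TxRuleError or btcchain.RuleError types (shape is hypothetical).
type RuleError struct {
	Err error
}

func (e RuleError) Error() string {
	if e.Err == nil {
		return "<nil>"
	}
	return e.Err.Error()
}

func main() {
	var err error = RuleError{Err: errors.New("transaction already exists")}

	// A single high-level type assertion distinguishes rule violations
	// from unexpected errors...
	if rErr, ok := err.(RuleError); ok {
		// ...and the underlying error is pulled out to build the
		// appropriate reject message.
		fmt.Println("reject (BIP0061):", rErr.Err)
	}
}
```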
Also, a check for a minimum protocol version of 209 has been added.
Closes #133.
This commit implements the long polling portion of the getblocktemplate
RPC as defined by BIP0022. Per the specification, each block template is
returned with a longpollid which can be used in a subsequent
getblocktemplate request to keep the connection open until the server
determines the block template associated with the longpollid should be
replaced with a new one.
This is work towards #124.
This commit implements the non-optional and template tweaking support for
the getblocktemplate RPC as defined by BIP0022. This implementation does
not yet include long polling support.
This is work towards #124.
BitcoinJ, and possibly other wallets, don't follow the spec of sending an
inventory message and allowing the remote peer to decide whether or not
they want to request the transaction via a getdata message. Unfortunately,
the reference implementation permits unrequested data, so it has allowed
wallets that don't follow the spec to proliferate.
While this is not ideal, this commit removes the functionality which
disconnects peers for sending unsolicited transactions, in order to
provide interoperability.
Now that the ProcessBlock function returns whether or not the block was an
orphan, the code which requests the parent blocks from the peer that sent
the orphan has been moved to directly after the ProcessBlock call.
This makes the handling more obvious and has allowed removal of the
blockPeer map which was kept so the notification handler could identify
which peer to request the blocks from.
ok @jrick
This commit implements a built-in concurrent CPU miner that can be enabled
with the combination of the --generate and --miningaddr options. The
--blockminsize, --blockmaxsize, and --blockprioritysize configuration
options which already existed prior to this commit control the block
template generation and hence affect blocks mined via the new CPU miner.
The following is a quick overview of the changes and design:
- Starting btcd with --generate and no addresses specified via
--miningaddr will give an error and exit immediately
- Makes use of multiple worker goroutines which independently create block
templates, solve them, and submit the solved blocks
- The default number of worker threads is based on the number of
processor cores in the system and can be dynamically changed at
run-time
- There is a separate speed monitor goroutine used to collate periodic
updates from the workers to calculate the overall hashing speed (see
the sketch after this list)
- The current mining state, number of workers, and hashes per second can
be queried
- The sample-btcd.conf file has been updated to include the coin
generation (mining) settings
- Updated doc.go for the new command line options
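A rough sketch of the worker/speed monitor split mentioned above; every name here is hypothetical and the hashing work is faked:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// speedMonitor collates periodic hash-count updates from the workers to
// calculate the overall hashing speed.
func speedMonitor(updates <-chan uint64, done <-chan struct{}) {
	var hashes uint64
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case n := <-updates:
			hashes += n
		case <-ticker.C:
			fmt.Printf("hash speed: %.2f kilohashes/s\n", float64(hashes)/10/1000)
			hashes = 0
		case <-done:
			return
		}
	}
}

func main() {
	updates := make(chan uint64)
	done := make(chan struct{})
	go speedMonitor(updates, done)

	// Worker goroutines independently create block templates, solve them,
	// and periodically report how many hashes they have attempted.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			updates <- 50000 // stand-in for real template solving
		}()
	}
	wg.Wait()
	close(done)
}
```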
In addition, the old --getworkkey option is now deprecated in favor of the
new --miningaddr option. This was changed for a few reasons:
- There is no reason to have a separate list of keys for getwork and CPU
mining
- getwork is deprecated and will be going away in the future so that means
the --getworkkey flag will also be going away
- Having the word 'key' in the option can be confused with wanting a
private key, while --miningaddr makes it a little more clear that it is
an address that is required
Closes #137.
Reviewed by @jrick.
This change modifies the params struct to embed a *btcnet.Params,
removing the old parameter fields that are handled by the btcnet
package.
Hardcoded network checks have also been removed in favor of modifying
behavior based on the current active net's parameters.
Not all library packages, notably btcutil and btcchain, have been
updated to use btcnet yet, but with this change, each package can be
updated one at a time since the active net's btcnet.Params are
available at each callsite.
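A sketch of the embedding described above; the import path and the extra field are illustrative:

```go
package main

import (
	"fmt"

	"github.com/conformal/btcnet"
)

// params embeds *btcnet.Params so all of the shared network parameters are
// promoted onto the struct, while btcd-specific settings (the rpcPort field
// here is a hypothetical example) live alongside them.
type params struct {
	*btcnet.Params
	rpcPort string
}

func main() {
	activeNetParams := params{Params: &btcnet.MainNetParams, rpcPort: "8334"}

	// Behavior keys off the active net's parameters instead of hardcoded
	// network checks.
	fmt.Println(activeNetParams.Name, activeNetParams.rpcPort)
}
```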
ok @davecgh
This commit updates the block manager's local chain state when a block is
processed by submitting it directly to the block manager, as opposed to
only when it comes from the network.
Also, it modifies the submitblock RPC to use the block manager's concurrent
safe ProcessBlock instead of the unsafe btcchain version.
The combination of these two fixes ensures the internal block manager chain
state is properly synced with the actual btcchain state regardless of how
blocks are added.
This commit implements a rebroadcast handler which deals with
rebroadcasting inventory at a random time interval between 0 and 30
minutes. It then uses the new rebroadcast logic to ensure transactions
which were submitted via the sendrawtransaction RPC are rebroadcast until
they make it into a block.
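A minimal sketch of the random interval selection; the surrounding handler logic is hypothetical:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Choose a random rebroadcast delay between 0 and 30 minutes.
	delay := time.Duration(rand.Int63n(int64(30 * time.Minute)))
	fmt.Println("next rebroadcast in", delay)

	timer := time.NewTimer(delay)
	defer timer.Stop()

	// When the timer fires, the handler would re-inv any transactions
	// that have not yet made it into a block, then pick a fresh random
	// interval.  (Receive commented out so the example returns quickly.)
	// <-timer.C
}
```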
Closes #99.
This commit adds a new function named NewBlockTemplate along with
supporting infrastructure which is part of the core functionality needed
to support mining.
In particular the function creates a new block template which contains a
fully populated block with a zero nonce that is ready to be solved as well
as additional information regarding the fees and number of signature
operations for each transaction included in the block. The specific
transaction selection logic mirrors the reference implementation.
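A hedged sketch of the shape such a template might take; the field names and types are stand-ins, not the real btcd definitions:

```go
package main

import "fmt"

// blockTemplate pairs a fully populated block (zero nonce, ready to solve)
// with per-transaction fee and signature operation information.
type blockTemplate struct {
	nonce       uint32  // zero until a miner solves the block
	fees        []int64 // fee paid by each transaction in the block
	sigOpCounts []int64 // signature operations per transaction
}

func main() {
	template := blockTemplate{
		fees:        []int64{0, 1500, 1000}, // coinbase first, fee 0
		sigOpCounts: []int64{100, 2, 2},
	}
	fmt.Printf("%d transactions in template\n", len(template.fees))
}
```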
Various cleanup, optimizations, and comment suggestions provided by
@owainga. Also contains some naming suggestions and comment fixes from
@flammit.
Rather than updating the new chain state with the hash and height of the
block that was just processed, query the database for the best block.
This is needed because the block that was just processed might be a side
chain block or have caused a reorg.
This commit introduces a chain state that is updated as blocks are
processed into the block chain instance associated with the block manager.
This has been done because btcchain is currently not safe for concurrent
access and the block manager is typically quite busy processing blocks and
inventory. This approach allows fast access to most chain information in
a concurrent safe fashion.
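A minimal sketch of such a cached chain state, with hypothetical field names:

```go
package main

import (
	"fmt"
	"sync"
)

// chainState caches the best chain information behind a mutex so it can be
// read quickly from other goroutines without calling into btcchain.
type chainState struct {
	sync.Mutex
	newestHash   string
	newestHeight int64
}

// Best returns the cached best block hash and height in a concurrent safe
// fashion.
func (c *chainState) Best() (string, int64) {
	c.Lock()
	defer c.Unlock()
	return c.newestHash, c.newestHeight
}

func main() {
	state := &chainState{newestHash: "000000000000...", newestHeight: 300000}
	hash, height := state.Best()
	fmt.Println(hash, height)
}
```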
Rather than having a separate query channel for the block manager, use the
same channel so the block handler acts as a pure FIFO queue. This
prevents possible starvation of query related messages.
ok @owainga
This change modifies the RPC server's notification manager from a
struct with requests, protected by a mutex, to two goroutines. The
first maintains a queue of all notifications and control requests
(registering/unregistering notifications), while the second reads from
the queue and processes notifications and requests one at a time.
Previously, to prevent slowing down block and mempool processing, each
notification would be handled by spawning a new goroutine. This led
to cases where notifications would end up being sent to clients in a
different order than they were created. Adding a queue preserves the
order of notifications originating from the same goroutine, while also
avoiding any slowdown of processing while waiting for notifications to
be processed and sent.
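A minimal sketch of the two-goroutine design: one goroutine owns the queue so producers never block, and a consumer drains it in order (all names hypothetical):

```go
package main

import "fmt"

// queueNotifications buffers every notification in arrival order so
// producers never block and ordering is preserved.
func queueNotifications(in <-chan string, out chan<- string) {
	var pending []string
	for in != nil || len(pending) > 0 {
		// Only offer the head of the queue when there is one; sending
		// on a nil channel blocks, which disables that select case.
		var sendCh chan<- string
		var next string
		if len(pending) > 0 {
			sendCh = out
			next = pending[0]
		}
		select {
		case n, ok := <-in:
			if !ok {
				in = nil
				continue
			}
			pending = append(pending, n)
		case sendCh <- next:
			pending = pending[1:]
		}
	}
	close(out)
}

func main() {
	in := make(chan string)
	out := make(chan string)
	go queueNotifications(in, out)

	go func() {
		in <- "block connected"
		in <- "tx accepted"
		close(in)
	}()

	// The second goroutine processes notifications one at a time, in
	// the order they were created.
	for n := range out {
		fmt.Println("notify:", n)
	}
}
```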
ok @davecgh
This commit refactors the entire websocket client code to resolve several
issues with the previous implementation. Note that this commit does not
change the public API for websockets. It only consists of internal
improvements.
The following are the major issues which have been addressed:
- A slow websocket client could impede notifications to all clients
- Long-running operations such as rescans would block all other requests
until they had completed
- The above two points taken together could lead to apparent hangs since
the client doing the rescan would eventually run out of channel buffer
and block the entire group of clients until the rescan completed
- Disconnecting a websocket during certain operations could lead to a hang
- Stopping the rpc server with operations under way could lead to a hang
- There were no limits to the number of websocket clients that could
connect
The following is a summary of the major changes:
- The websocket code has been split into two entities: a
connection/notification manager and a websocket client
- The new connection/notification manager acts as the entry point from
the rest of the subsystems to feed data which potentially needs to
notify clients
- Each websocket client now has its own instance of the new websocket
client type which controls its own lifecycle
- The data flow has been completely redesigned to closely resemble the
peer data flow
- Each websocket now has its own long-lived goroutines for input, output,
and queuing of notifications
- Notifications use the new notification queue goroutine along with
queueing to ensure they don't block on stalled or slow clients
- There is a new infrastructure for asynchronously executing long-running
commands such as a rescan while still allowing the faster operations to
continue to be serviced by the same client
- Since long-running operations now run asynchronously, they have been
limited to one at a time (see the sketch after this list)
- Added a limit of 10 websocket clients. This is hard coded for now, but
will be made configurable in the future
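The sketch referenced above: a buffered channel of size one acts as a semaphore so long-running operations execute asynchronously but only one at a time (hypothetical, not the actual implementation):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Acquiring the single slot blocks while another long-running
	// operation (such as a rescan) is in flight.
	longRunning := make(chan struct{}, 1)

	var wg sync.WaitGroup
	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			longRunning <- struct{}{}        // acquire the slot
			defer func() { <-longRunning }() // release it when done
			fmt.Println("long-running op", id, "executing")
		}(i)
	}
	wg.Wait()
}
```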
Taken together, these changes make the code far easier to reason about and
update, as well as solving the aforementioned issues.
Further optimizations to improve performance are possible in regards to
the way the connection/notification manager works, however this commit
already contains a ton of changes, so they are being left for another
time.
Changed mempool.MaybeAcceptTransaction to accept an additional parameter
to differentiate between new transactions and those added from
disconnected blocks.
Added new fields to requestContexts to indicate which clients want to
receive all new transaction notifications.
Added NotifyForNewTx to rpcServer to deliver the appropriate transaction
notifications.
Rather than using a dedicated channel for the sync peer request and reply,
use a single query channel that accepts a query type as well as a reply
channel. This will allow other queries to be added in the future without
the various queries being racy.
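A minimal sketch of the pattern, with hypothetical message names:

```go
package main

import "fmt"

// getSyncPeerMsg is one query type carried over the single query channel;
// the reply channel lets the caller wait for the answer.
type getSyncPeerMsg struct {
	reply chan string
}

func main() {
	query := make(chan interface{})

	// The handler goroutine services every query type from one channel,
	// so new queries can be added later without the handling being racy.
	go func() {
		for q := range query {
			switch msg := q.(type) {
			case getSyncPeerMsg:
				msg.reply <- "peer 192.0.2.1:8333"
			}
		}
	}()

	reply := make(chan string)
	query <- getSyncPeerMsg{reply: reply}
	fmt.Println("sync peer:", <-reply)
}
```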
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and had a few less than ideal
characteristics. This commit improves the headers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover, which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be, as
merely keeping track of the hashes and heights is enough to prove they
properly link together and that the checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered, which
meant it was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior to each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between two checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnects, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to the reduced memory usage from only keeping information
about headers between two checkpoints, the only information now kept in
memory about the headers is the hash and height rather than the entire
header (see the sketch below)
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
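The sketch referenced above: only a hash and height need to be tracked per header to prove the chain links together and matches the checkpoints (names are hypothetical):

```go
package main

import "fmt"

// headerNode is all that needs to be kept in memory per downloaded header.
type headerNode struct {
	height int64
	hash   string
}

func main() {
	// Headers are only tracked from the current known height up to the
	// next checkpoint, so memory is bounded by the maximum distance
	// between two checkpoints rather than the chain length.
	headers := []headerNode{
		{height: 267187, hash: "00000000000000..."},
		{height: 267188, hash: "00000000000000..."},
	}
	fmt.Println("tracking", len(headers), "headers")
}
```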