Ordinarily, getwork will return an error if btcd is not connected to any
other peers. This commit relaxes that requirement when running in
regression test mode since it is useful for development purposes.
While here, also improve the check which returns an error from getwork
when the chain is not current so it is skipped when the best chain height
is zero, since the code never believes it is current at height 0.
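A minimal sketch of the relaxed check; the config flag and server accessor below are hypothetical stand-ins for the btcd pieces involved:

```go
package rpcserver

import "errors"

// Hypothetical stand-ins for the relevant btcd config and server types.
type config struct {
	RegressionTest bool
}

type server struct {
	connectedCount int
}

// ConnectedCount returns the number of currently connected peers.
func (s *server) ConnectedCount() int { return s.connectedCount }

var errClientNotConnected = errors.New("btcd is not connected")

// checkGetWorkConnectivity enforces the getwork connectivity requirement,
// but skips it in regression test mode so mining can be exercised during
// development with no peers.
func checkGetWorkConnectivity(cfg *config, s *server) error {
	if !cfg.RegressionTest && s.ConnectedCount() == 0 {
		return errClientNotConnected
	}
	return nil
}
```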
Along the same lines as the previous commit, the RPCs that return
serialized data structures should use the max protocol version btcd
supports as opposed to the maximum protocol version btcwire supports.
The getinfo RPC should return the max protocol version btcd supports as
opposed to the maximum protocol version btcwire supports. Currently they
are both the same value, so there is no issue. However, they will not
always be the same.
This commit updates the block manager's local chain state when a block is
processed by submitting it directly to the block manager, as opposed to
only when it comes from the network.
Also, it modifies the submitblock RPC to use the concurrency-safe block
manager ProcessBlock instead of the unsafe btcchain version.
The combination of these two fixes ensures the internal block manager chain
state is properly synced with the actual btcchain state regardless of how
blocks are added.
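A sketch of the channel-based pattern that makes the block manager's ProcessBlock safe to call concurrently; the type names are illustrative rather than btcd's exact definitions:

```go
package blockmanager

// Block is a stand-in for *btcutil.Block in this sketch.
type Block struct{}

// processBlockMsg asks the block manager goroutine to process a block and
// carries a reply channel so the caller can wait for the result.
type processBlockMsg struct {
	block *Block
	reply chan processBlockResponse
}

type processBlockResponse struct {
	isOrphan bool
	err      error
}

type blockManager struct {
	msgChan chan interface{}
}

// ProcessBlock hands the block to the block manager's single goroutine,
// which owns all btcchain access and updates the local chain state the
// same way for submitted blocks as for blocks from the network.
func (bm *blockManager) ProcessBlock(block *Block) (bool, error) {
	reply := make(chan processBlockResponse, 1)
	bm.msgChan <- processBlockMsg{block: block, reply: reply}
	response := <-reply
	return response.isOrphan, response.err
}
```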
This change fixes rescan to include transactions that pay to the
pubkey for a rescanned pubkey hash address. This behavior was lost
when the rescan was optimized for specific types of the
btcutil.Address interface.
ok @davecgh
This commit correctly sets the error in the marshalled reply if it is
already a *btcjson.Error. Previously it would only set the error if it
was not of that type which led to some RPC results showing no error when
they actually had one.
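A minimal sketch of the corrected logic, with a local Error type standing in for *btcjson.Error:

```go
package rpcserver

import "fmt"

// Error mirrors the shape of btcjson.Error for this sketch.
type Error struct {
	Code    int
	Message string
}

func (e *Error) Error() string { return e.Message }

// replyError picks the error to embed in the marshalled reply: an error
// that is already a *Error is used as-is rather than being dropped, and
// anything else is wrapped so the reply never silently loses the error.
func replyError(err error) *Error {
	if jsonErr, ok := err.(*Error); ok {
		return jsonErr
	}
	return &Error{Code: -1, Message: fmt.Sprintf("internal error: %v", err)}
}
```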
In Discover, the response was lowercased for comparison. However, this
caused a 404 Not Found error when fetching the URL provided by the
Location header if the URL contained uppercase characters.
ok @owainga
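A sketch of the fix, assuming the SSDP response is handled as a plain string: lowercase a copy only to locate the header, then slice the value out of the original text so the URL's case is preserved:

```go
package upnp

import "strings"

// extractLocation finds the Location header case-insensitively but returns
// the value from the original response, preserving the URL's case for the
// follow-up fetch.
func extractLocation(response string) (string, bool) {
	lower := strings.ToLower(response)
	idx := strings.Index(lower, "location:")
	if idx < 0 {
		return "", false
	}
	rest := response[idx+len("location:"):]
	if end := strings.IndexAny(rest, "\r\n"); end >= 0 {
		rest = rest[:end]
	}
	return strings.TrimSpace(rest), true
}
```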
In practice, the races caused by not protecting these quite simply didn't
matter; they couldn't actually cause any damage whatsoever. However, I
am sick of hearing about these essentially false positives whenever
someone runs the race detector (yes, I know the race detector has no
false positives, but this was effectively harmless).
verified to shut the detector up by dhill.
If we switch the Knuth shuffle to the version that swaps each element
with an element between it and the end of the array, then once we have
selected the number of elements we need, they won't change later in the
algorithm. Terminating at that point means that we do at most 23% of the
length of the array worth of random swaps.
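A minimal sketch of the early-terminating shuffle (the element type is illustrative):

```go
package addrmgr

import "math/rand"

// partialShuffle performs the Knuth (Fisher-Yates) shuffle but stops after
// the first count positions are filled.  Each position i is swapped with a
// random element in [i, len(addrs)), so positions before i never change
// again and the remaining swaps can be skipped entirely.
func partialShuffle(addrs []string, count int) []string {
	if count > len(addrs) {
		count = len(addrs)
	}
	for i := 0; i < count; i++ {
		j := i + rand.Intn(len(addrs)-i)
		addrs[i], addrs[j] = addrs[j], addrs[i]
	}
	return addrs[:count]
}
```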
We make ka.na immutable in the address manager. Whenever we would update
the structure, we replace it with a new copy instead. This beats making a
copy of all addresses once per getaddr command (the max is just over 23000
addresses we would be copying, compared to at most 2000 copies on a new
getaddr that has all the addresses we know with newer dates).
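A sketch of the copy-on-write discipline, with a stand-in NetAddress type in place of the btcwire structure:

```go
package addrmgr

// NetAddress stands in for the btcwire network address structure.
type NetAddress struct {
	Timestamp int64
	Services  uint64
}

type knownAddress struct {
	na *NetAddress // treated as immutable once published
}

// updateTimestamp replaces the shared *NetAddress with a modified copy
// rather than mutating it in place, so readers holding the old pointer
// (such as an in-flight getaddr response) never observe a partial update
// and no bulk copy of every address is needed.
func (ka *knownAddress) updateTimestamp(ts int64) {
	naCopy := *ka.na
	naCopy.Timestamp = ts
	ka.na = &naCopy
}
```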
On unknown inventory types, handleGetDataMsg would loop forever.
After fixing that, if a getdata request only had unknown inventory
types, it would block forever.
ok @davecgh
Copying the RIPEMD160 after SHA256 hash result into a new stack array
to be used as a map lookup key can be quite expensive, and this should
be avoided if possible on intensive tasks such as rescans. This
change takes advantage of the new Hash160 methods of the
AddressPubKeyHash and AddressScriptHash types to use the address's
underlying hash array directly, rather than creating a copy from the
ScriptAddress result.
Unfortunately, for AddressPubKey, ScriptAddress may return either a
byte slice of len 33 or 65 depending on whether the pubkey is
compressed or not, so no such straightforward optimization is
possible.
As a result of this change, I have seen rescans perform roughly 3.5x
faster than before.
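A minimal sketch of the map-key pattern this enables; the address type below mirrors the btcutil accessor rather than reproducing it:

```go
package rescan

// AddressPubKeyHash stands in for the btcutil type of the same name.
type AddressPubKeyHash struct {
	hash [20]byte
}

// Hash160 returns a pointer to the underlying hash array, as the btcutil
// accessor does.
func (a *AddressPubKeyHash) Hash160() *[20]byte {
	return &a.hash
}

// lookup uses the address's own hash array as the map key, avoiding the
// copy from the ScriptAddress byte slice into a fresh stack array.
func lookup(addrs map[[20]byte]bool, addr *AddressPubKeyHash) bool {
	return addrs[*addr.Hash160()]
}
```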
The websocket extension command to register for notifications when a new
transaction has been accepted into the memory pool, and the resulting
notifications, have been renamed. This commit catches up to that change.
Since a chain verification can take a long time depending on the
parameters, this commit adds a log print to the RPC server at the info
level stating how many blocks are being verified and at what level.
The logic was also slightly modified so the number of blocks being checked
can easily be calculated and shown.
This commit modifies peers to use a max protocol version that is specified
as a constant in the peer code as opposed to the btcwire.ProtocolVersion
constant.
This allows btcwire to be updated to support new protocol versions without
causing peers to claim they support a protocol version which they actually
don't.
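A minimal sketch of the idea; the constant's value here is illustrative, not the version btcd actually pins:

```go
package peer

// maxProtocolVersion is the highest protocol version this peer code
// implements.  Pinning it here instead of using btcwire.ProtocolVersion
// means btcwire can add newer versions without peers advertising support
// they do not have.
const maxProtocolVersion = 70001

// negotiatedVersion returns the protocol version to use with a remote
// peer: the lesser of what we implement and what the remote advertises.
func negotiatedVersion(remoteVersion uint32) uint32 {
	if remoteVersion < maxProtocolVersion {
		return remoteVersion
	}
	return maxProtocolVersion
}
```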
This change periodically (about every 10 seconds) notifies the
connected websocket client of the height of the last processed block
as part of the rescan. This enables clients to listen for the
notification and keep track of how much progress a rescan has made
even without any results being found. If the websocket connection is
lost during the rescan, on reconnect, clients may safely start over at
the last notified height.
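One way such periodic progress notifications can be structured, with a hypothetical channel feeding processed heights and a notify callback standing in for the websocket notification:

```go
package rpcserver

import "time"

// rescanProgress forwards the height of the most recently processed block
// to the client roughly every 10 seconds until the heights channel closes.
func rescanProgress(heights <-chan int32, notify func(height int32)) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	var lastProcessed int32
	for {
		select {
		case height, ok := <-heights:
			if !ok {
				return // rescan finished
			}
			lastProcessed = height
		case <-ticker.C:
			// Tell the client how far the rescan has progressed so
			// it can safely resume from here after a disconnect.
			notify(lastProcessed)
		}
	}
}
```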
This commit cleans up and moves a couple of comments in the recent pull
request which implements a rebroadcast handler (#114) in order to avoid
discussing internal state in the exported function comment.
How a function actually accomplishes the stated functionality is not
something that a caller is concerned with. The details about the internal
state are better handled with comments inside the function body.
This commit implements a rebroadcast handler which deals with
rebroadcasting inventory at a random time interval between 0 and 30
minutes. It then uses the new rebroadcast logic to ensure transactions
which were submitted via the sendrawtransaction RPC are rebroadcast until
they make it into a block.
Closes #99.
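A sketch of a rebroadcast loop along these lines; the channel-based API and string inventory keys are hypothetical stand-ins for how inventory is tracked:

```go
package server

import (
	"math/rand"
	"time"
)

// rebroadcastHandler re-announces pending inventory at a random interval
// between 0 and 30 minutes.  Inventory stays in the pending set until it
// is removed, e.g. once the transaction makes it into a block.
func rebroadcastHandler(addInv, removeInv <-chan string, broadcast func([]string)) {
	pending := make(map[string]struct{})
	timer := time.NewTimer(randomDelay())

	for {
		select {
		case inv := <-addInv:
			pending[inv] = struct{}{}
		case inv := <-removeInv:
			delete(pending, inv)
		case <-timer.C:
			invs := make([]string, 0, len(pending))
			for inv := range pending {
				invs = append(invs, inv)
			}
			broadcast(invs)
			timer.Reset(randomDelay())
		}
	}
}

// randomDelay returns a uniformly random duration in [0, 30) minutes.
func randomDelay() time.Duration {
	return time.Duration(rand.Int63n(int64(30 * time.Minute)))
}
```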
Rather than using the deprecated TxShas function on a btcutil.Block,
convert handleGetBlock to use the newer preferred method of ranging over
the Transactions to obtain the cached hash of each transaction.
This is a little more efficient since it avoids creating and caching an
extra slice to hold the hashes on top of the hash already cached with
each transaction.
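A minimal sketch of the preferred iteration, with stand-in types in place of btcutil.Block and btcutil.Tx:

```go
package rpcserver

// Tx stands in for btcutil.Tx, which caches its hash after the first
// computation.
type Tx struct {
	sha string
}

// Sha returns the transaction's cached hash.
func (t *Tx) Sha() string { return t.sha }

// Block stands in for btcutil.Block.
type Block struct {
	transactions []*Tx
}

// Transactions returns the block's transactions.
func (b *Block) Transactions() []*Tx { return b.transactions }

// txHashes gathers each transaction's cached hash by ranging over the
// block's transactions, avoiding the extra hash slice the deprecated
// TxShas accessor would have built and cached on the block.
func txHashes(block *Block) []string {
	hashes := make([]string, 0, len(block.Transactions()))
	for _, tx := range block.Transactions() {
		hashes = append(hashes, tx.Sha())
	}
	return hashes
}
```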