This change allows map lookups using address hashes (which are returned
as []byte) instead of either copying the hash into an array or doing a
bytes.Equal(). A naive range over a map until the right key is found was
also replaced with a single map lookup.
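A minimal sketch of the idea with hypothetical names (Go cannot key a map
by []byte directly, but a string conversion of the hash bytes gives a
direct lookup):

    package main

    import "fmt"

    // addrsByHash is keyed by string(hash) so a []byte hash can be looked
    // up directly, without copying it into a fixed-size array or scanning
    // every entry with bytes.Equal.
    var addrsByHash = map[string]string{
        string([]byte{0xde, 0xad, 0xbe, 0xef}): "example-address",
    }

    func main() {
        hash := []byte{0xde, 0xad, 0xbe, 0xef} // as returned by the lookup code
        if addr, ok := addrsByHash[string(hash)]; ok {
            fmt.Println("found:", addr)
        }
    }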
This change pads serialized (big endian) pubkey numbers to a length of
32 bytes. Previously, because serialized pubkey numbers are read
MSB-first, if a number could be serialized in fewer than 32 bytes, the
deserialized number would be incorrect.
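A hedged sketch of the padding (hypothetical helper, not the actual
wallet code):

    package main

    import (
        "fmt"
        "math/big"
    )

    // pad32 left-pads the big-endian bytes of n with zeros to a fixed
    // 32-byte width so the value always deserializes from the same
    // offsets. Assumes n fits in 32 bytes.
    func pad32(n *big.Int) []byte {
        buf := make([]byte, 32)
        b := n.Bytes() // big-endian, no leading zeros
        copy(buf[32-len(b):], b)
        return buf
    }

    func main() {
        fmt.Printf("%x\n", pad32(big.NewInt(0xabcd))) // 30 zero bytes, then abcd
    }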
The btcutil code was recently changed to provide a new Tx type which
provides hash caching (among other things). As a result, the notion of
obtaining transaction hashes via TxShas was deprecated as well. This
commit updates the tests and backend implementation code accordingly.
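A rough sketch of the caching idea (hypothetical type; btcutil.Tx is the
real one): the hash is computed lazily on first use and then reused,
instead of being recomputed by every caller of a TxShas-style helper.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // cachedTx is a stand-in for the wrapper type described above.
    type cachedTx struct {
        raw  []byte    // serialized transaction bytes
        hash *[32]byte // cached double-SHA256, nil until first request
    }

    func (t *cachedTx) Sha() [32]byte {
        if t.hash == nil {
            first := sha256.Sum256(t.raw)
            second := sha256.Sum256(first[:])
            t.hash = &second
        }
        return *t.hash
    }

    func main() {
        tx := &cachedTx{raw: []byte{0x01, 0x00, 0x00, 0x00}}
        fmt.Printf("%x\n", tx.Sha())
        fmt.Printf("%x\n", tx.Sha()) // second call reuses the cached hash
    }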
This adds to the initial rescan implementation, but switches it to
rescan based on a group of addresses, rather than just one. Due to
how expensive database lookups are during a rescan, wallets should
take advantage of this to rescan once for all needed addresses for all
accounts.
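A rough sketch of the approach using hypothetical types rather than the
real rescan code: collect every address of interest into a set first,
then walk the chain a single time.

    package main

    import "fmt"

    type block struct {
        height int
        addrs  []string // addresses touched by transactions in the block
    }

    // rescan walks the chain once and reports activity for any address in
    // the requested set, so multiple accounts share one expensive pass.
    func rescan(chain []block, want map[string]bool) {
        for _, b := range chain {
            for _, a := range b.addrs {
                if want[a] {
                    fmt.Printf("height %d: activity for %s\n", b.height, a)
                }
            }
        }
    }

    func main() {
        want := map[string]bool{"addrA": true, "addrB": true}
        chain := []block{{100, []string{"addrA"}}, {101, []string{"addrC"}}}
        rescan(chain, want)
    }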
This commit changes the various cases that were serializing transactions
into a buffer and taking the length to use the new faster SerializeSize
API. It also completes a TODO since the serialized size of a transaction
output is now available.
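The pattern being replaced, shown with a hypothetical message type (the
real code uses the btcwire types):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // msg is a stand-in for a wire message that has both methods.
    type msg struct {
        version uint32
        payload []byte
    }

    func (m *msg) Serialize(w io.Writer) error {
        if err := binary.Write(w, binary.LittleEndian, m.version); err != nil {
            return err
        }
        _, err := w.Write(m.payload)
        return err
    }

    // SerializeSize reports the encoded length without producing any bytes.
    func (m *msg) SerializeSize() int { return 4 + len(m.payload) }

    func main() {
        m := &msg{version: 1, payload: []byte("abc")}

        // Old pattern: serialize into a throwaway buffer just to measure it.
        var buf bytes.Buffer
        m.Serialize(&buf)
        fmt.Println("via buffer:", buf.Len())

        // New pattern: ask for the size directly.
        fmt.Println("via SerializeSize:", m.SerializeSize())
    }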
This commit adds tests for the new SerializeSize functions for variable
length integers and transactions (and indirectly transaction inputs and
outputs).
This commit adds a new function named SerializeSize to the public API for
MsgTx, TxOut, and TxIn which can be used to determine how many bytes the
serialized data would take without having to actually serialize it.
The following benchmark shows the difference between using the new
function to get the serialize size for a typical transaction and
serializing into a temporary buffer and taking the length of it:
Buffer: BenchmarkTxSerializeSizeBuffer    200000    7050 ns/op
New:    BenchmarkTxSerializeSizeNew    100000000      18 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
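For comparison, a self-contained sketch of how such a benchmark pair
might be written (hypothetical fakeTx type standing in for the real
transaction; place it in a _test.go file):

    package wiresketch

    import (
        "bytes"
        "testing"
    )

    // fakeTx mimics a message that can be serialized or sized directly.
    type fakeTx struct{ payload []byte }

    func (t *fakeTx) Serialize(buf *bytes.Buffer) { buf.Write(t.payload) }
    func (t *fakeTx) SerializeSize() int          { return len(t.payload) }

    // BenchmarkSizeViaBuffer measures serializing into a throwaway buffer
    // just to take its length.
    func BenchmarkSizeViaBuffer(b *testing.B) {
        tx := &fakeTx{payload: make([]byte, 250)}
        for i := 0; i < b.N; i++ {
            var buf bytes.Buffer
            tx.Serialize(&buf)
            _ = buf.Len()
        }
    }

    // BenchmarkSizeDirect measures asking for the size without serializing.
    func BenchmarkSizeDirect(b *testing.B) {
        tx := &fakeTx{payload: make([]byte, 250)}
        for i := 0; i < b.N; i++ {
            _ = tx.SerializeSize()
        }
    }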
Most variable length integers are smaller numbers, so this commit reverses
the order of the if checks in the writeVarInt function to assume smaller
numbers are more common.
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
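A hedged sketch of the reordering idea (simplified; not the actual
btcwire encoder): check the common small-value ranges first so most
writes take the first branch.

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // writeVarInt encodes a Bitcoin-style variable length integer, testing
    // the small ranges first since most values in practice are small.
    func writeVarInt(w io.Writer, val uint64) error {
        switch {
        case val < 0xfd:
            _, err := w.Write([]byte{byte(val)})
            return err
        case val <= 0xffff:
            if _, err := w.Write([]byte{0xfd}); err != nil {
                return err
            }
            return binary.Write(w, binary.LittleEndian, uint16(val))
        case val <= 0xffffffff:
            if _, err := w.Write([]byte{0xfe}); err != nil {
                return err
            }
            return binary.Write(w, binary.LittleEndian, uint32(val))
        default:
            if _, err := w.Write([]byte{0xff}); err != nil {
                return err
            }
            return binary.Write(w, binary.LittleEndian, val)
        }
    }

    func main() {
        var buf bytes.Buffer
        writeVarInt(&buf, 300)
        fmt.Printf("%x\n", buf.Bytes()) // fd2c01
    }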
Fix move: the amount needs to be converted from satoshi to BTC.
Fix settxfee: the amount needs to be converted from satoshi to BTC.
Fix an off-by-one in the optional arguments for move.
Fix gettxout's optional argument handling along with a typo in an error
message.
And lots of new tests.
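For reference, the conversion in question is a simple division by the
number of satoshi per bitcoin; a minimal sketch (hypothetical helper, not
the actual RPC code):

    package main

    import "fmt"

    // satoshiToBTC converts an integer satoshi amount into the floating
    // point BTC value expected by the JSON-RPC API.
    func satoshiToBTC(satoshi int64) float64 {
        return float64(satoshi) / 1e8
    }

    func main() {
        fmt.Println(satoshiToBTC(150000000)) // 1.5
    }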
Rather than logging all errors from ProcessTransaction as errors, check
whether the error is a TxRuleError, meaning the transaction was rejected
rather than something actually going wrong, and log it accordingly.
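A sketch of the distinction being drawn (hypothetical error type standing
in for the real TxRuleError): a type assertion decides whether to log at
the reject level or the error level.

    package main

    import (
        "errors"
        "fmt"
    )

    // TxRuleError stands in for a rule violation: the transaction was
    // rejected, but nothing in the node itself went wrong.
    type TxRuleError struct{ Reason string }

    func (e TxRuleError) Error() string { return e.Reason }

    func logProcessError(err error) {
        if ruleErr, ok := err.(TxRuleError); ok {
            fmt.Println("DEBUG: rejected transaction:", ruleErr.Reason)
            return
        }
        fmt.Println("ERROR: failed to process transaction:", err)
    }

    func main() {
        logProcessError(TxRuleError{Reason: "insufficient fee"})
        logProcessError(errors.New("database unavailable"))
    }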
Looking up transactions from the database is an expensive operation.
This commit modifies the NotifyNewTxListener code to simply iterate the
transactions in the block instead of looking them up from the db.
Currently the wallet code needs a spent flag which ultimately shouldn't be
required. For now, the spent data is simply created on the fly which is
still significantly faster than doing database transaction lookups.
Closes #24.
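A rough sketch of the idea (hypothetical types, not the real btcd code):
the block already carries its transactions, so notifications can iterate
them directly, and a fresh spent slice is built on the fly for the wallet
code instead of loading the transaction back from the database.

    package main

    import "fmt"

    type msgTx struct {
        txID       string
        numOutputs int
    }

    type msgBlock struct{ txs []*msgTx }

    func notifyNewTxs(b *msgBlock, notify func(tx *msgTx, spent []bool)) {
        for _, tx := range b.txs {
            spent := make([]bool, tx.numOutputs) // created on the fly, all false
            notify(tx, spent)
        }
    }

    func main() {
        b := &msgBlock{txs: []*msgTx{{"aa", 2}, {"bb", 1}}}
        notifyNewTxs(b, func(tx *msgTx, spent []bool) {
            fmt.Println("notify:", tx.txID, "spent:", spent)
        })
    }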
Test stop, signrawtransaction, setaccount, sendtoaddress, and a few
others.
Fix off by one error in the optional arguments for sendtoaddress.
Rename occurrences of Minconf to MinConf for consistency.
This change unbreaks the case where an unknown command is sent to the
RPC server. Instead of replying back with a nil JSON id, if the
initial unmarshal was successful (and thus, the message was valid
JSON-RPC), the unmarshaled id will be used in the error reply.
This changes the behavior of ParseMarshaledCmd to always return a
non-nil Cmd whenever unmarshaling a valid JSON-RPC message succeeds.
This is needed for RPC server code to reply back with detailed errors
to clients using the Method and Id methods of the Cmd interface.
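A hedged sketch of the reply path (simplified; not the actual btcd RPC
server): parse whatever can be parsed, and if the method is unknown, echo
the request's own id back in the error response instead of null.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type request struct {
        ID     interface{}     `json:"id"`
        Method string          `json:"method"`
        Params json.RawMessage `json:"params"`
    }

    type errReply struct {
        Result interface{} `json:"result"`
        Error  string      `json:"error"`
        ID     interface{} `json:"id"`
    }

    func handle(raw []byte) []byte {
        var req request
        if err := json.Unmarshal(raw, &req); err != nil {
            // Not valid JSON-RPC at all: nothing better than a nil id.
            out, _ := json.Marshal(errReply{Error: "parse error", ID: nil})
            return out
        }
        // Valid JSON-RPC but unknown method: reply with the request's own id.
        out, _ := json.Marshal(errReply{Error: "method not found: " + req.Method, ID: req.ID})
        return out
    }

    func main() {
        fmt.Println(string(handle([]byte(`{"id":7,"method":"bogus","params":[]}`))))
    }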
We have a channel for queries and commands in server, where we pass in
the args and a channel to reply on; let rpcserver use these interfaces
to provide the requisite information.
So far not all of the information is 100% correct: the syncpeer
information needs to be fetched from blockmanager, the subversion isn't
recorded, and the number of bytes sent and received needs to be obtained
from btcwire. The rest should be correct.
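A rough sketch of the query pattern described above (hypothetical names):
the RPC code sends a request carrying its own reply channel into the
server's query channel and waits for the answer.

    package main

    import "fmt"

    // getConnCountMsg asks the server goroutine for its connection count
    // and carries the channel the answer should be sent back on.
    type getConnCountMsg struct {
        reply chan int
    }

    func serverLoop(queries chan interface{}, done chan struct{}) {
        connCount := 8 // state owned by the server goroutine
        for {
            select {
            case q := <-queries:
                switch msg := q.(type) {
                case getConnCountMsg:
                    msg.reply <- connCount
                }
            case <-done:
                return
            }
        }
    }

    func main() {
        queries := make(chan interface{})
        done := make(chan struct{})
        go serverLoop(queries, done)

        // The RPC server side: send the query and wait on its reply channel.
        req := getConnCountMsg{reply: make(chan int)}
        queries <- req
        fmt.Println("connection count:", <-req.reply)
        close(done)
    }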
This commit updates btcd to work with the new btcchain APIs which now
accept btcutil.Tx instead of raw btcwire.MsgTx. It also modifies the
transaction memory pool to store btcutil.Tx.
This is part of the ongoing transaction hash optimization effort noted in
conformal/btcd#25.
This change updates the doc.go documentation file with the correct use
of the new (since 8eead5217d) API used
for passing additional options when creating new script engines.
Spotted by davec@
This commit optimizes InsertBlock slightly by using the cached transaction
hashes instead of recomputing them. Also, while here, use the
ShaHash.IsEqual function to do comparisons for consistency.
This is ongoing work towards conformal/btcd#25.
ok @drahn.
This commit provides a new --cpuprofile flag that can be used to specify a
file path to write CPU profile data into. The resulting profile can then be
consumed by the 'go tool pprof' command.
Since the main SIGINT handler is running as a goroutine, the main
goroutine must be kept active long enough for it to finish or it will be
nuked when the main goroutine exits. This commit makes that happen by
slightly modifying how it waits for shutdown.
Rather than having the main goroutine only wait for the server to
shutdown, there is now a shutdown channel that is used to signal the main
goroutine to shutdown for all cases such as a graceful shutdown, a
scheduled shutdown, an RPC stop, or a SIGINT.
While here, also add a few prints to indicate a SIGINT was received and
the shutdown progress.
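A condensed sketch of both pieces (hedged; the flag name matches, the
rest is simplified): writing a CPU profile when --cpuprofile is given,
and a shutdown channel the main goroutine blocks on so handler goroutines
are not cut short.

    package main

    import (
        "flag"
        "fmt"
        "os"
        "os/signal"
        "runtime/pprof"
    )

    var cpuprofile = flag.String("cpuprofile", "", "write CPU profile to file")

    func main() {
        flag.Parse()

        if *cpuprofile != "" {
            f, err := os.Create(*cpuprofile)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            pprof.StartCPUProfile(f)
            defer pprof.StopCPUProfile()
        }

        // All exit paths (graceful, scheduled, RPC stop, SIGINT) signal
        // this channel; main blocks here until one of them fires.
        shutdown := make(chan struct{})

        sig := make(chan os.Signal, 1)
        signal.Notify(sig, os.Interrupt)
        go func() {
            <-sig
            fmt.Println("Received SIGINT, shutting down...")
            close(shutdown)
        }()

        <-shutdown
        fmt.Println("Shutdown complete")
    }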
Profiling showed the duplicate transaction input check was taking around
6% of the total CheckTransactionSanity processing time. This was largely
due to using fmt.Sprintf to generate the map key.
This commit modifies the check instead to use the actual output as a map
key.
The following benchmark results show the difference:
Before: BenchmarkOldDuplicatInputCheck 100000 21787 ns/op
After: BenchmarkNewDuplicatInputCheck 2000000 937 ns/op
Closes #2
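A hedged sketch of the change with simplified types: keying the map by
the outpoint value itself instead of a fmt.Sprintf string.

    package main

    import "fmt"

    // outPoint is a stand-in for the wire outpoint: a comparable struct,
    // so it can be used directly as a map key with no string formatting.
    type outPoint struct {
        hash  [32]byte
        index uint32
    }

    type txIn struct{ previousOutPoint outPoint }

    // hasDuplicateInputs reports whether any input references the same
    // previous outpoint twice.
    func hasDuplicateInputs(ins []txIn) bool {
        seen := make(map[outPoint]struct{}, len(ins))
        for _, in := range ins {
            if _, ok := seen[in.previousOutPoint]; ok {
                return true
            }
            seen[in.previousOutPoint] = struct{}{}
        }
        return false
    }

    func main() {
        a := txIn{previousOutPoint: outPoint{index: 0}}
        b := txIn{previousOutPoint: outPoint{index: 1}}
        fmt.Println(hasDuplicateInputs([]txIn{a, b, a})) // true
    }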
Profiling showed the MRU inventory handling was taking 5% of the total
block handling time. Upon inspection this is because the original
implementation was extremely inefficient using iteration to find the
oldest node for eviction.
This commit reworks it to use a map and list in order to achieve close to
O(1) performance for lookups, insertion, and eviction.
The following benchmark results show the difference:
Before: BenchmarkMruInventoryList 100 1069168 ns/op
After: BenchmarkMruInventoryList 10000000 152 ns/op
Closes #21
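A compact sketch of the structure (hypothetical; the real code tracks
inventory vectors): a map for O(1) lookups pointing into a container/list
that keeps recency order, with the oldest element evicted from the back.

    package main

    import (
        "container/list"
        "fmt"
    )

    // mruCache keeps at most limit items with close to O(1) lookup,
    // insertion, and eviction of the oldest entry.
    type mruCache struct {
        limit int
        items map[string]*list.Element // value -> node in the recency list
        order *list.List               // front = newest, back = oldest
    }

    func newMruCache(limit int) *mruCache {
        return &mruCache{limit: limit, items: make(map[string]*list.Element), order: list.New()}
    }

    func (c *mruCache) Exists(v string) bool {
        _, ok := c.items[v]
        return ok
    }

    func (c *mruCache) Add(v string) {
        if e, ok := c.items[v]; ok {
            c.order.MoveToFront(e) // already known, just refresh its position
            return
        }
        if c.order.Len() >= c.limit {
            oldest := c.order.Back()
            c.order.Remove(oldest)
            delete(c.items, oldest.Value.(string))
        }
        c.items[v] = c.order.PushFront(v)
    }

    func main() {
        c := newMruCache(2)
        c.Add("a")
        c.Add("b")
        c.Add("c") // evicts "a"
        fmt.Println(c.Exists("a"), c.Exists("c")) // false true
    }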
ValidateTransactionScripts was recently changed to accept script flags
which are passed through to the script engine in order to control its
validation behavior. This commit modifies the transaction memory pool
script validation code for this change and additionally adds the new
flag to perform canonical signature checking.
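A small sketch of the flag-passing pattern (hypothetical flag names; the
real engine defines its own bit flags): validation behavior is selected
by OR-ing flags together and handing them to the engine.

    package main

    import "fmt"

    // scriptFlags is a bit mask selecting optional validation behavior.
    type scriptFlags uint32

    const (
        scriptBip16         scriptFlags = 1 << iota // evaluate pay-to-script-hash scripts
        scriptCanonicalSigs                         // require canonical signature encodings
    )

    func validateScripts(flags scriptFlags) {
        if flags&scriptCanonicalSigs != 0 {
            fmt.Println("engine will reject non-canonical signatures")
        }
        if flags&scriptBip16 != 0 {
            fmt.Println("engine will evaluate pay-to-script-hash scripts")
        }
    }

    func main() {
        validateScripts(scriptBip16 | scriptCanonicalSigs)
    }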