- Keep comments to 80 cols for consistency with the rest of the code base
- Made Verify a method on Signature instead of PublicKey, since one
verifies a signature with a public key as opposed to the other way
around (see the sketch after this list)
- Return new signature from Sign function directly rather than creating a
local temporary variable
- Modify a couple of comments as recommended by @owainga
- Update sample usage in doc.go for both signing messages and verifying
signatures
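A minimal sketch of the resulting usage, roughly along the lines of the
doc.go sample (the import path, sample key, and exact helper names are
assumptions here, not copied from the code):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"

        "github.com/conformal/btcec"
    )

    func main() {
        // Decode a hex-encoded private key (assumed sample value).
        pkBytes, err := hex.DecodeString("22a47fa09a223f2aa079edf85a7c2d4f" +
            "8720ee63e502ee2869afab7de234b80c")
        if err != nil {
            panic(err)
        }
        privKey, pubKey := btcec.PrivKeyFromBytes(btcec.S256(), pkBytes)

        // Sign the message hash; the new signature is returned directly.
        hash := sha256.Sum256([]byte("test message"))
        sig, err := privKey.Sign(hash[:])
        if err != nil {
            panic(err)
        }

        // Verify is a method on Signature and takes the public key, since
        // a signature is verified with a public key rather than the other
        // way around.
        fmt.Println("verified:", sig.Verify(hash[:], pubKey))
    }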
ok @owainga
This change removes the internal pad function in favor of a more optimized
paddedAppend function. Unlike pad, which would always allocate a new
slice of the desired size and copy the bytes into it, paddedAppend
only appends the leading padding when necessary, and uses the builtin
append to copy the remaining source bytes. pad was also used in
combination with another call to the builtin copy func to copy into a
zeroed byte slice. As the slice is now created using make with an
initial length of zero, this copy can also be removed.
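The replacement looks roughly like this (a sketch of the approach, not a
verbatim copy of the source):

    // paddedAppend appends src to dst, prepending zero bytes only when
    // src is shorter than size.  Unlike the old pad helper, it never
    // allocates an intermediate slice just to copy into it.
    func paddedAppend(size uint, dst, src []byte) []byte {
        for i := 0; i < int(size)-len(src); i++ {
            dst = append(dst, 0)
        }
        return append(dst, src...)
    }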
As confirmed by poking the bytes with the unsafe package, gc does not
zero array elements between the len and cap when allocating slices
with make(). In combination with the paddedAppend func, this results
in only a single copy of each byte, with no unnecessary zeroing, when
creating the serialized pubkeys. This has not been tested with other
Go compilers (namely, gccgo and llgo), but the new behavior is still
functionally correct regardless of compiler optimizations.
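For example, a 33-byte compressed pubkey serialization can be built with a
single copy of each byte (a fragment for illustration; the format byte and
field names are assumptions):

    b := make([]byte, 0, 33)  // len 0, cap 33: nothing past len is pre-zeroed
    b = append(b, format)     // 0x02 or 0x03 depending on the Y coordinate
    b = paddedAppend(32, b, pub.X.Bytes())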
The TestPad function has been removed as the pad func it tested has
likewise been removed.
ok @davecgh
Ordinarily, getwork will return an error if btcd is not connected to any
other peers. This commit relaxes that requirement when running in
regression test mode since it is useful for development purposes.
While here, also improve the check that causes getwork to return an error
when the chain is not current so that it is skipped when the best chain
height is zero, since the code never believes it is current at height 0.
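A hypothetical sketch of the relaxed checks (the identifiers here, such as
cfg.RegressionTest, ConnectedCount, and IsCurrent, are assumptions rather
than the actual btcd code):

    // Skip the connected-peer requirement in regression test mode.
    if !cfg.RegressionTest && s.server.ConnectedCount() == 0 {
        return nil, errors.New("btcd is not connected to the network")
    }

    // Only report "not current" once the chain has advanced past height
    // 0, since the code never believes it is current at height 0.
    if bestHeight != 0 && !s.server.blockManager.IsCurrent() {
        return nil, errors.New("btcd is not yet current, try again later")
    }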
This commit adds a full suite of tests for the new reject message added in
protocol version 70002 to bring the overall test coverage of btcwire back
up to 100%.
Closes #9.
This commit adds the new reject protocol message added to recent versions
of the reference implementation. This message is intended to be used in
response to messages from a remote peer when it is rejected for some
reason such as blocks being rejected due to not conforming to the chain
rules, transactions double spending inputs, and version messages sent
after one has already been sent.
This is work toward issue #9.
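For illustration, the message shape is roughly the following (the reject
codes mirror the reference implementation; the field and constant names are
not guaranteed to match the btcwire source exactly):

    type RejectCode uint8

    const (
        RejectMalformed       RejectCode = 0x01
        RejectInvalid         RejectCode = 0x10
        RejectObsolete        RejectCode = 0x11
        RejectDuplicate       RejectCode = 0x12
        RejectNonstandard     RejectCode = 0x40
        RejectDust            RejectCode = 0x41
        RejectInsufficientFee RejectCode = 0x42
        RejectCheckpoint      RejectCode = 0x43
    )

    type MsgReject struct {
        Cmd    string     // command of the rejected message, e.g. "block" or "tx"
        Code   RejectCode // code indicating why it was rejected
        Reason string     // human-readable reason string
        Hash   ShaHash    // hash of the rejected block or tx, when applicable
    }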
- Correct MsgFilterLoad max payload
- Enforce max flag bytes per merkle block
- Improve and finish tests to include testing all error paths
- Add fast paths for BloomUpdateType
- Convert all byte fields to use read/writeVarBytes
- Style and consistency updates
- README.md and doc.go updates
Closes #12.
- Group the new read/writeVarBytes functions together to be consistent
with the existing code
- Modify the comments on the new read/writeVarBytes to be a little more
descriptive and consistent with existing code
- Use "test payload" for field name in the tests for the
read/writeVarBytes functions which is more accurate
- Remove reserved param from NewAlert since there is no point in having
the caller deal with a reserved param
- Various comment tweaks for clarity and consistency
- Use camel case for function params for consistency
- Move the NewAlert and NewAlertFromPayload functions after the receiver
definitions for code layout consistency
Closes #11.
Along the same lines as the previous commit, the RPCs that return
serialized data structures should use the max protocol version btcd
supports as opposed to the maximum protocol version btcwire supports.
The getinfo RPC should return the max protocol version btcd supports as
opposed to the maximum protocol version btcwire supports. Currently they
are both the same value, so there is no issue. However, they will not
always be the same.
* Introduced common methods readVarBytes, writeVarBytes (see the sketch
after this list).
* Added type Alert which knows how to deserialize
the serialized payload and also serialize itself back.
* Updated MsgAlert BtcEncode/BtcDecode methods to handle the
new Alert.
* Sane limits are placed on variable length fields like SetCancel
and SetSubVer.
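A sketch of the read side (the signature mirrors the description above and
may differ from the source in detail):

    func readVarBytes(r io.Reader, pver uint32, maxAllowed uint32,
        fieldName string) ([]byte, error) {

        count, err := readVarInt(r, pver)
        if err != nil {
            return nil, err
        }

        // Enforce a sane limit so a malicious peer can't force a huge
        // allocation.
        if count > uint64(maxAllowed) {
            str := fmt.Sprintf("%s is larger than the max allowed size "+
                "[count %d, max %d]", fieldName, count, maxAllowed)
            return nil, messageError("readVarBytes", str)
        }

        b := make([]byte, count)
        if _, err := io.ReadFull(r, b); err != nil {
            return nil, err
        }
        return b, nil
    }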
This commit updates the block manager's local chain state when a block is
processed by submitting it directly to the block manager, as opposed to
only when it comes from the network.
Also, it modifies the submitblock RPC to use the concurrency-safe block
manager process block instead of the unsafe btcchain version.
The combination of these two fixes ensures the internal block manager chain
state is properly synced with the actual btcchain state regardless of how
blocks are added.
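A hypothetical sketch of the submitblock change (the method name and
signature are assumptions):

    // Hand the block to the block manager, which serializes access to
    // btcchain and keeps its own chain state in sync, instead of calling
    // btcchain directly from the RPC goroutine.
    if err := s.server.blockManager.ProcessBlock(block); err != nil {
        return nil, fmt.Errorf("block rejected: %v", err)
    }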
This change fixes rescan to include transactions that pay to the
pubkey for a rescanned pubkey hash address. This behavior was lost
when the rescan was optimized for specific types of the
btcutil.Address interface.
ok @davecgh
This commit correctly sets the error in the marshalled reply if it is
already a *btcjson.Error. Previously it would only set the error if it
was not of that type which led to some RPC results showing no error when
they actually had one.
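A sketch of the corrected handling (surrounding names such as reply are
assumed):

    if jsonErr, ok := err.(*btcjson.Error); ok {
        // Preserve an error that is already a *btcjson.Error rather than
        // discarding it.
        reply.Error = jsonErr
    } else if err != nil {
        reply.Error = &btcjson.Error{
            Code:    btcjson.ErrInternal.Code,
            Message: err.Error(),
        }
    }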
In Discover, the response was lowercased for comparison. However,
this caused a 404 Not Found when fetching the URL provided by
the location header if the URL contained uppercase characters.
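A sketch of the fix (a hypothetical helper; variable and function names
are illustrative only):

    // locationURL matches the location header case-insensitively but
    // returns the URL from the original, unmodified line so its case is
    // preserved for the follow-up fetch.
    func locationURL(response string) string {
        for _, line := range strings.Split(response, "\r\n") {
            if strings.HasPrefix(strings.ToLower(line), "location:") {
                return strings.TrimSpace(line[len("location:"):])
            }
        }
        return ""
    }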
ok @owainga
In practice, the races caused by not protecting these quite simply didn't
matter; they couldn't actually cause any damage whatsoever. However, I
am sick of hearing about these essentially false positives whenever
someone runs the race detector (yes, I know the race detector has no
false positives, but this was effectively harmless).
verified to shut the detector up by dhill.
If we switch the Knuth shuffle to the version that swaps the element
with an element between it and the end of the array, then once we have
gotten to the number of elements we need they won't change later in the
algorithm. Terminating here means that we do at most 23% of the length of
the array worth of random swaps.
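A sketch of the truncated shuffle (the slice type and names are assumed):

    // Fisher-Yates variant that swaps element i with a random element in
    // [i, len(addrs)).  After numNeeded iterations the prefix is already
    // a uniform random sample, so the loop can stop early.
    func randomSample(addrs []*KnownAddress, numNeeded int) []*KnownAddress {
        for i := 0; i < numNeeded; i++ {
            j := i + rand.Intn(len(addrs)-i)
            addrs[i], addrs[j] = addrs[j], addrs[i]
        }
        return addrs[:numNeeded]
    }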
We make ka.na immutable in the address manager. Whenever we would update
the structure, we replace it with a new copy. This beats making a copy of
all addresses once per getaddr command (at most just over 23000 addresses
would be copied, compared to at most 2000 copies on a new getaddr that has
all addresses we know with newer dates).
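The copy-on-update pattern looks roughly like this (a sketch; only a
Timestamp update is shown):

    // Never mutate the shared ka.na in place; swap in an updated copy so
    // readers that already hold the old pointer see a consistent value.
    naCopy := *ka.na
    naCopy.Timestamp = time.Now()
    ka.na = &naCopy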
On unknown inventory types, handleGetDataMsg would loop forever.
After fixing that, if a getdata request only had unknown inventory
types, it would block forever.
ok @davecgh
The code was updated some time ago to automatically handle the transaction
count in the block header without the additional field. This comment was
outdated.
Copying the RIPEMD160 after SHA256 hash result into a new stack array
to be used as a map lookup key can be quite expensive, and this should
be avoided if possible on intensive tasks such as rescans. This
change takes advantage of the new Hash160 methods of the
AddressPubKeyHash and AddressScriptHash types to use the address's
underlying hash array directly, rather than creating a copy from the
ScriptAddress result.
Unfortunately, for AddressPubKey, ScriptAddress may return either a
byte slice of len 33 or 65 depending on whether the pubkey is
compressed or not, so no such straightforward optimization is
possible.
As a result of this change, I have seen rescans perform roughly 3.5x
faster than before.
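A sketch of the type switch this enables (the map names are assumptions;
the btcutil Hash160 methods return a pointer to the underlying array):

    switch a := addr.(type) {
    case *btcutil.AddressPubKeyHash:
        // Dereference the *[ripemd160.Size]byte so the array itself is
        // the map key; no copy of the ScriptAddress result is made.
        if _, ok := pkhLookup[*a.Hash160()]; ok {
            // matched a watched pubkey hash address
        }
    case *btcutil.AddressScriptHash:
        if _, ok := shLookup[*a.Hash160()]; ok {
            // matched a watched script hash address
        }
    case *btcutil.AddressPubKey:
        // ScriptAddress may return 33 or 65 bytes here, so a copy into a
        // fixed-size key cannot be avoided.
    }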
The new function AddUserAgent adds the user agent to the stack
and formats it as per BIP 0014,
e.g. "/btcwire:0.1.4/myclient:1.2.3(optional; comments)/".
The validation on UserAgent has been moved to a new function,
validateUserAgent.
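For example (a sketch; msg is an existing *btcwire.MsgVersion):

    // Add a client's user agent on top of the base btcwire user agent.
    err := msg.AddUserAgent("myclient", "1.2.3", "optional", "comments")
    if err != nil {
        // the combined user agent exceeded the allowed length
    }
    // msg.UserAgent is now "/btcwire:0.1.4/myclient:1.2.3(optional; comments)/"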
The websocket extension command to register for notifications when a new
transaction has been accepted into the memory pool, and the resulting
notifications, have been renamed. This commit catches up to the change.